Core i9 13900K Linux Distros

Intel Core i9-13900K testing with an ASUS PRIME Z790-P WIFI (0602 BIOS) and AMD Radeon RX 6800 XT 16GB on Clear Linux OS 37600 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2211066-NE-DISTROS7610

This result file includes tests from the following categories:

AV1 2 Tests
C++ Boost Tests 4 Tests
Timed Code Compilation 4 Tests
C/C++ Compiler Tests 8 Tests
CPU Massive 23 Tests
Creator Workloads 19 Tests
Cryptocurrency Benchmarks, CPU Mining Tests 3 Tests
Cryptography 4 Tests
Database Test Suite 4 Tests
Desktop Graphics 2 Tests
Encoding 6 Tests
Finance 2 Tests
Fortran Tests 4 Tests
Game Development 3 Tests
HPC - High Performance Computing 17 Tests
Imaging 4 Tests
Java 3 Tests
Common Kernel Benchmarks 4 Tests
Machine Learning 6 Tests
Molecular Dynamics 4 Tests
MPI Benchmarks 3 Tests
Multi-Core 25 Tests
Node.js + NPM Tests 3 Tests
NVIDIA GPU Compute 6 Tests
Intel oneAPI 5 Tests
OpenMPI Tests 8 Tests
Programmer / Developer System Benchmarks 8 Tests
Python 3 Tests
Renderers 5 Tests
Scientific Computing 4 Tests
Software Defined Radio 2 Tests
Server 9 Tests
Server CPU Tests 19 Tests
Single-Threaded 6 Tests
Video Encoding 5 Tests
Common Workstation Benchmarks 2 Tests

Test Runs

  Result Identifier - Date Run - Test Duration
  Ubuntu 22.10 - November 02 2022 - 1 Day, 1 Hour, 49 Minutes
  Clear Linux - November 05 2022 - 14 Hours, 3 Minutes
  Average Test Duration - 19 Hours, 56 Minutes

Core i9 13900K Linux Distros - System Details

Shared Hardware
  Processor: Intel Core i9-13900K (24 Cores / 32 Threads)
  Motherboard: ASUS PRIME Z790-P WIFI (0602 BIOS)
  Chipset: Intel Device 7a27
  Memory: 32GB
  Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
  Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
  Audio: Realtek ALC897
  Monitor: ASUS VP28U
  Network: Realtek RTL8125 2.5GbE + Intel Device 7a70
  File-System: ext4
  Screen Resolution: 3840x2160

Ubuntu 22.10 Software
  OS: Ubuntu 22.10
  Kernel: 5.19.0-23-generic (x86_64)
  Desktop: GNOME Shell 43.0
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 22.2.1 (LLVM 15.0.2 DRM 3.47)
  Vulkan: 1.3.224
  Compiler: GCC 12.2.0

Clear Linux Software
  OS: Clear Linux OS 37600
  Kernel: 6.0.7-1207.native (x86_64)
  Display Server: X Server 1.21.1.4
  OpenGL: 4.6 Mesa 22.3.0-devel (LLVM 14.0.6 DRM 3.48)
  Vulkan: 1.3.230
  Compiler: GCC 12.2.1 20221031 releases/gcc-12.2.0-182-gfaac1fccd7 + Clang 14.0.6 + LLVM 14.0.6

Kernel Details
  - Ubuntu 22.10: Transparent Huge Pages: madvise
  - Clear Linux: Transparent Huge Pages: always

Compiler Details
  - Ubuntu 22.10: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Clear Linux: --build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-gnu-indirect-function --enable-host-shared --enable-languages=c,c++,fortran,go,jit --enable-ld=default --enable-libstdcxx-pch --enable-linux-futex --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=x86-64-v3 --with-gcc-major-version-only --with-glibc-version=2.35 --with-gnu-ld --with-isl --with-pic --with-ppl=yes --with-tune=skylake-avx512 --with-zstd

Processor Details
  - Ubuntu 22.10: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x10e - Thermald 2.5.1
  - Clear Linux: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x10e - Thermald 2.5.1

Graphics Details
  - BAR1 / Visible vRAM Size: 16368 MB - vBIOS Version: 113-D4120500-101

Java Details
  - Ubuntu 22.10: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu1)
  - Clear Linux: OpenJDK Runtime Environment (build 18.0.2-internal+0-adhoc.mockbuild.corretto-18-18.0.2.9.1)

Python Details
  - Ubuntu 22.10: Python 3.10.7
  - Clear Linux: Python 3.11.0

Security Details
  - itlb_multihit: Not affected
  - l1tf: Not affected
  - mds: Not affected
  - meltdown: Not affected
  - mmio_stale_data: Not affected
  - retbleed: Not affected
  - spec_store_bypass: Mitigation of SSB disabled via prctl
  - spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  - spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence
  - srbds: Not affected
  - tsx_async_abort: Not affected

Environment Details
  - Clear Linux: FFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,--enable-new-dtags" CXXFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -mrelax-cmpxchg-loop -fvisibility-inlines-hidden -Wl,--enable-new-dtags" MESA_GLSL_CACHE_DISABLE=0 FCFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,-sort-common -Wl,--enable-new-dtags" CFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -mrelax-cmpxchg-loop" THEANO_FLAGS="floatX=float32,openmp=true,gcc.cxxflags="-ftree-vectorize -mavx""

[Chart: Ubuntu 22.10 vs. Clear Linux Comparison (Phoronix Test Suite) - per-test percentage leads from the baseline up to roughly +236.8%, spanning results from ONNX Runtime, Stress-NG, PyPerformance, Renaissance, oneDNN, FFmpeg, Cpuminer-Opt, Zstd Compression, and the other test suites in this comparison.]

[Table: detailed per-test result values for Ubuntu 22.10 and Clear Linux across every benchmark in this comparison (NWChem, Blender, OpenVKL, Timed Linux Kernel Compilation, TensorFlow, memtier_benchmark, ONNX Runtime, HPCG, miniBUDE, JPEG XL, OpenRadioss, OSPRay Studio, OpenSSL, FFmpeg, Apache Spark, ClickHouse, Renaissance, HammerDB - MariaDB, and more); the individual results are presented per test below.]

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2 - Input: C240 Buckyball (Seconds, Fewer Is Better)
  Clear Linux: 3905.5
  Ubuntu 22.10: 4268.4
  Compiler notes: -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lz
  1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lcomex -lm -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

Blender

Blender 3.3 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
  Clear Linux: 591.51 (SE +/- 0.53, N = 3; Min: 590.82 / Avg: 591.51 / Max: 592.55)
  Ubuntu 22.10: 576.35 (SE +/- 0.42, N = 3; Min: 575.84 / Avg: 576.35 / Max: 577.19)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.0 - Benchmark: vklBenchmark ISPC (Items / Sec, More Is Better)
  Clear Linux: 162 (SE +/- 1.61, N = 12; Min: 151 / Avg: 161.75 / Max: 171; MIN: 10 / MAX: 1893)
  Ubuntu 22.10: 161 (SE +/- 0.67, N = 3; Min: 160 / Avg: 161.33 / Max: 162; MIN: 11 / MAX: 1931)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build covering all possible kernel modules. Learn more via the OpenBenchmarking.org test page.
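
For reference, a minimal Python sketch of what this style of timed build boils down to; it assumes the current working directory is a Linux kernel source tree with a working toolchain, which is an assumption and not part of this result file:

    # Time a defconfig kernel build; the allmodconfig variant only swaps the
    # first make target. Assumes a kernel source tree and toolchain are present.
    import os
    import subprocess
    import time

    subprocess.run(["make", "defconfig"], check=True)
    start = time.perf_counter()
    subprocess.run(["make", f"-j{os.cpu_count()}"], check=True)
    print(f"Build took {time.perf_counter() - start:.1f} seconds")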

Timed Linux Kernel Compilation 5.18 - Build: allmodconfig (Seconds, Fewer Is Better)
  Clear Linux: 506.57 (SE +/- 0.35, N = 3; Min: 505.97 / Avg: 506.57 / Max: 507.18)
  Ubuntu 22.10: 454.88 (SE +/- 0.43, N = 3; Min: 454.21 / Avg: 454.88 / Max: 455.67)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
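
As a rough illustration only (the actual profile drives tf_cnn_benchmarks.py), here is a minimal Python sketch of measuring CPU inference throughput in images/sec with a Keras ResNet-50; the batch size mirrors the 256 used below, while everything else is an assumption:

    # Minimal throughput sketch, not the pts/tensorflow test itself.
    # Assumes TensorFlow and NumPy are installed; weights are random since
    # only images/sec matters here.
    import time
    import numpy as np
    import tensorflow as tf

    BATCH = 256
    model = tf.keras.applications.ResNet50(weights=None)
    images = np.random.rand(BATCH, 224, 224, 3).astype("float32")

    model.predict(images, verbose=0)            # warm-up pass
    runs = 5
    start = time.perf_counter()
    for _ in range(runs):
        model.predict(images, verbose=0)
    elapsed = time.perf_counter() - start
    print(f"{BATCH * runs / elapsed:.2f} images/sec")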

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, More Is Better)
  Ubuntu 22.10: 37.06 (SE +/- 0.02, N = 3; Min: 37.03 / Avg: 37.06 / Max: 37.09)

Device: CPU - Batch Size: 256 - Model: ResNet-50

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
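
For orientation, a minimal Python sketch of driving memtier_benchmark against a local Redis server with the client count and set-to-get ratio used in this comparison; the server address, test time, and the exact options the pts profile passes are assumptions:

    # Launch memtier_benchmark against a Redis server already running locally.
    # --clients is per thread in memtier_benchmark, so this is only illustrative
    # of the "Clients: 50 / Set To Get Ratio: 1:1" configuration reported below.
    import subprocess

    subprocess.run([
        "memtier_benchmark",
        "--server=127.0.0.1", "--port=6379",   # placeholder Redis endpoint
        "--protocol=redis",
        "--clients=50",
        "--ratio=1:1",                          # Set:Get ratio
        "--test-time=60",
    ], check=True)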

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better)
  Clear Linux: 3639054.06 (SE +/- 46119.45, N = 15; Min: 3438064.74 / Avg: 3639054.06 / Max: 4030627.51)
  Ubuntu 22.10: 3124943.72 (SE +/- 65972.61, N = 15; Min: 2798750.1 / Avg: 3124943.72 / Max: 3426413.94)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
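
As a minimal sketch of the kind of CPU inference loop being measured (the model path here is a placeholder rather than one of the ONNX Zoo models used in this result file, and a float32 input is assumed):

    # Measure inferences per minute with onnxruntime on the CPU execution provider.
    # "model.onnx" is a placeholder; models with integer inputs would need a
    # different dtype for the synthetic data.
    import time
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # dynamic dims -> 1
    data = np.random.rand(*shape).astype(np.float32)

    sess.run(None, {inp.name: data})            # warm-up
    n = 100
    start = time.perf_counter()
    for _ in range(n):
        sess.run(None, {inp.name: data})
    elapsed = time.perf_counter() - start
    print(f"{n / elapsed * 60:.1f} inferences per minute")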

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  Clear Linux: 2014 (SE +/- 243.47, N = 12; Min: 615.5 / Avg: 2014.04 / Max: 2499)
  Ubuntu 22.10: 598 (SE +/- 0.17, N = 3; Min: 597.5 / Avg: 597.67 / Max: 598)
  Compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -ffunction-sections -fdata-sections -march=native -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  Clear Linux: 11942 (SE +/- 93.02, N = 12; Min: 10926 / Avg: 11941.92 / Max: 12096)
  Ubuntu 22.10: 10578 (SE +/- 19.87, N = 3; Min: 10556.5 / Avg: 10578.33 / Max: 10618)
  Compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -ffunction-sections -fdata-sections -march=native -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, More Is Better)
  Ubuntu 22.10: 109.34 (SE +/- 0.36, N = 3; Min: 108.63 / Avg: 109.34 / Max: 109.84)

Device: CPU - Batch Size: 512 - Model: GoogLeNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Labs focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
  Clear Linux: 8.61547 (SE +/- 0.02143, N = 3; Min: 8.58 / Avg: 8.62 / Max: 8.65)
  Ubuntu 22.10: 10.15490 (SE +/- 0.01984, N = 3; Min: 10.12 / Avg: 10.15 / Max: 10.19)
  Compiler notes: -lmpi_cxx
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (Billion Interactions/s, More Is Better)
  Clear Linux: 22.07 (SE +/- 0.12, N = 3; Min: 21.91 / Avg: 22.07 / Max: 22.32)
  Ubuntu 22.10: 22.83 (SE +/- 0.05, N = 3; Min: 22.73 / Avg: 22.82 / Max: 22.91)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (GFInst/s, More Is Better)
  Clear Linux: 551.79 (SE +/- 3.10, N = 3; Min: 547.85 / Avg: 551.79 / Max: 557.92)
  Ubuntu 22.10: 570.62 (SE +/- 1.24, N = 3; Min: 568.35 / Avg: 570.62 / Max: 572.63)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100 (MP/s, More Is Better)
  Clear Linux: 1.08 (SE +/- 0.01, N = 3; Min: 1.07 / Avg: 1.08 / Max: 1.09)
  Ubuntu 22.10: 1.05 (SE +/- 0.00, N = 3; Min: 1.04 / Avg: 1.05 / Max: 1.05)
  Compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -fno-rtti -funwind-tables -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 (MP/s, More Is Better)
  Clear Linux: 1.16 (SE +/- 0.00, N = 3; Min: 1.16 / Avg: 1.16 / Max: 1.17)
  Ubuntu 22.10: 1.06 (SE +/- 0.00, N = 3; Min: 1.06 / Avg: 1.06 / Max: 1.07)
  Compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -fno-rtti -funwind-tables -O2 -fPIE -pie -lm -latomic

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 10:1 (Ops/sec, More Is Better)
  Clear Linux: 3223992.91 (SE +/- 27586.98, N = 15; Min: 3092208.2 / Avg: 3223992.91 / Max: 3523307)
  Ubuntu 22.10: 3127334.75 (SE +/- 38170.43, N = 4; Min: 3032636.97 / Avg: 3127334.75 / Max: 3218563.92)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Cell Phone Drop Test (Seconds, Fewer Is Better)
  Clear Linux: 67.93 (SE +/- 0.51, N = 15; Min: 63.45 / Avg: 67.93 / Max: 70.59)
  Ubuntu 22.10: 68.10 (SE +/- 0.98, N = 3; Min: 66.15 / Avg: 68.1 / Max: 69.2)

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
  Clear Linux: 3811036.87 (SE +/- 30336.93, N = 15; Min: 3606443.66 / Avg: 3811036.87 / Max: 4016242.4)
  Ubuntu 22.10: 3456218.18 (SE +/- 41614.46, N = 3; Min: 3374885.73 / Avg: 3456218.18 / Max: 3512183.6)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  Ubuntu 22.10: 187579 (SE +/- 169.22, N = 3; Min: 187374 / Avg: 187579.33 / Max: 187915)
  1. (CXX) g++ options: -O3 -lm -ldl

Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better)
  Clear Linux: 11.99 (SE +/- 0.09, N = 15; Min: 11.68 / Avg: 11.99 / Max: 12.94)
  Ubuntu 22.10: 11.80 (SE +/- 0.02, N = 3; Min: 11.78 / Avg: 11.8 / Max: 11.83)

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, Fewer Is Better)
  Clear Linux: 184.96 (SE +/- 1.10, N = 3; Min: 182.76 / Avg: 184.96 / Max: 186.26)
  Ubuntu 22.10: 183.75 (SE +/- 2.45, N = 3; Min: 178.93 / Avg: 183.75 / Max: 186.96)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
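
For reference, a minimal Python sketch of invoking that same built-in facility; it simply shells out to the openssl CLI, and spreading SHA256 across all logical CPUs with -multi is an assumption about how the run is parallelized here:

    # Run the built-in "openssl speed" benchmark for SHA256 across all logical
    # CPUs. Assumes the openssl command-line tool is installed.
    import os
    import subprocess

    subprocess.run(
        ["openssl", "speed", "-multi", str(os.cpu_count()), "sha256"],
        check=True,
    )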

OpenSSL 3.0 - Algorithm: SHA256 (byte/s, More Is Better)
  Clear Linux: 37622237897 (SE +/- 14916651.04, N = 3; Min: 37593759900 / Avg: 37622237896.67 / Max: 37644175740)
  Ubuntu 22.10: 35956999233 (SE +/- 90907848.90, N = 3; Min: 35775687680 / Avg: 35956999233.33 / Max: 36059372580)
  Compiler notes: -pipe -fexceptions -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Blender

Blender 3.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
  Clear Linux: 178.79 (SE +/- 0.16, N = 3; Min: 178.55 / Avg: 178.79 / Max: 179.08)
  Ubuntu 22.10: 178.95 (SE +/- 0.13, N = 3; Min: 178.69 / Avg: 178.95 / Max: 179.09)

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
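
For a rough sense of what a scenario run looks like, a minimal Python sketch timing an x264 software encode to the null muxer; the input file name and preset are placeholders rather than the vbench-derived settings this profile actually uses:

    # Time an H.264 software encode with ffmpeg/libx264, discarding the output.
    # Assumes ffmpeg built with libx264 is on the PATH and "input.mkv" exists.
    import subprocess
    import time

    cmd = ["ffmpeg", "-y", "-i", "input.mkv",
           "-c:v", "libx264", "-preset", "medium",
           "-an", "-f", "null", "-"]
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    print(f"Transcode took {time.perf_counter() - start:.2f} seconds")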

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Upload (FPS, More Is Better)
  Clear Linux: 19.84 (SE +/- 0.02, N = 3; Min: 19.81 / Avg: 19.84 / Max: 19.88)
  Ubuntu 22.10: 19.48 (SE +/- 0.02, N = 3; Min: 19.44 / Avg: 19.48 / Max: 19.51)
  Compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Upload (Seconds, Fewer Is Better)
  Clear Linux: 127.29 (SE +/- 0.14, N = 3; Min: 127 / Avg: 127.29 / Max: 127.44)
  Ubuntu 22.10: 129.62 (SE +/- 0.15, N = 3; Min: 129.39 / Avg: 129.62 / Max: 129.89)
  Compiler notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
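
As an illustration of the style of operation being timed, a minimal PySpark sketch of a Monte Carlo "Calculate Pi" run using the 1,000,000-row / 500-partition parameters from the charts below; the code itself is a stand-in, not the pyspark-benchmark implementation:

    # Monte Carlo Pi estimate on a local Spark session.
    # Assumes pyspark is installed (pip install pyspark).
    import random
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("calculate-pi-sketch").getOrCreate()
    sc = spark.sparkContext

    SAMPLES, PARTITIONS = 1_000_000, 500

    def inside(_):
        x, y = random.random(), random.random()
        return x * x + y * y <= 1.0

    count = sc.parallelize(range(SAMPLES), PARTITIONS).filter(inside).count()
    print(f"Pi is roughly {4.0 * count / SAMPLES}")
    spark.stop()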

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time (Seconds, Fewer Is Better)
  Ubuntu 22.10: 2.03 (SE +/- 0.02, N = 15; Min: 1.92 / Avg: 2.03 / Max: 2.16)

Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Group By Test Time (Seconds, Fewer Is Better)
  Ubuntu 22.10: 2.44 (SE +/- 0.01, N = 15; Min: 2.36 / Avg: 2.44 / Max: 2.55)

Row Count: 1000000 - Partitions: 500 - Group By Test Time

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better)
  Ubuntu 22.10: 0.79 (SE +/- 0.01, N = 15; Min: 0.71 / Avg: 0.79 / Max: 0.88)

Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better)
  Ubuntu 22.10: 3.26 (SE +/- 0.04, N = 15; Min: 3.06 / Avg: 3.26 / Max: 3.57)

Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark (Seconds, Fewer Is Better)
  Ubuntu 22.10: 52.24 (SE +/- 0.06, N = 15; Min: 51.91 / Avg: 52.24 / Max: 52.63)

Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Repartition Test Time (Seconds, Fewer Is Better)
  Ubuntu 22.10: 1.09 (SE +/- 0.01, N = 15; Min: 1.02 / Avg: 1.09 / Max: 1.15)

Row Count: 1000000 - Partitions: 500 - Repartition Test Time

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Inner Join Test Time (Seconds, Fewer Is Better)
  Ubuntu 22.10: 0.95 (SE +/- 0.02, N = 15; Min: 0.85 / Avg: 0.95 / Max: 1.19)

Row Count: 1000000 - Partitions: 500 - Inner Join Test Time

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.
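
For clarity on the reported unit, a minimal Python sketch of turning per-query processing times into the "Queries Per Minute, Geo Mean" figure used below; the query times listed are placeholders:

    # Geometric mean of per-query times (seconds), expressed as queries per minute.
    import math

    query_times_s = [0.12, 0.45, 0.08, 1.30, 0.27]   # placeholder per-query times
    geo_mean = math.exp(sum(math.log(t) for t in query_times_s) / len(query_times_s))
    print(f"Geo mean: {geo_mean:.3f} s/query -> {60.0 / geo_mean:.1f} queries per minute")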

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better)
  Clear Linux: 328.44 (SE +/- 0.38, N = 3; Min: 327.69 / Avg: 328.44 / Max: 328.9; MIN: 25.56 / MAX: 30000)
  Ubuntu 22.10: 308.97 (SE +/- 1.54, N = 15; Min: 300.2 / Avg: 308.97 / Max: 323.13; MIN: 24.68 / MAX: 30000)
  1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better)
  Clear Linux: 322.07 (SE +/- 1.18, N = 3; Min: 320.63 / Avg: 322.07 / Max: 324.41; MIN: 25.5 / MAX: 30000)
  Ubuntu 22.10: 307.65 (SE +/- 1.50, N = 15; Min: 293.18 / Avg: 307.65 / Max: 314.36; MIN: 24.43 / MAX: 30000)
  1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better)
  Clear Linux: 317.57 (SE +/- 4.54, N = 3; Min: 310.62 / Avg: 317.57 / Max: 326.12; MIN: 24.74 / MAX: 30000)
  Ubuntu 22.10: 301.66 (SE +/- 2.14, N = 15; Min: 275.67 / Avg: 301.66 / Max: 309.92; MIN: 24.67 / MAX: 30000)
  1. ClickHouse server version 22.5.4.19 (official build).

Renaissance

Renaissance is a suite of benchmarks designed to test the JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, Fewer Is Better)
  Clear Linux: 7288.7 (SE +/- 10.96, N = 3; Min: 7273.03 / Avg: 7288.66 / Max: 7309.77; MIN: 7273.03 / MAX: 8022.2)
  Ubuntu 22.10: 7650.3 (SE +/- 72.83, N = 3; Min: 7526.04 / Avg: 7650.27 / Max: 7778.26; MIN: 7526.04 / MAX: 8473.18)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  Ubuntu 22.10: 157185 (SE +/- 533.75, N = 3; Min: 156146 / Avg: 157185.33 / Max: 157916)
  1. (CXX) g++ options: -O3 -lm -ldl

Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

HammerDB - MariaDB

This is a MariaDB MySQL database server benchmark making use of the HammerDB benchmarking / load testing tool. Learn more via the OpenBenchmarking.org test page.

HammerDB - MariaDB 10.9.3 - Ubuntu 22.10 results (More Is Better)
  Virtual Users: 64 - Warehouses: 250 - 89332 Transactions Per Minute / 38440 New Orders Per Minute
  Virtual Users: 64 - Warehouses: 100 - 90768 Transactions Per Minute / 39063 New Orders Per Minute
  Virtual Users: 32 - Warehouses: 100 - 88315 Transactions Per Minute / 38008 New Orders Per Minute
  Virtual Users: 32 - Warehouses: 250 - 82861 Transactions Per Minute / 35682 New Orders Per Minute
  Virtual Users: 8 - Warehouses: 100 - 89121 Transactions Per Minute / 38236 New Orders Per Minute
  Virtual Users: 16 - Warehouses: 250 - 86541 Transactions Per Minute / 37140 New Orders Per Minute
  Virtual Users: 16 - Warehouses: 100 - 86277 Transactions Per Minute / 37182 New Orders Per Minute
  Virtual Users: 8 - Warehouses: 250 - 72163 Transactions Per Minute / 31038 New Orders Per Minute
  1. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures (ms; fewer is better):
Clear Linux: 969.4 (SE +/- 2.62, N = 3; min 965.17 / avg 969.35 / max 974.18; reported MIN 930.74 / MAX 1016.31)
Ubuntu 22.10: 1063.3 (SE +/- 8.08, N = 15; min 1007.22 / avg 1063.25 / max 1109.69; reported MIN 960.83 / MAX 1135.87)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms; fewer is better):
Ubuntu 22.10: 5758 (SE +/- 5.24, N = 3; min 5751 / avg 5757.67 / max 5768)
1. (CXX) g++ options: -O3 -lm -ldl

Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
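
For context, an individual stressor can also be run by hand with the stress-ng CLI; the stressor, instance count, and timeout below are illustrative:

  # Spawn one atomic stressor per CPU for 60 seconds and print a brief metrics summary
  stress-ng --atomic 0 --timeout 60s --metrics-brief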

Stress-NG 0.14.06 - Test: Atomic (Bogo Ops/s; more is better):
Clear Linux: 341748.62 (SE +/- 10614.09, N = 15; min 300534.66 / avg 341748.62 / max 426989.82)
Ubuntu 22.10: 344361.75 (SE +/- 7871.09, N = 15; min 300970.12 / avg 344361.75 / max 401718.54)

Stress-NG 0.14.06 - Test: CPU Cache (Bogo Ops/s; more is better):
Clear Linux: 92.05 (SE +/- 1.44, N = 15; min 83.59 / avg 92.05 / max 103.6)
Ubuntu 22.10: 98.77 (SE +/- 1.23, N = 15; min 89.65 / avg 98.77 / max 106.79)
Additional build flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -lapparmor -lsctp
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
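
The suite can also be fetched and run through the Phoronix Test Suite itself; the test profile identifier below is an assumption about the current Rodinia profile name on OpenBenchmarking.org:

  # Install and run the Rodinia OpenMP tests through the Phoronix Test Suite
  phoronix-test-suite benchmark pts/rodinia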

Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds; fewer is better):
Clear Linux: 48.43 (SE +/- 0.17, N = 3; min 48.1 / avg 48.43 / max 48.69)
Ubuntu 22.10: 49.66 (SE +/- 0.32, N = 15; min 48.54 / avg 49.66 / max 52.85)
1. (CXX) g++ options: -O2 -lOpenCL

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms; fewer is better):
Ubuntu 22.10: 46495 (SE +/- 33.28, N = 3; min 46451 / avg 46494.67 / max 46560)
1. (CXX) g++ options: -O3 -lm -ldl

Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

Blender

Blender 3.3 - Blend File: Classroom - Compute: CPU-Only (Seconds; fewer is better):
Clear Linux: 146.41 (SE +/- 0.21, N = 3; min 146.19 / avg 146.41 / max 146.82)
Ubuntu 22.10: 147.64 (SE +/- 0.34, N = 3; min 147.17 / avg 147.64 / max 148.3)

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.
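
The timed operation is an ordinary Gradle build of the project shipped with the test profile; run by hand against any checkout it reduces to something like the following (wrapper and task names illustrative):

  # Clean and assemble the project with the Gradle wrapper, without a persistent daemon
  ./gradlew --no-daemon clean assemble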

Java Gradle Build - Gradle Build: Reactor (Seconds; fewer is better):
Ubuntu 22.10: 142.80 (SE +/- 1.44, N = 6; min 138.74 / avg 142.8 / max 149.19)

Gradle Build: Reactor

Clear Linux: The test quit with a non-zero exit status.

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Akka Unbalanced Cobwebbed Tree (ms; fewer is better):
Clear Linux: 6805.9 (SE +/- 40.50, N = 3; min 6745.35 / avg 6805.86 / max 6882.75; reported MIN 5063.5 / MAX 6882.75)
Ubuntu 22.10: 7183.4 (SE +/- 20.79, N = 3; min 7142.41 / avg 7183.43 / max 7209.81; reported MIN 5471.82 / MAX 7209.81)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Futex (Bogo Ops/s; more is better):
Clear Linux: 3363240.61 (SE +/- 99609.06, N = 12; min 2615644.77 / avg 3363240.61 / max 3933344)
Ubuntu 22.10: 3538590.31 (SE +/- 33712.97, N = 15; min 3318227.7 / avg 3538590.31 / max 3760667.38)

Stress-NG 0.14.06 - Test: Socket Activity (Bogo Ops/s; more is better):
Clear Linux: 31581.60 (SE +/- 483.37, N = 15; min 26599.36 / avg 31581.6 / max 33343.37)
Ubuntu 22.10: 24287.07 (SE +/- 568.16, N = 12; min 21269.06 / avg 24287.07 / max 29385.41)
Additional build flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -lapparmor -lsctp
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
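
A comparable standalone CPU run of the water_GMX50 case uses gmx mdrun directly; the .tpr filename, thread count, and step count below are illustrative assumptions:

  # CPU-only run of the prepared water system for a fixed number of steps
  gmx mdrun -s water_GMX50_bare.tpr -nb cpu -ntomp 32 -nsteps 5000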

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day; more is better):
Clear Linux: 1.401 (SE +/- 0.003, N = 3; min 1.4 / avg 1.4 / max 1.41)
Ubuntu 22.10: 1.414 (SE +/- 0.002, N = 3; min 1.41 / avg 1.41 / max 1.42)
Additional build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.
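
The reference script mentioned above can be driven directly; this sketch mirrors the GoogLeNet CPU case benchmarked here, with the script path assumed to be a tensorflow/benchmarks checkout:

  # CPU-only GoogLeNet throughput measurement from tf_cnn_benchmarks
  python tf_cnn_benchmarks.py --device=cpu --data_format=NHWC \
    --model=googlenet --batch_size=256 --num_batches=100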

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec; more is better):
Ubuntu 22.10: 109.10 (SE +/- 0.20, N = 3; min 108.73 / avg 109.1 / max 109.41)

Device: CPU - Batch Size: 256 - Model: GoogLeNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.
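
Each Polyhedron case is a single Fortran program that is compiled and then timed; a minimal gfortran sketch (the source filename is illustrative) looks like:

  # Build one benchmark at high optimization and time the resulting binary
  gfortran -O3 -march=native fatigue2.f90 -o fatigue2
  time ./fatigue2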

Polyhedron Fortran Benchmarks - Benchmark: fatigue2 (Seconds; fewer is better):
Ubuntu 22.10: 21.88

Benchmark: fatigue2

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.
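
The timed operation is the stock Node.js source build, roughly:

  # Configure and compile Node.js from a source checkout using all CPU threads
  ./configure
  time make -j"$(nproc)"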

Timed Node.js Compilation 18.8 - Time To Compile (Seconds; fewer is better):
Ubuntu 22.10: 256.70 (SE +/- 0.04, N = 3; min 256.63 / avg 256.7 / max 256.76)

Time To Compile

Clear Linux: The test quit with a non-zero exit status.

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
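
Outside the vbench harness, a single equivalent transcode can be reproduced with a plain ffmpeg invocation; the input clip and rate-control settings below are illustrative, not the vbench presets:

  # Transcode a source clip to HEVC with libx265 and report the encoding FPS
  ffmpeg -i input.y4m -c:v libx265 -preset medium -crf 28 -an output.mkv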

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Upload (FPS; more is better):
Clear Linux: 33.47 (SE +/- 0.05, N = 3; min 33.39 / avg 33.47 / max 33.57)
Ubuntu 22.10: 31.96 (SE +/- 0.04, N = 3; min 31.89 / avg 31.96 / max 32.02)

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Upload (Seconds; fewer is better):
Clear Linux: 75.44 (SE +/- 0.12, N = 3; min 75.2 / avg 75.44 / max 75.61)
Ubuntu 22.10: 79.00 (SE +/- 0.09, N = 3; min 78.85 / avg 79 / max 79.17)
Additional build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms; fewer is better):
Ubuntu 22.10: 39037 (SE +/- 48.68, N = 3; min 38940 / avg 39037.33 / max 39088)
1. (CXX) g++ options: -O3 -lm -ldl

Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Video On Demand (FPS; more is better):
Clear Linux: 68.89 (SE +/- 0.12, N = 3; min 68.65 / avg 68.89 / max 69.02)
Ubuntu 22.10: 65.12 (SE +/- 0.08, N = 3; min 65 / avg 65.12 / max 65.27)

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Video On Demand (Seconds; fewer is better):
Clear Linux: 109.96 (SE +/- 0.19, N = 3; min 109.74 / avg 109.96 / max 110.34)
Ubuntu 22.10: 116.32 (SE +/- 0.14, N = 3; min 116.05 / avg 116.32 / max 116.53)

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Platform (FPS; more is better):
Clear Linux: 69.01 (SE +/- 0.05, N = 3; min 68.93 / avg 69.01 / max 69.1)
Ubuntu 22.10: 65.14 (SE +/- 0.06, N = 3; min 65.02 / avg 65.14 / max 65.2)

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Platform (Seconds; fewer is better):
Clear Linux: 109.77 (SE +/- 0.08, N = 3; min 109.62 / avg 109.77 / max 109.89)
Ubuntu 22.10: 116.30 (SE +/- 0.10, N = 3; min 116.19 / avg 116.3 / max 116.5)
Additional build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
Clear Linux: 133 (SE +/- 0.33, N = 3; min 132 / avg 132.67 / max 133)
Ubuntu 22.10: 133 (SE +/- 0.17, N = 3; min 133 / avg 133.17 / max 133.5)
Additional build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -ffunction-sections -fdata-sections -march=native -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: tfft2 (Seconds; fewer is better):
Ubuntu 22.10: 12.1

Benchmark: tfft2

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
Clear Linux: 1254 (SE +/- 9.64, N = 3; min 1235 / avg 1254.17 / max 1265.5)
Ubuntu 22.10: 1210 (SE +/- 0.60, N = 3; min 1209 / avg 1210.17 / max 1211)

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
Clear Linux: 818 (SE +/- 0.44, N = 3; min 817 / avg 817.83 / max 818.5)
Ubuntu 22.10: 687 (SE +/- 0.29, N = 3; min 686 / avg 686.5 / max 687)

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
Clear Linux: 9079 (SE +/- 21.86, N = 3; min 9053.5 / avg 9079 / max 9122.5)
Ubuntu 22.10: 6842 (SE +/- 2.84, N = 3; min 6838 / avg 6842 / max 6847.5)
Additional build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -ffunction-sections -fdata-sections -march=native -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), Bonds (fixed-rate bond with flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-07-25 - Benchmark: Bonds OpenMP (ms; fewer is better):
Clear Linux: 33544.89 (SE +/- 767.97, N = 15; min 29802.43 / avg 33544.89 / max 36883.43)
Ubuntu 22.10: 30942.01 (SE +/- 28.94, N = 3; min 30892.58 / avg 30942.01 / max 30992.79)
1. (CXX) g++ options: -O3 -march=native -fopenmp

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark PageRank (ms; fewer is better):
Clear Linux: 1597.8 (SE +/- 19.33, N = 3; min 1577.43 / avg 1597.83 / max 1636.48; reported MIN 1448.14 / MAX 1649.56)
Ubuntu 22.10: 1902.7 (SE +/- 15.45, N = 9; min 1857.25 / avg 1902.66 / max 1965.66; reported MIN 1741.39 / MAX 1997.09)

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss and was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam (Seconds; fewer is better):
Clear Linux: 113.52 (SE +/- 1.03, N = 3; min 111.68 / avg 113.52 / max 115.23)
Ubuntu 22.10: 114.66 (SE +/- 0.19, N = 3; min 114.33 / avg 114.66 / max 115)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms; fewer is better):
Ubuntu 22.10: 1473 (SE +/- 4.33, N = 3; min 1466 / avg 1473.33 / max 1481)
1. (CXX) g++ options: -O3 -lm -ldl

Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Video On Demand (FPS; more is better):
Clear Linux: 77.85 (SE +/- 0.01, N = 3; min 77.83 / avg 77.85 / max 77.86)
Ubuntu 22.10: 76.32 (SE +/- 0.07, N = 3; min 76.21 / avg 76.32 / max 76.44)

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Video On Demand (Seconds; fewer is better):
Clear Linux: 97.31 (SE +/- 0.01, N = 3; min 97.29 / avg 97.31 / max 97.32)
Ubuntu 22.10: 99.25 (SE +/- 0.09, N = 3; min 99.1 / avg 99.25 / max 99.39)
Additional build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss and was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds; fewer is better):
Clear Linux: 108.00 (SE +/- 0.58, N = 3; min 106.84 / avg 108 / max 108.6)
Ubuntu 22.10: 108.62 (SE +/- 0.31, N = 3; min 108.03 / avg 108.62 / max 109.09)

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Platform (FPS; more is better):
Clear Linux: 78.12 (SE +/- 0.10, N = 3; min 77.96 / avg 78.12 / max 78.3)
Ubuntu 22.10: 76.29 (SE +/- 0.03, N = 3; min 76.23 / avg 76.29 / max 76.34)

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Platform (Seconds; fewer is better):
Clear Linux: 96.96 (SE +/- 0.12, N = 3; min 96.74 / avg 96.96 / max 97.16)
Ubuntu 22.10: 99.29 (SE +/- 0.04, N = 3; min 99.23 / avg 99.29 / max 99.37)
Additional build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (Seconds; fewer is better):
Clear Linux: 162.83
Ubuntu 22.10: 164.94

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms; fewer is better):
Ubuntu 22.10: 1242 (SE +/- 2.73, N = 3; min 1238 / avg 1241.67 / max 1247)
1. (CXX) g++ options: -O3 -lm -ldl

Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec; more is better):
Ubuntu 22.10: 264.59 (SE +/- 0.08, N = 3; min 264.51 / avg 264.59 / max 264.75)

Device: CPU - Batch Size: 512 - Model: AlexNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure Xmrig's CPU mining performance. Learn more via the OpenBenchmarking.org test page.
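
Xmrig ships an offline benchmark mode, which is essentially what this profile exercises; a direct invocation would look like the following, with the thread count being an illustrative choice:

  # Built-in RandomX benchmark over 1M hashes using 32 CPU threads
  xmrig --bench=1M --threads=32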

Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M (H/s; more is better):
Ubuntu 22.10: 9652.5 (SE +/- 65.64, N = 3; min 9521.3 / avg 9652.53 / max 9721.4)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Variant: Monero - Hash Count: 1M

Clear Linux: The test quit with a non-zero exit status. E: xmrig: line 3: ./xmrig: No such file or directory

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: gas_dyn2 (Seconds; fewer is better):
Ubuntu 22.10: 25.83

Benchmark: gas_dyn2

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 4K (Frames Per Second; more is better):
Clear Linux: 6.10 (SE +/- 0.03, N = 3; min 6.06 / avg 6.1 / max 6.15)
Ubuntu 22.10: 5.79 (SE +/- 0.02, N = 3; min 5.77 / avg 5.79 / max 5.82)
Additional build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CC) gcc options: -O3 -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Scala Dotty (ms; fewer is better):
Clear Linux: 379.6 (SE +/- 1.90, N = 3; min 377.67 / avg 379.57 / max 383.37; reported MIN 316.44 / MAX 567.07)
Ubuntu 22.10: 454.3 (SE +/- 6.59, N = 15; min 411.11 / avg 454.29 / max 474.75; reported MIN 344.62 / MAX 815.12)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
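
A single benchmark such as python_startup can be run on its own through the pyperformance CLI:

  # Run only the python_startup micro-benchmark against the default interpreter
  pyperformance run -b python_startup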

PyPerformance 1.0.0 - Benchmark: python_startup (Milliseconds; fewer is better):
Clear Linux: 5.04 (SE +/- 0.00, N = 3; min 5.03 / avg 5.04 / max 5.04)
Ubuntu 22.10: 7.39 (SE +/- 0.04, N = 3; min 7.35 / avg 7.39 / max 7.47)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Context Switching (Bogo Ops/s; more is better):
Clear Linux: 17592139.66 (SE +/- 234990.96, N = 15; min 17013224.04 / avg 17592139.66 / max 20690520.64)
Ubuntu 22.10: 14703175.32 (SE +/- 181309.12, N = 4; min 14270832.24 / avg 14703175.32 / max 15008358.62)

Stress-NG 0.14.06 - Test: Glibc C String Functions (Bogo Ops/s; more is better):
Clear Linux: 4366568.31 (SE +/- 51987.30, N = 4; min 4312187.45 / avg 4366568.31 / max 4522503.92)
Ubuntu 22.10: 4307014.94 (SE +/- 40661.15, N = 15; min 4054364.88 / avg 4307014.94 / max 4459899.4)
Additional build flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -lapparmor -lsctp
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
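
The MPI binaries are built per test and problem class and then launched with mpirun; the sketch below follows the usual NPB 3.4 naming (SP pseudo-application, class C) with an illustrative rank count, and assumes the suite's make configuration is already in place:

  # Build the SP kernel for class C, then run it across 32 MPI ranks
  make sp CLASS=C
  mpirun -np 32 ./bin/sp.C.x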

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C (Total Mop/s; more is better):
Clear Linux: 15318.83 (SE +/- 41.18, N = 3; min 15262.23 / avg 15318.83 / max 15398.95)
Ubuntu 22.10: 15473.90 (SE +/- 32.81, N = 3; min 15410.87 / avg 15473.9 / max 15521.22)
Additional build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi; 2. Clear Linux: 3.2; 3. Ubuntu 22.10: Open MPI 4.1.4

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec; more is better):
Ubuntu 22.10: 37.68 (SE +/- 0.05, N = 3; min 37.58 / avg 37.68 / max 37.77)

Device: CPU - Batch Size: 64 - Model: ResNet-50

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for issuing HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
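
The load-generation side is plain wrk pointed at the local TLS server; a hand-run equivalent of the 1000-connection case would look roughly like this, with the port and document path being assumptions:

  # Drive 1000 concurrent connections over 16 threads for 60 seconds
  wrk -t 16 -c 1000 -d 60s https://localhost:8089/index.html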

nginx 1.23.2 - Connections: 1000 (Requests Per Second; more is better):
Ubuntu 22.10: 192021.95 (SE +/- 624.40, N = 3; min 191184.41 / avg 192021.95 / max 193242.92)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Connections: 1000

Clear Linux: The test quit with a non-zero exit status. E: nginx: line 2: ./wrk-4.2.0/wrk: No such file or directory

nginx 1.23.2 - Connections: 500 (Requests Per Second; more is better):
Ubuntu 22.10: 203069.78 (SE +/- 492.12, N = 3; min 202459.96 / avg 203069.78 / max 204043.75)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Connections: 500

Clear Linux: The test quit with a non-zero exit status. E: nginx: line 2: ./wrk-4.2.0/wrk: No such file or directory

nginx 1.23.2 - Connections: 200 (Requests Per Second; more is better):
Ubuntu 22.10: 205841.24 (SE +/- 636.13, N = 3; min 205200.2 / avg 205841.24 / max 207113.5)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Connections: 200

Clear Linux: The test quit with a non-zero exit status. E: nginx: line 2: ./wrk-4.2.0/wrk: No such file or directory

nginx 1.23.2 - Connections: 100 (Requests Per Second; more is better):
Ubuntu 22.10: 204910.46 (SE +/- 1164.13, N = 3; min 203534.3 / avg 204910.46 / max 207224.96)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Connections: 100

Clear Linux: The test quit with a non-zero exit status. E: nginx: line 2: ./wrk-4.2.0/wrk: No such file or directory

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
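
cpuminer-opt includes a benchmark mode that needs no pool connection; the sketch below mirrors the Blake-2 S case, with the algorithm spelling and thread count treated as assumptions to verify against cpuminer --help:

  # Offline hash-rate benchmark of the blake2s algorithm on 32 threads
  cpuminer -a blake2s --benchmark -t 32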

Cpuminer-Opt 3.20.3 - Algorithm: Blake-2 S (kH/s; more is better):
Clear Linux: 706620 (SE +/- 13473.99, N = 15; min 663220 / avg 706620 / max 785290)
Ubuntu 22.10: 765587 (SE +/- 9180.18, N = 3; min 754990 / avg 765586.67 / max 783870)
Additional build flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -O2
1. (CXX) g++ options: -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Mutex (Bogo Ops/s; more is better):
Clear Linux: 17946052.04 (SE +/- 180807.15, N = 3; min 17594949.43 / avg 17946052.04 / max 18196562.61)
Ubuntu 22.10: 16748823.88 (SE +/- 110264.57, N = 15; min 16548809.98 / avg 16748823.88 / max 18111916.34)

Stress-NG 0.14.06 - Test: Crypto (Bogo Ops/s; more is better):
Clear Linux: 51626.59 (SE +/- 228.79, N = 3; min 51180.93 / avg 51626.59 / max 51939.32)
Ubuntu 22.10: 42378.79 (SE +/- 294.46, N = 15; min 41902.77 / avg 42378.79 / max 46409.36)
Additional build flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -lapparmor -lsctp
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), Bonds (fixed-rate bond with flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-07-25 - Benchmark: Repo OpenMP (ms; fewer is better):
Clear Linux: 22344.26 (SE +/- 825.46, N = 15; min 18546.49 / avg 22344.26 / max 25633.75)
Ubuntu 22.10: 19315.83 (SE +/- 17.57, N = 3; min 19280.92 / avg 19315.83 / max 19336.74)
1. (CXX) g++ options: -O3 -march=native -fopenmp

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds; fewer is better):
Clear Linux: 81.32 (SE +/- 0.22, N = 3; min 80.93 / avg 81.32 / max 81.68)
Ubuntu 22.10: 84.65 (SE +/- 0.14, N = 3; min 84.37 / avg 84.65 / max 84.83)
1. (CXX) g++ options: -O2 -lOpenCL

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
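
The encode path measured here corresponds to the cjxl reference encoder; a standalone run at the same quality setting would look like the following, with the input file and thread count being illustrative:

  # Encode a PNG to JPEG XL at quality 80 using 16 worker threads
  cjxl input.png output.jxl -q 80 --num_threads=16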

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s; more is better):
Clear Linux: 16.06 (SE +/- 0.05, N = 3; min 15.97 / avg 16.06 / max 16.11)
Ubuntu 22.10: 13.25 (SE +/- 0.02, N = 3; min 13.22 / avg 13.25 / max 13.29)

JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s; more is better):
Clear Linux: 16.31 (SE +/- 0.02, N = 3; min 16.28 / avg 16.31 / max 16.35)
Ubuntu 22.10: 13.58 (SE +/- 0.01, N = 3; min 13.56 / avg 13.58 / max 13.59)
Additional build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -fno-rtti -funwind-tables -O2 -fPIE -pie -lm -latomic

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta - Resolution: 1920 x 1080 (Frames Per Second; more is better):
Clear Linux: 953.3 (SE +/- 1.53, N = 3; min 950.8 / avg 953.33 / max 956.1)
Ubuntu 22.10: 965.8 (SE +/- 1.51, N = 3; min 964 / avg 965.8 / max 968.8)

oneDNN

oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms; fewer is better):
Clear Linux: 6.80836 (SE +/- 0.05951, N = 8; min 6.53 / avg 6.81 / max 6.98; reported MIN 2.47)
Ubuntu 22.10: 7.63132 (SE +/- 0.10591, N = 15; min 7.05 / avg 7.63 / max 8.32; reported MIN 2.84)
Additional build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lpthread
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta - Resolution: 3840 x 2160 (Frames Per Second; more is better):
Ubuntu 22.10: 951.3 (SE +/- 5.47, N = 3)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s; more is better):
Clear Linux: 160181 (SE +/- 1148.26, N = 12; min 150350 / avg 160180.83 / max 162500)
Ubuntu 22.10: 156343 (SE +/- 1874.06, N = 4; min 150850 / avg 156342.5 / max 159140)
Additional build flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -O2
1. (CXX) g++ options: -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better):
Ubuntu 22.10: 967.89 (SE +/- 3.09, N = 3; min 962.31 / avg 967.89 / max 972.97)

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec; more is better):
Ubuntu 22.10: 12.05 (SE +/- 0.04, N = 3; min 11.99 / avg 12.05 / max 12.12)

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better):
Ubuntu 22.10: 964.44 (SE +/- 6.80, N = 3; min 951.99 / avg 964.44 / max 975.4)

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec; more is better):
Ubuntu 22.10: 12.19 (SE +/- 0.14, N = 3; min 11.99 / avg 12.19 / max 12.47)

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms; fewer is better):
Clear Linux: 2065.81 (SE +/- 13.49, N = 3; min 2047.13 / avg 2065.81 / max 2092.01; reported MIN 1923.57)
Ubuntu 22.10: 2150.83 (SE +/- 21.59, N = 3; min 2111.44 / avg 2150.83 / max 2185.86; reported MIN 1989.04)
Additional build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lpthread
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Blender

Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds; fewer is better):
Clear Linux: 75.33 (SE +/- 0.06, N = 3; min 75.25 / avg 75.33 / max 75.44)
Ubuntu 22.10: 75.32 (SE +/- 0.11, N = 3; min 75.14 / avg 75.32 / max 75.51)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better):
Ubuntu 22.10: 227.14 (SE +/- 0.46, N = 3; min 226.41 / avg 227.14 / max 228)

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamUbuntu 22.101224364860SE +/- 0.07, N = 352.54
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamUbuntu 22.101122334455Min: 52.42 / Avg: 52.54 / Max: 52.68

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Clear Linux: 1132.54 (SE +/- 15.77, N = 3; Min: 1101.6 / Avg: 1132.54 / Max: 1153.26; MIN: 1006.43)
  Ubuntu 22.10: 1112.82 (SE +/- 1.98, N = 3; Min: 1110.74 / Avg: 1112.82 / Max: 1116.78; MIN: 1021.3)
  Clear Linux flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lpthread
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better):
  Ubuntu 22.10: 4831 (SE +/- 6.89, N = 3; Min: 4821 / Avg: 4830.67 / Max: 4844)
  Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found
  1. (CXX) g++ options: -O3 -lm -ldl

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5.02 - Mode: CPU (vsamples, more is better):
  Clear Linux: 28587 (SE +/- 103.05, N = 3; Min: 28401 / Avg: 28586.67 / Max: 28757)
  Ubuntu 22.10: 28734 (SE +/- 190.70, N = 3; Min: 28501 / Avg: 28734 / Max: 29112)

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
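
As a rough standalone analogue of this encode measurement, the sketch below times a quality-90 encode with the reference cjxl tool; the input/output filenames are placeholders and the flag spelling is an assumption about the cjxl 0.7 CLI, not taken from this result file.

    # Hedged sketch: time a quality-90 JPEG XL encode with the reference cjxl encoder.
    import subprocess
    import time

    cmd = ["cjxl", "input.png", "output.jxl", "-q", "90"]  # assumed flags; placeholder files
    start = time.perf_counter()
    subprocess.run(cmd, check=False)
    print(f"encode time: {time.perf_counter() - start:.2f} seconds")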

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 90 (MP/s, more is better):
  Clear Linux: 15.92 (SE +/- 0.05, N = 3; Min: 15.82 / Avg: 15.92 / Max: 15.98)
  Ubuntu 22.10: 13.09 (SE +/- 0.01, N = 3; Min: 13.06 / Avg: 13.09 / Max: 13.1)
  Clear Linux flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -fno-rtti -funwind-tables -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl 0.7 - Input: PNG - Quality: 90 (MP/s, more is better):
  Clear Linux: 16.18 (SE +/- 0.01, N = 3; Min: 16.16 / Avg: 16.18 / Max: 16.2)
  Ubuntu 22.10: 13.43 (SE +/- 0.01, N = 3; Min: 13.41 / Avg: 13.43 / Max: 13.45)
  Clear Linux flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -fno-rtti -funwind-tables -O2 -fPIE -pie -lm -latomic

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
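
For orientation, a minimal latency probe with the OpenVINO Python runtime looks roughly like the sketch below (the actual test profile drives OpenVINO's built-in benchmarking tool). The model path and input shape are placeholders, not values from this result file.

    # Minimal sketch, assuming the openvino and numpy packages are installed and an
    # IR model exists at the placeholder path; assumes a static input shape.
    import time
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")                     # placeholder IR model
    compiled = core.compile_model(model, device_name="CPU")  # CPU device, as in this test
    request = compiled.create_infer_request()

    shape = (1, 3, 224, 224)                                 # placeholder input shape
    data = np.random.rand(*shape).astype(np.float32)

    latencies = []
    for _ in range(100):
        start = time.perf_counter()
        request.infer([data])
        latencies.append((time.perf_counter() - start) * 1000.0)

    avg_ms = sum(latencies) / len(latencies)
    print(f"avg latency: {avg_ms:.2f} ms, throughput: {1000.0 / avg_ms:.2f} FPS")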

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, fewer is better):
  Ubuntu 22.10: 2222.92 (SE +/- 6.08, N = 3; MIN: 1682.82 / MAX: 2975.44)

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, more is better):
  Ubuntu 22.10: 3.58 (SE +/- 0.01, N = 3)

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, fewer is better):
  Ubuntu 22.10: 2238.02 (SE +/- 5.80, N = 3; MIN: 1692.38 / MAX: 2991.49)

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, more is better):
  Ubuntu 22.10: 3.55 (SE +/- 0.02, N = 3)

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, fewer is better):
  Ubuntu 22.10: 1570.46 (SE +/- 3.93, N = 3; MIN: 1396.06 / MAX: 1856.59)

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, more is better):
  Ubuntu 22.10: 5.08 (SE +/- 0.01, N = 3)

1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl (applies to the OpenVINO results above)

Xonotic

This is a benchmark of Xonotic, a fork of the DarkPlaces-based Nexuiz game. Development on Xonotic began in March 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.5 - Resolution: 3840 x 2160 - Effects Quality: Ultimate (Frames Per Second, more is better):
  Ubuntu 22.10: 527.86 (SE +/- 1.29, N = 3; MIN: 98 / MAX: 1077)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
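
Outside the test profile, a single Renaissance workload is launched from its release jar roughly as in the sketch below; the jar filename, the "als" benchmark identifier (the Apache Spark ALS workload), and the -r repetitions flag are assumptions about the Renaissance distribution, not taken from this result file.

    # Hedged sketch: run one Renaissance benchmark under the system JVM and print its output.
    import subprocess

    cmd = [
        "java", "-jar", "renaissance-gpl-0.14.2.jar",  # assumed jar name for the 0.14 series
        "als",                                         # assumed identifier of the Apache Spark ALS workload
        "-r", "10",                                    # assumed flag: number of repetitions
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=False)
    print(result.stdout)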

Renaissance 0.14 - Test: Apache Spark ALS (ms, fewer is better):
  Clear Linux: 1885.4 (SE +/- 15.22, N = 3; Min: 1855.69 / Avg: 1885.37 / Max: 1906.04; MIN: 1818.73 / MAX: 2024.91)
  Ubuntu 22.10: 2026.4 (SE +/- 6.70, N = 3; Min: 2014.4 / Avg: 2026.4 / Max: 2037.58; MIN: 1949.41 / MAX: 2109.84)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, more is better):
  Clear Linux: 5.328 (SE +/- 0.057, N = 3; Min: 5.26 / Avg: 5.33 / Max: 5.44)
  Ubuntu 22.10: 4.051 (SE +/- 0.040, N = 3; Min: 3.99 / Avg: 4.05 / Max: 4.12)

Xonotic

This is a benchmark of Xonotic, a fork of the DarkPlaces-based Nexuiz game. Development on Xonotic began in March 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.5 - Resolution: 1920 x 1080 - Effects Quality: Ultimate (Frames Per Second, more is better):
  Ubuntu 22.10: 540.60 (SE +/- 0.59, N = 3; MIN: 101 / MAX: 1094)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, fewer is better):
  Clear Linux: 1777.5 (SE +/- 14.31, N = 13; Min: 1610.23 / Avg: 1777.51 / Max: 1807.2; MIN: 1478.87 / MAX: 2208.28)
  Ubuntu 22.10: 1992.6 (SE +/- 22.11, N = 3; Min: 1950.53 / Avg: 1992.64 / Max: 2025.42; MIN: 1796.31 / MAX: 2233.14)

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, designed primarily for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, fewer is better):
  Clear Linux: 95.66
  Ubuntu 22.10: 90.78

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
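
A standalone run outside the Phoronix wrapper would look something like the sketch below; both the --bench flag (xmrig 6.x built-in benchmark mode) and the rx/wow algorithm name (the Wownero RandomX variant) are assumptions, not taken from this result file.

    # Hedged sketch: invoke a locally built xmrig binary's built-in benchmark (assumed CLI flags).
    import subprocess

    cmd = [
        "./xmrig",        # path to the xmrig binary built for this test
        "--bench=1M",     # assumed: built-in 1M-hash benchmark mode
        "--algo=rx/wow",  # assumed: RandomWOW, the Wownero variant of RandomX
    ]
    subprocess.run(cmd, check=False)  # prints the hashrate summary to stdout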

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, more is better):
  Ubuntu 22.10: 16463.2 (SE +/- 35.72, N = 3; Min: 16394 / Avg: 16463.23 / Max: 16513.1)
  Clear Linux: The test quit with a non-zero exit status. E: xmrig: line 3: ./xmrig: No such file or directory
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better):
  Ubuntu 22.10: 438.23 (SE +/- 0.21, N = 3; MIN: 270.29 / MAX: 1085.79)

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better):
  Ubuntu 22.10: 18.23 (SE +/- 0.01, N = 3)

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better):
  Ubuntu 22.10: 125.97 (SE +/- 0.17, N = 3; MIN: 91.09 / MAX: 325.99)

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better):
  Ubuntu 22.10: 63.48 (SE +/- 0.09, N = 3)

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better):
  Ubuntu 22.10: 10.96 (SE +/- 0.03, N = 3; MIN: 7.62 / MAX: 52.6)

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better):
  Ubuntu 22.10: 728.91 (SE +/- 2.34, N = 3)

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better):
  Ubuntu 22.10: 51.14 (SE +/- 0.05, N = 3; MIN: 22.47 / MAX: 182.99)

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better):
  Ubuntu 22.10: 468.50 (SE +/- 0.50, N = 3)

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better):
  Ubuntu 22.10: 8.72 (SE +/- 0.01, N = 3; MIN: 5.93 / MAX: 54.15)

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better):
  Ubuntu 22.10: 916.05 (SE +/- 0.85, N = 3)

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better):
  Ubuntu 22.10: 21.43 (SE +/- 0.11, N = 3; MIN: 12.24 / MAX: 94.19)

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better):
  Ubuntu 22.10: 372.85 (SE +/- 1.93, N = 3)

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better):
  Ubuntu 22.10: 14.63 (SE +/- 0.01, N = 3; MIN: 6.68 / MAX: 120.59)

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better):
  Ubuntu 22.10: 1638.99 (SE +/- 1.15, N = 3)

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better):
  Ubuntu 22.10: 0.72 (SE +/- 0.00, N = 3; MIN: 0.42 / MAX: 4.5)

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better):
  Ubuntu 22.10: 33018.17 (SE +/- 40.02, N = 3)

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better):
  Ubuntu 22.10: 1.64 (SE +/- 0.00, N = 3; MIN: 0.87 / MAX: 9.06)

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better):
  Ubuntu 22.10: 14593.71 (SE +/- 10.91, N = 3)

1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl (applies to the OpenVINO results above)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
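
The individual stressors reported below can also be run directly; a minimal sketch is shown here, with stressor names and flags as commonly found in stress-ng 0.14 (treat them as assumptions rather than the exact arguments used by the test profile).

    # Hedged sketch: run the CPU and vector-math stressors for 30 seconds each
    # and print stress-ng's bogo-ops summary.
    import subprocess

    for stressor in ("--cpu", "--vecmath"):
        subprocess.run(
            ["stress-ng", stressor, "0",   # 0 = one worker per online CPU
             "--metrics-brief",            # print bogo ops/s at the end
             "--timeout", "30s"],
            check=False,
        )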

Stress-NG 0.14.06 - Test: Vector Math (Bogo Ops/s, more is better):
  Clear Linux: 118329.56 (SE +/- 111.06, N = 3; Min: 118115.14 / Avg: 118329.56 / Max: 118487)
  Ubuntu 22.10: 119832.03 (SE +/- 966.54, N = 9; Min: 117846.53 / Avg: 119832.03 / Max: 127260.1)
  Clear Linux flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio
  Ubuntu 22.10 flags: -lapparmor -lsctp
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
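
The sign/s and verify/s figures below correspond to what "openssl speed" reports for RSA; a minimal reproduction of that invocation is sketched here (the thread count and duration are assumptions, not the profile's exact arguments).

    # Hedged sketch: run OpenSSL's built-in RSA-4096 speed test across all CPU threads.
    import os
    import subprocess

    threads = os.cpu_count() or 1
    subprocess.run(
        ["openssl", "speed", "-seconds", "10", "-multi", str(threads), "rsa4096"],
        check=False,
    )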

OpenSSL 3.0 - Algorithm: RSA4096 (verify/s, more is better):
  Clear Linux: 358535.2 (SE +/- 239.50, N = 3; Min: 358267.9 / Avg: 358535.23 / Max: 359013.1)
  Ubuntu 22.10: 358806.7 (SE +/- 142.92, N = 3; Min: 358540.4 / Avg: 358806.7 / Max: 359029.8)
  Clear Linux flags: -pipe -fexceptions -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL 3.0 - Algorithm: RSA4096 (sign/s, more is better):
  Clear Linux: 5428.9 (SE +/- 9.58, N = 3; Min: 5417.4 / Avg: 5428.87 / Max: 5447.9)
  Ubuntu 22.10: 5496.9 (SE +/- 5.03, N = 3; Min: 5489.8 / Avg: 5496.87 / Max: 5506.6)
  Clear Linux flags: -pipe -fexceptions -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
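
When built from the NPB-MPI sources, each problem is a separate binary launched under mpirun; the sketch below shows the general shape. The binary name follows the usual "make bt CLASS=C" naming and the rank count is illustrative; both are assumptions rather than the profile's exact command line.

    # Hedged sketch: run the NPB-MPI BT class C problem.
    import subprocess

    subprocess.run(
        # BT requires a square number of MPI ranks, hence 16 rather than 32 here.
        ["mpirun", "-np", "16", "./bin/bt.C.x"],  # assumed binary path from a standard NPB build
        check=False,
    )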

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, more is better):
  Clear Linux: 48313.69 (SE +/- 575.82, N = 3; Min: 47174.14 / Avg: 48313.69 / Max: 49027.64)
  Ubuntu 22.10: 49771.29 (SE +/- 33.54, N = 3; Min: 49707.27 / Avg: 49771.29 / Max: 49820.63)
  Clear Linux flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  Ubuntu 22.10 flags: -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Clear Linux: 3.2
  3. Ubuntu 22.10: Open MPI 4.1.4

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.
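
The build configuration measured here corresponds roughly to CPython's standard optimized release build; a hedged sketch of timing such a build from Python (the configure flags are the usual upstream ones, not necessarily the profile's exact set):

    # Hedged sketch: configure and build CPython with PGO + LTO, timing the build.
    # Assumes the CPython 3.10.6 source tree is the current working directory.
    import os
    import subprocess
    import time

    subprocess.run(["./configure", "--enable-optimizations", "--with-lto"], check=True)

    start = time.perf_counter()
    subprocess.run(["make", f"-j{os.cpu_count()}"], check=True)
    print(f"build time: {time.perf_counter() - start:.1f} seconds")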

Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, fewer is better):
  Clear Linux: 177.80
  Ubuntu 22.10: 171.50

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, more is better):
  Clear Linux: 21228.75 (SE +/- 264.38, N = 3; Min: 20809.7 / Avg: 21228.75 / Max: 21717.55)
  Ubuntu 22.10: 22382.81 (SE +/- 373.18, N = 15; Min: 21381.59 / Avg: 22382.81 / Max: 26281.46)
  Clear Linux flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  Ubuntu 22.10 flags: -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Clear Linux: 3.2
  3. Ubuntu 22.10: Open MPI 4.1.4

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, designed primarily for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, fewer is better):
  Clear Linux: 83.82
  Ubuntu 22.10: 84.86

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
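
The integrated benchmark referred to here is 7-Zip's "b" command; a minimal sketch of invoking it and capturing the report, whose final lines carry the compression and decompression MIPS ratings (assumes a 7z binary is on PATH):

    # Hedged sketch: run 7-Zip's built-in benchmark and print its output.
    import subprocess

    result = subprocess.run(["7z", "b"], capture_output=True, text=True, check=False)
    print(result.stdout)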

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better):
  Clear Linux: 131994 (SE +/- 1174.47, N = 11; Min: 128472 / Avg: 131993.73 / Max: 139952)
  Ubuntu 22.10: 139981 (SE +/- 1890.40, N = 3; Min: 137335 / Avg: 139981.33 / Max: 143643)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, more is better):
  Clear Linux: 181805 (SE +/- 1314.02, N = 11; Min: 173393 / Avg: 181804.64 / Max: 185691)
  Ubuntu 22.10: 182153 (SE +/- 1518.09, N = 3; Min: 179141 / Avg: 182152.67 / Max: 183992)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
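
The images/sec figures come from tf_cnn_benchmarks' reporting; as a much smaller, hedged analogue of that measurement (not the actual benchmark script), a Keras model can be timed on random data like this:

    # Hedged sketch: measure CPU inference throughput (images/sec) of a small
    # convolutional model on random data -- an analogue of the reported metric,
    # not the tf_cnn_benchmarks workload itself.
    import time
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 3, activation="relu", input_shape=(224, 224, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1000),
    ])

    batch = np.random.rand(256, 224, 224, 3).astype(np.float32)  # batch size 256, as in this test
    model.predict(batch, verbose=0)                              # warm-up run

    runs = 5
    start = time.perf_counter()
    for _ in range(runs):
        model.predict(batch, verbose=0)
    elapsed = time.perf_counter() - start
    print(f"{runs * batch.shape[0] / elapsed:.1f} images/sec")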

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, more is better):
  Ubuntu 22.10: 256.55 (SE +/- 0.32, N = 3; Min: 256.1 / Avg: 256.55 / Max: 257.18)
  Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better):
  Clear Linux: 3.069 (SE +/- 0.009, N = 3; Min: 3.05 / Avg: 3.07 / Max: 3.08)
  Ubuntu 22.10: 3.004 (SE +/- 0.026, N = 3; Min: 2.97 / Avg: 3.00 / Max: 3.05)
  Clear Linux flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RT.ldr_alb_nrm.3840x2160 (Images / Sec, more is better):
  Clear Linux: 0.55 (SE +/- 0.00, N = 3; Min: 0.55 / Avg: 0.55 / Max: 0.56)
  Ubuntu 22.10: 0.57 (SE +/- 0.00, N = 3; Min: 0.56 / Avg: 0.57 / Max: 0.57)

oneDNN

oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Clear Linux: 1.98941 (SE +/- 0.01965, N = 6; Min: 1.89 / Avg: 1.99 / Max: 2.02; MIN: 1.57)
  Ubuntu 22.10: 1.90430 (SE +/- 0.01885, N = 15; Min: 1.69 / Avg: 1.90 / Max: 2.01; MIN: 1.57)
  Clear Linux flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lpthread
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, fewer is better):
  Clear Linux: 3631.0 (SE +/- 16.40, N = 3; Min: 3609.72 / Avg: 3630.96 / Max: 3663.23; MIN: 3609.72 / MAX: 5141.58)
  Ubuntu 22.10: 4193.8 (SE +/- 48.71, N = 3; Min: 4126.1 / Avg: 4193.81 / Max: 4288.31; MIN: 4126.1 / MAX: 6173.56)

Blender

Blender 3.3 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better):
  Clear Linux: 51.58 (SE +/- 0.31, N = 3; Min: 51.03 / Avg: 51.58 / Max: 52.11)
  Ubuntu 22.10: 51.61 (SE +/- 0.16, N = 3; Min: 51.39 / Avg: 51.61 / Max: 51.91)

Xonotic

This is a benchmark of Xonotic, a fork of the DarkPlaces-based Nexuiz game. Development on Xonotic began in March 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.5 - Resolution: 3840 x 2160 - Effects Quality: Ultra (Frames Per Second, more is better):
  Ubuntu 22.10: 692.06 (SE +/- 2.13, N = 3; MIN: 411 / MAX: 1142)

Xonotic 0.8.5 - Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second, more is better):
  Ubuntu 22.10: 696.93 (SE +/- 1.76, N = 3; MIN: 375 / MAX: 1188)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, fewer is better):
  Clear Linux: 48.37 (SE +/- 0.10, N = 3; Min: 48.21 / Avg: 48.37 / Max: 48.56)
  Ubuntu 22.10: 49.98 (SE +/- 0.10, N = 3; Min: 49.86 / Avg: 49.98 / Max: 50.17)
  1. (CXX) g++ options: -O2 -lOpenCL

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are used for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: channel2 (Seconds, fewer is better):
  Ubuntu 22.10: 29.3
  Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, more is better):
  Clear Linux: 16.63 (SE +/- 0.00, N = 3; Min: 16.63 / Avg: 16.63 / Max: 16.63)
  Ubuntu 22.10: 16.58 (SE +/- 0.00, N = 3; Min: 16.58 / Avg: 16.58 / Max: 16.59)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (GFInst/s, more is better):
  Clear Linux: 415.68 (SE +/- 0.04, N = 3; Min: 415.64 / Avg: 415.68 / Max: 415.75)
  Ubuntu 22.10: 414.59 (SE +/- 0.12, N = 3; Min: 414.45 / Avg: 414.59 / Max: 414.82)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, more is better):
  Ubuntu 22.10: 38.51 (SE +/- 0.08, N = 3; Min: 38.38 / Avg: 38.51 / Max: 38.65)
  Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s, more is better):
  Clear Linux: 2906.94 (SE +/- 4.83, N = 3; Min: 2897.75 / Avg: 2906.94 / Max: 2914.13)
  Ubuntu 22.10: 3049.53 (SE +/- 10.67, N = 3; Min: 3037.56 / Avg: 3049.53 / Max: 3070.81)
  Clear Linux flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  Ubuntu 22.10 flags: -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Clear Linux: 3.2
  3. Ubuntu 22.10: Open MPI 4.1.4

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  Ubuntu 22.10: 236.60 (SE +/- 0.71, N = 3; Min: 235.72 / Avg: 236.6 / Max: 238.02)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  Ubuntu 22.10: 50.40 (SE +/- 0.20, N = 3; Min: 50.05 / Avg: 50.4 / Max: 50.74)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  Ubuntu 22.10: 90.05 (SE +/- 0.21, N = 3; Min: 89.73 / Avg: 90.05 / Max: 90.45)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better):
  Ubuntu 22.10: 11.10 (SE +/- 0.03, N = 3; Min: 11.06 / Avg: 11.1 / Max: 11.14)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  Ubuntu 22.10: 90.48 (SE +/- 0.21, N = 3; Min: 90.25 / Avg: 90.48 / Max: 90.91)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better):
  Ubuntu 22.10: 11.05 (SE +/- 0.03, N = 3; Min: 11 / Avg: 11.05 / Max: 11.08)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: CPU Stress (Bogo Ops/s, more is better):
  Clear Linux: 51871.91 (SE +/- 506.19, N = 6; Min: 50470.92 / Avg: 51871.91 / Max: 54115.62)
  Ubuntu 22.10: 51634.54 (SE +/- 538.64, N = 3; Min: 50935.47 / Avg: 51634.54 / Max: 52693.92)
  Clear Linux flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio
  Ubuntu 22.10 flags: -lapparmor -lsctp
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  Ubuntu 22.10: 24.80 (SE +/- 0.09, N = 3; Min: 24.64 / Avg: 24.8 / Max: 24.94)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better):
  Ubuntu 22.10: 40.32 (SE +/- 0.14, N = 3; Min: 40.1 / Avg: 40.32 / Max: 40.58)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  Ubuntu 22.10: 115.35 (SE +/- 0.26, N = 3; Min: 114.85 / Avg: 115.35 / Max: 115.7)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  Ubuntu 22.10: 103.90 (SE +/- 0.23, N = 3; Min: 103.65 / Avg: 103.9 / Max: 104.36)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile measures the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better):
  Clear Linux: 26.90 (SE +/- 0.04, N = 3; Min: 26.84 / Avg: 26.9 / Max: 26.99)
  Ubuntu 22.10: 26.33 (SE +/- 0.17, N = 3; Min: 26.03 / Avg: 26.33 / Max: 26.61)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better):
  Clear Linux: 0.62153 (SE +/- 0.00153, N = 3; Min: 0.62 / Avg: 0.62 / Max: 0.62)
  Ubuntu 22.10: 0.60982 (SE +/- 0.00121, N = 3; Min: 0.61 / Avg: 0.61 / Max: 0.61)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using the Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.
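
The compression level below maps onto zstd's -19 setting; a rough Python analogue of the measurement using the python-zstandard bindings is sketched here (the sample file path is a placeholder, and this is not the profile's exact CLI invocation).

    # Hedged sketch: time level-19 compression and decompression of a sample file.
    import time
    import zstandard

    data = open("sample.bin", "rb").read()   # placeholder input file

    cctx = zstandard.ZstdCompressor(level=19)
    start = time.perf_counter()
    blob = cctx.compress(data)
    comp_s = time.perf_counter() - start

    dctx = zstandard.ZstdDecompressor()
    start = time.perf_counter()
    dctx.decompress(blob)
    decomp_s = time.perf_counter() - start

    mb = len(data) / 1e6
    print(f"compress: {mb / comp_s:.1f} MB/s, decompress: {mb / decomp_s:.1f} MB/s")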

Zstd Compression - Compression Level: 19 - Decompression Speed (MB/s, more is better):
  Clear Linux: 5127.1 (SE +/- 2.86, N = 3; Min: 5122.3 / Avg: 5127.13 / Max: 5132.2)
  Ubuntu 22.10: 4758.3 (SE +/- 0.18, N = 3; Min: 4758 / Avg: 4758.33 / Max: 4758.6)
  1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

Zstd Compression - Compression Level: 19 - Compression Speed (MB/s, more is better):
  Clear Linux: 85.5 (SE +/- 0.52, N = 3; Min: 84.5 / Avg: 85.53 / Max: 86.1)
  Ubuntu 22.10: 80.5 (SE +/- 1.02, N = 3; Min: 79.1 / Avg: 80.53 / Max: 82.5)
  1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are used for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: mp_prop_design (Seconds, fewer is better):
  Ubuntu 22.10: 25.77
  Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
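
The defconfig build measured here is essentially "make defconfig" followed by a timed parallel make; a hedged sketch:

    # Hedged sketch: time a defconfig kernel build, assuming the Linux 5.18 source
    # tree is the current directory and the toolchain prerequisites are installed.
    import os
    import subprocess
    import time

    subprocess.run(["make", "defconfig"], check=True)

    start = time.perf_counter()
    subprocess.run(["make", f"-j{os.cpu_count()}"], check=True)
    print(f"build time: {time.perf_counter() - start:.1f} seconds")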

Timed Linux Kernel Compilation 5.18 - Build: defconfig (Seconds, fewer is better):
  Clear Linux: 42.70 (SE +/- 0.40, N = 3; Min: 42.1 / Avg: 42.7 / Max: 43.45)
  Ubuntu 22.10: 41.40 (SE +/- 0.39, N = 3; Min: 40.95 / Avg: 41.4 / Max: 42.18)

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D (Total Mop/s, more is better):
  Clear Linux: 1304.96 (SE +/- 16.11, N = 4; Min: 1269.67 / Avg: 1304.96 / Max: 1345.22)
  Ubuntu 22.10: 1263.16 (SE +/- 15.74, N = 4; Min: 1233.47 / Avg: 1263.16 / Max: 1306.15)
  Clear Linux flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  Ubuntu 22.10 flags: -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Clear Linux: 3.2
  3. Ubuntu 22.10: Open MPI 4.1.4

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  Ubuntu 22.10: 159.94 (SE +/- 0.49, N = 3; Min: 159.07 / Avg: 159.94 / Max: 160.76)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  Ubuntu 22.10: 74.69 (SE +/- 0.23, N = 3; Min: 74.31 / Avg: 74.69 / Max: 75.11)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  Ubuntu 22.10: 31.07 (SE +/- 0.23, N = 3; Min: 30.64 / Avg: 31.07 / Max: 31.44)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, more is better):
  Ubuntu 22.10: 32.18 (SE +/- 0.24, N = 3; Min: 31.8 / Avg: 32.18 / Max: 32.63)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 96000 - Buffer Size: 512 (Render Ratio, more is better):
  Ubuntu 22.10: 4.818425 (SE +/- 0.002423, N = 3; Min: 4.82 / Avg: 4.82 / Max: 4.82)
  Clear Linux: The test quit with a non-zero exit status. E: stargate: line 40: ./engine/stargate-engine: No such file or directory
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 96000 - Buffer Size: 1024 (Render Ratio, more is better):
  Ubuntu 22.10: 4.854695 (SE +/- 0.001939, N = 3; Min: 4.85 / Avg: 4.85 / Max: 4.86)
  Clear Linux: The test quit with a non-zero exit status. E: stargate: line 40: ./engine/stargate-engine: No such file or directory
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, more is better):
  Clear Linux: 51989.89 (SE +/- 537.24, N = 3; Min: 51100.05 / Avg: 51989.89 / Max: 52956.38)
  Ubuntu 22.10: 53311.89 (SE +/- 218.90, N = 3; Min: 53060.04 / Avg: 53311.89 / Max: 53747.94)
  Clear Linux flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  Ubuntu 22.10 flags: -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Clear Linux: 3.2
  3. Ubuntu 22.10: Open MPI 4.1.4

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
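
The "Live" scenario below is a latency-sensitive transcode; as a rough standalone analogue, an x265 software encode can be timed as in this sketch (the input filename and preset are placeholders, not the vbench parameters used by the profile).

    # Hedged sketch: time an x265 software transcode of a sample clip with FFmpeg.
    import subprocess
    import time

    cmd = [
        "ffmpeg", "-y",
        "-i", "input.mp4",    # placeholder source clip
        "-c:v", "libx265",    # the encoder exercised by this test
        "-preset", "fast",    # placeholder preset
        "-f", "null", "-",    # discard output; measure encode speed only
    ]
    start = time.perf_counter()
    subprocess.run(cmd, check=False)
    print(f"encode time: {time.perf_counter() - start:.2f} seconds")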

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Live (FPS, more is better):
  Clear Linux: 193.28 (SE +/- 0.25, N = 3; Min: 192.89 / Avg: 193.28 / Max: 193.73)
  Ubuntu 22.10: 182.08 (SE +/- 0.41, N = 3; Min: 181.4 / Avg: 182.08 / Max: 182.81)
  Clear Linux flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Live (Seconds, fewer is better):
  Clear Linux: 26.13 (SE +/- 0.03, N = 3; Min: 26.07 / Avg: 26.13 / Max: 26.18)
  Ubuntu 22.10: 27.74 (SE +/- 0.06, N = 3; Min: 27.62 / Avg: 27.74 / Max: 27.84)
  Clear Linux flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  Ubuntu 22.10: 12.49 (SE +/- 0.01, N = 3; Min: 12.48 / Avg: 12.49 / Max: 12.51)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, more is better):
  Ubuntu 22.10: 80.03 (SE +/- 0.05, N = 3; Min: 79.94 / Avg: 80.03 / Max: 80.12)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using the Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better):
  Clear Linux: 5235.9 (SE +/- 2.45, N = 3; Min: 5232.1 / Avg: 5235.93 / Max: 5240.5)
  Ubuntu 22.10: 4887.7 (SE +/- 9.98, N = 3; Min: 4869.3 / Avg: 4887.67 / Max: 4903.6)
  1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

Zstd Compression - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better):
  Clear Linux: 56.4 (SE +/- 0.10, N = 3; Min: 56.3 / Avg: 56.4 / Max: 56.6)
  Ubuntu 22.10: 50.9 (SE +/- 0.40, N = 3; Min: 50.5 / Avg: 50.9 / Max: 51.7)
  1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  Ubuntu 22.10: 78.24 (SE +/- 0.05, N = 3; Min: 78.14 / Avg: 78.24 / Max: 78.31)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  Ubuntu 22.10: 153.13 (SE +/- 0.21, N = 3; Min: 152.82 / Avg: 153.13 / Max: 153.53)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  Ubuntu 22.10: 17.59 (SE +/- 0.05, N = 3; Min: 17.52 / Avg: 17.59 / Max: 17.68)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better):
  Ubuntu 22.10: 56.83 (SE +/- 0.15, N = 3; Min: 56.53 / Avg: 56.83 / Max: 57.05)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  Ubuntu 22.10: 10.35 (SE +/- 0.01, N = 3; Min: 10.33 / Avg: 10.35 / Max: 10.38)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better):
  Ubuntu 22.10: 96.59 (SE +/- 0.14, N = 3; Min: 96.31 / Avg: 96.59 / Max: 96.74)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

oneDNN

oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
    Clear Linux: 1.084460 (SE +/- 0.015557, N = 3; min 1.06 / max 1.11; MIN: 0.83)
    Ubuntu 22.10: 1.492836 (SE +/- 0.090916, N = 15; min 0.99 / max 2.01; MIN: 0.85)
    Notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lpthread; 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating the test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
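
As a rough illustration only, the sketch below shows the kind of PySpark operations this profile times (group-by, repartition, and an inner join over generated rows). The row count, column names, and script name are illustrative assumptions, not the pyspark-benchmark defaults; the real profile drives its own scripts via spark-submit.

    # illustrative_spark_ops.py - hedged sketch of the timed operations, not the actual test profile
    import time
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("illustrative-benchmark").getOrCreate()

    # Generate a small synthetic DataFrame (the real benchmark generates its data up front).
    df = spark.range(0, 1_000_000).withColumn("group_key", F.col("id") % 100)

    start = time.time()
    df.groupBy("group_key").count().collect()                 # Group By style test
    df.repartition(100).count()                               # Repartition style test
    df.join(df.select("id"), on="id", how="inner").count()    # Inner Join style test
    print(f"elapsed: {time.time() - start:.2f} s")

    spark.stop()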

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, fewer is better)
    Ubuntu 22.10: 1.99 (SE +/- 0.01, N = 3; min 1.97 / max 2.01)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, fewer is better)
    Ubuntu 22.10: 0.77 (SE +/- 0.02, N = 3; min 0.72 / max 0.8)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Inner Join Test Time (Seconds, fewer is better)
    Ubuntu 22.10: 0.93 (SE +/- 0.03, N = 3; min 0.86 / max 0.97)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Repartition Test Time (Seconds, fewer is better)
    Ubuntu 22.10: 1.02 (SE +/- 0.01, N = 3; min 1.01 / max 1.04)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better)
    Ubuntu 22.10: 3.45 (SE +/- 0.03, N = 3; min 3.41 / max 3.5)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Group By Test Time (Seconds, fewer is better)
    Ubuntu 22.10: 2.63 (SE +/- 0.02, N = 3; min 2.6 / max 2.68)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, fewer is better)
    Ubuntu 22.10: 51.76 (SE +/- 0.18, N = 3; min 51.42 / max 52.01)

Clear Linux (all Apache Spark tests): The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks the CPU's Chia VDF performance using the Chia VDF benchmark. The Chia VDF is the Chia Verifiable Delay Function (Proof of Time). Learn more via the OpenBenchmarking.org test page.

Chia Blockchain VDF 1.0.7 - Test: Square Plain C++ (IPS, more is better)
    Ubuntu 22.10: 252933 (SE +/- 133.33, N = 3; min 252800 / max 253200)
    Clear Linux: The test quit with a non-zero exit status. E: chia-vdf: line 3: ./src/vdf_bench: No such file or directory
    1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
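
As a rough illustration of how a single stressor can be exercised outside the test profile, the sketch below shells out to stress-ng for the memory-copying stressor; the 30-second timeout and worker count are arbitrary assumptions, not necessarily what the pts/stress-ng profile uses.

    # run_memcpy_stressor.py - hedged sketch, assumes the stress-ng binary is on PATH
    import subprocess

    result = subprocess.run(
        ["stress-ng", "--memcpy", "0",        # 0 = one worker per online CPU
         "--timeout", "30s",                  # run the stressor for 30 seconds
         "--metrics-brief"],                  # print bogo-ops/s at the end
        capture_output=True, text=True, check=True)
    print(result.stderr or result.stdout)     # stress-ng writes its metrics to stderr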

Stress-NG 0.14.06 - Test: Memory Copying (Bogo Ops/s, more is better)
    Clear Linux: 9212.06 (SE +/- 105.79, N = 4; min 9064.19 / max 9524.33)
    Ubuntu 22.10: 7385.39 (SE +/- 10.70, N = 3; min 7373.14 / max 7406.71)
    Notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -lapparmor -lsctp; 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks the CPU's Chia VDF performance using the Chia VDF benchmark. The Chia VDF is the Chia Verifiable Delay Function (Proof of Time). Learn more via the OpenBenchmarking.org test page.

Chia Blockchain VDF 1.0.7 - Test: Square Assembly Optimized (IPS, more is better)
    Ubuntu 22.10: 269067 (SE +/- 218.58, N = 3; min 268800 / max 269500)
    Clear Linux: The test quit with a non-zero exit status. E: chia-vdf: line 3: ./src/vdf_bench: No such file or directory
    1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.
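
For reference, here is a hedged sketch of invoking the upstream tf_cnn_benchmarks.py script for a CPU GoogLeNet run similar to the configurations below; the path to the benchmarks checkout and the batch count are assumptions, and the exact flags the pts/tensorflow profile passes may differ.

    # run_tf_cnn_benchmark.py - assumes tensorflow and a clone of tensorflow/benchmarks are available
    import subprocess

    subprocess.run([
        "python", "benchmarks/scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py",
        "--device=cpu",
        "--data_format=NHWC",     # CPU runs typically use NHWC
        "--model=googlenet",
        "--batch_size=64",
        "--num_batches=100",      # assumed iteration count for illustration
    ], check=True)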

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, more is better)
    Ubuntu 22.10: 111.20 (SE +/- 0.22, N = 3; min 110.79 / max 111.55)
    Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, fewer is better)
    Clear Linux: 31.12 (SE +/- 0.09, N = 3; min 31.02 / max 31.3)
    Ubuntu 22.10: 32.35 (SE +/- 0.11, N = 3; min 32.23 / max 32.57)
    1. Clear Linux: RawTherapee, version , command line. 2. Ubuntu 22.10: RawTherapee, version 5.8, command line.

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
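
A hedged sketch of running one of the micro-benchmarks below directly with the pyperformance CLI; the output file name is an arbitrary assumption and the test profile may pass different options.

    # run_pyperformance.py - assumes "pip install pyperformance" has been done
    import subprocess

    subprocess.run(
        ["pyperformance", "run",
         "--benchmarks", "django_template",   # any benchmark name from the suite
         "-o", "result.json"],                # write results to a JSON file
        check=True)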

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, fewer is better)
    Clear Linux: 17.3 (SE +/- 0.03, N = 3; min 17.2 / max 17.3)
    Ubuntu 22.10: 22.2 (SE +/- 0.06, N = 3; min 22.1 / max 22.3)

Tesseract

Tesseract is a fork of Cube 2 Sauerbraten with numerous graphics and game-play improvements. Tesseract has been in development since 2012 while its first release happened in May of 2014. Learn more via the OpenBenchmarking.org test page.

Tesseract 2014-05-12 - Resolution: 1920 x 1080 (Frames Per Second, more is better)
    Clear Linux: 998.32 (SE +/- 0.97, N = 3; min 996.64 / max 1000)
    Ubuntu 22.10: 999.44 (SE +/- 0.56, N = 3; min 998.33 / max 1000)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing (Seconds, fewer is better)
    Clear Linux: 0.004 (SE +/- 0.000, N = 15; min 0 / max 0)
    Ubuntu 22.10: 0.004 (SE +/- 0.000, N = 15; min 0 / max 0.01)

Tesseract

Tesseract is a fork of Cube 2 Sauerbraten with numerous graphics and game-play improvements. Tesseract has been in development since 2012 while its first release happened in May of 2014. Learn more via the OpenBenchmarking.org test page.

Tesseract 2014-05-12 - Resolution: 3840 x 2160 (Frames Per Second, more is better)
    Clear Linux: 909.87 (SE +/- 3.78, N = 3; min 904.68 / max 917.22)
    Ubuntu 22.10: 896.02 (SE +/- 3.57, N = 3; min 891.3 / max 903.01)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
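
A minimal sketch of how speedtest1 can be driven at this problem size; the binary location is an assumption (the test profile builds speedtest1 from the SQLite sources), and the timing wrapper is only illustrative.

    # run_speedtest1.py - assumes a locally built speedtest1 binary in the current directory
    import subprocess, time

    start = time.time()
    subprocess.run(["./speedtest1", "--size", "1000"], check=True)  # problem size 1,000 as used here
    print(f"elapsed: {time.time() - start:.2f} s")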

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better)
    Clear Linux: 29.76 (SE +/- 0.02, N = 3; min 29.73 / max 29.8)
    Ubuntu 22.10: 32.81 (SE +/- 0.03, N = 3; min 32.77 / max 32.87)
    Notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; -O2; 1. (CC) gcc options: -lz

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: x86_64 RdRand (Bogo Ops/s, more is better)
    Clear Linux: 82742.79 (SE +/- 4.13, N = 3; min 82737.28 / max 82750.87)
    Ubuntu 22.10: 82767.76 (SE +/- 9.58, N = 3; min 82753.9 / max 82786.15)
    Notes: same build flags / gcc options as the other Stress-NG results.

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: test_fpu2 (Seconds, fewer is better)
    Ubuntu 22.10: 13.99
    Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: System V Message Passing (Bogo Ops/s, more is better)
    Clear Linux: 38506364.42 (SE +/- 11801.23, N = 3; min 38484180.99 / max 38524436.73)
    Ubuntu 22.10: 13432520.51 (SE +/- 181986.71, N = 3; min 13247041.77 / max 13796471.44)
    Notes: same build flags / gcc options as the other Stress-NG results.

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
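
A hedged sketch of benchmarking one algorithm directly with the cpuminer binary; the thread count, algorithm, and time limit are illustrative assumptions, and the test profile drives cpuminer-opt with its own parameters.

    # bench_cpuminer.py - assumes a built cpuminer-opt binary named "cpuminer" in the current directory
    import os, subprocess

    subprocess.run(
        ["./cpuminer",
         "--benchmark",                   # offline benchmark mode, no pool connection
         "-a", "scrypt",                  # algorithm to hash
         "-t", str(os.cpu_count() or 1),  # one mining thread per logical CPU
         "--time-limit", "60"],           # stop after roughly 60 seconds
        check=True)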

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, more is better)
    Clear Linux: 1138.85 (SE +/- 0.85, N = 3; min 1137.48 / max 1140.41)
    Ubuntu 22.10: 1128.00 (SE +/- 4.19, N = 3; min 1121.76 / max 1135.96)

Cpuminer-Opt 3.20.3 - Algorithm: Garlicoin (kH/s, more is better)
    Clear Linux: 3275.84 (SE +/- 41.78, N = 3; min 3234.02 / max 3359.41)
    Ubuntu 22.10: 3496.70 (SE +/- 37.34, N = 3; min 3437.06 / max 3565.44)
    Notes (all Cpuminer-Opt results): -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; -O2; 1. (CXX) g++ options: -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Forking (Bogo Ops/s, more is better)
    Clear Linux: 104947.27 (SE +/- 1444.59, N = 3; min 103307.95 / max 107827.26)
    Ubuntu 22.10: 113514.43 (SE +/- 721.90, N = 3; min 112319.45 / max 114813.67)
    Notes: same build flags / gcc options as the other Stress-NG results.

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, more is better)
    Clear Linux: 337.82 (SE +/- 0.53, N = 3; min 336.83 / max 338.63)
    Ubuntu 22.10: 333.58 (SE +/- 3.47, N = 3; min 328.57 / max 340.24)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: MMAP (Bogo Ops/s, more is better)
    Clear Linux: 798.15 (SE +/- 2.00, N = 3; min 794.23 / max 800.78)
    Ubuntu 22.10: 742.41 (SE +/- 1.45, N = 3; min 739.64 / max 744.51)

Stress-NG 0.14.06 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, more is better)
    Clear Linux: 457.79 (SE +/- 2.71, N = 3; min 454.7 / max 463.19)
    Ubuntu 22.10: 411.39 (SE +/- 0.68, N = 3; min 410.43 / max 412.7)

Stress-NG 0.14.06 - Test: IO_uring (Bogo Ops/s, more is better)
    Clear Linux: 27808.53 (SE +/- 26.25, N = 3; min 27781.76 / max 27861.03)
    Ubuntu 22.10: 27676.33 (SE +/- 56.03, N = 3; min 27597.48 / max 27784.71)

Stress-NG 0.14.06 - Test: NUMA (Bogo Ops/s, more is better)
    Clear Linux: 706.16 (SE +/- 2.32, N = 3; min 702.46 / max 710.43)
    Ubuntu 22.10: 681.80 (SE +/- 1.84, N = 3; min 679.38 / max 685.4)

Stress-NG 0.14.06 - Test: Malloc (Bogo Ops/s, more is better)
    Clear Linux: 47064332.51 (SE +/- 102140.32, N = 3; min 46893674.74 / max 47246897.61)
    Ubuntu 22.10: 36241645.59 (SE +/- 149024.22, N = 3; min 35957154 / max 36460852.78)

Stress-NG 0.14.06 - Test: SENDFILE (Bogo Ops/s, more is better)
    Clear Linux: 595183.17 (SE +/- 8502.15, N = 3; min 583000.57 / max 611548.13)
    Ubuntu 22.10: 588014.94 (SE +/- 3074.02, N = 3; min 583907.59 / max 594030.43)

Stress-NG 0.14.06 - Test: MEMFD (Bogo Ops/s, more is better)
    Clear Linux: 2343.51 (SE +/- 0.73, N = 3; min 2342.73 / max 2344.96)
    Ubuntu 22.10: 2049.37 (SE +/- 18.36, N = 3; min 2014.86 / max 2077.51)

Stress-NG 0.14.06 - Test: Matrix Math (Bogo Ops/s, more is better)
    Clear Linux: 110071.38 (SE +/- 1098.74, N = 3; min 108607.94 / max 112222.77)
    Ubuntu 22.10: 109789.42 (SE +/- 588.05, N = 3; min 108660.63 / max 110639.8)

Stress-NG 0.14.06 - Test: Semaphores (Bogo Ops/s, more is better)
    Clear Linux: 3426675.99 (SE +/- 280.21, N = 3; min 3426203.65 / max 3427173.35)
    Ubuntu 22.10: 3538392.41 (SE +/- 1451.53, N = 3; min 3536780.97 / max 3541289.37)

    Notes (all Stress-NG results): -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -lapparmor -lsctp; 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, fewer is better)
    Ubuntu 22.10: 27.34

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, fewer is better)
    Ubuntu 22.10: 151.34

Clear Linux (both OpenFOAM tests): The test quit with a non-zero exit status. E: cat: log.simpleFoam: No such file or directory
    1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, more is better)
    Clear Linux: 18870 (SE +/- 96.09, N = 3; min 18750 / max 19060)
    Ubuntu 22.10: 18520 (SE +/- 26.46, N = 3; min 18470 / max 18560)

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, more is better)
    Clear Linux: 1061.90 (SE +/- 7.74, N = 3; min 1052.08 / max 1077.18)
    Ubuntu 22.10: 1176.02 (SE +/- 4.42, N = 3; min 1169.91 / max 1184.6)

Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, more is better)
    Clear Linux: 53897 (SE +/- 40.96, N = 3; min 53820 / max 53960)
    Ubuntu 22.10: 52550 (SE +/- 141.89, N = 3; min 52270 / max 52730)

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, more is better)
    Clear Linux: 200287 (SE +/- 170.33, N = 3; min 199990 / max 200580)
    Ubuntu 22.10: 198680 (SE +/- 120.14, N = 3; min 198440 / max 198810)

Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, more is better)
    Clear Linux: 436663 (SE +/- 486.39, N = 3; min 435750 / max 437410)
    Ubuntu 22.10: 434460 (SE +/- 3729.11, N = 3; min 427330 / max 439920)

Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s, more is better)
    Clear Linux: 5020.19 (SE +/- 20.28, N = 3; min 4986.6 / max 5056.66)
    Ubuntu 22.10: 5423.94 (SE +/- 21.91, N = 3; min 5380.23 / max 5448.35)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, fewer is better)
    Clear Linux: 130 (SE +/- 0.00, N = 3; min 130 / max 130)
    Ubuntu 22.10: 208 (SE +/- 0.88, N = 3; min 207 / max 210)

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 44100 - Buffer Size: 512 (Render Ratio, more is better)
    Ubuntu 22.10: 6.145353 (SE +/- 0.006537, N = 3; min 6.13 / max 6.15)
    Clear Linux: The test quit with a non-zero exit status. E: stargate: line 40: ./engine/stargate-engine: No such file or directory
    1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
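
A hedged sketch of timing a comparable Rust release build from a Wasmer checkout; the repository path is an assumption and the exact feature flags the test profile enables are not spelled out here, so this only illustrates the shape of a timed cargo build rather than the profile's actual build command.

    # time_wasmer_build.py - assumes a local clone of https://github.com/wasmerio/wasmer and cargo on PATH
    import subprocess, time

    start = time.time()
    subprocess.run(["cargo", "build", "--release"], cwd="wasmer", check=True)
    print(f"Time To Compile: {time.time() - start:.2f} s")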

Timed Wasmer Compilation 2.3 - Time To Compile (Seconds, fewer is better)
    Clear Linux: 26.99 (SE +/- 0.32, N = 3; min 26.38 / max 27.46)
    Ubuntu 22.10: 30.25 (SE +/- 0.20, N = 3; min 29.85 / max 30.53)
    1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, more is better)
    Ubuntu 22.10: 199.2 (SE +/- 0.71, N = 3; min 198.3 / max 200.6)

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, more is better)
    Ubuntu 22.10: 677.5 (SE +/- 5.48, N = 3; min 669.3 / max 687.9)

Clear Linux (both results): The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory
    1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 44100 - Buffer Size: 1024 (Render Ratio, more is better)
    Ubuntu 22.10: 6.422835 (SE +/- 0.011125, N = 3; min 6.4 / max 6.44)
    Clear Linux: The test quit with a non-zero exit status. E: stargate: line 40: ./engine/stargate-engine: No such file or directory
    1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, fewer is better)
    Clear Linux: 32.4 (SE +/- 0.00, N = 3; min 32.4 / max 32.4)
    Ubuntu 22.10: 46.7 (SE +/- 0.10, N = 3; min 46.6 / max 46.9)

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, fewer is better)
    Clear Linux: 32.1 (SE +/- 0.06, N = 3; min 32 / max 32.2)
    Ubuntu 22.10: 44.9 (SE +/- 0.13, N = 3; min 44.6 / max 45)

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, fewer is better)
    Clear Linux: 63.5 (SE +/- 0.09, N = 3; min 63.4 / max 63.7)
    Ubuntu 22.10: 82.1 (SE +/- 0.30, N = 3; min 81.7 / max 82.7)

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
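
A hedged sketch of a libx264 transcode of the same general shape as the "Live" scenario measured below; the input file name, preset, and bitrate are illustrative assumptions rather than the exact vbench settings.

    # transcode_live.py - assumes ffmpeg with libx264 support is on PATH and input.y4m exists
    import subprocess, time

    start = time.time()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.y4m",
         "-c:v", "libx264",
         "-preset", "veryfast",        # live-style encodes favor fast presets
         "-b:v", "4M",
         "-f", "null", "/dev/null"],   # discard the encoded output; only the encode speed matters
        check=True)
    print(f"encode time: {time.time() - start:.2f} s")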

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Live (FPS, more is better)
    Clear Linux: 358.59 (SE +/- 0.12, N = 3; min 358.42 / max 358.83)
    Ubuntu 22.10: 353.24 (SE +/- 0.46, N = 3; min 352.76 / max 354.15)

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Live (Seconds, fewer is better)
    Clear Linux: 14.08 (SE +/- 0.00, N = 3; min 14.07 / max 14.09)
    Ubuntu 22.10: 14.30 (SE +/- 0.02, N = 3; min 14.26 / max 14.32)
    Notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, fewer is better)
    Clear Linux: 135 (SE +/- 0.00, N = 3; min 135 / max 135)
    Ubuntu 22.10: 197 (SE +/- 0.67, N = 3; min 196 / max 198)

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, fewer is better)
    Clear Linux: 122 (SE +/- 0.00, N = 3; min 122 / max 122)
    Ubuntu 22.10: 158 (SE +/- 0.58, N = 3; min 157 / max 159)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
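
For orientation, a hedged sketch of an equivalent cwebp invocation for the "Quality 100, Lossless, Highest Compression" settings below; the input file name is an assumption.

    # encode_webp.py - assumes the cwebp utility from libwebp is on PATH
    import subprocess

    subprocess.run(
        ["cwebp",
         "-q", "100",          # quality 100
         "-lossless",          # lossless mode
         "-m", "6",            # compression method 6 = highest/slowest
         "sample_6000x4000.jpg", "-o", "out.webp"],
        check=True)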

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless, Highest Compression (MP/s, more is better)
    Clear Linux: 0.97 (SE +/- 0.00, N = 3; min 0.97 / max 0.97)
    Ubuntu 22.10: 0.91 (SE +/- 0.00, N = 3; min 0.91 / max 0.92)
    Notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; -O2; 1. (CC) gcc options: -fvisibility=hidden -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, fewer is better)
    Clear Linux: 41.5 (SE +/- 0.25, N = 3; min 41.2 / max 42)
    Ubuntu 22.10: 61.4 (SE +/- 0.32, N = 3; min 60.9 / max 62)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.
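
A hedged sketch of launching the H2 workload from the DaCapo jar directly; the jar file name matches the 9.12-MR1 ("bach") release but should be treated as an assumption, as should the iteration count.

    # run_dacapo_h2.py - assumes a JDK on PATH and the DaCapo jar in the current directory
    import subprocess

    subprocess.run(
        ["java", "-jar", "dacapo-9.12-MR1-bach.jar",
         "-n", "5",      # number of iterations (warm-up plus measured run)
         "h2"],          # the H2 in-memory database workload
        check=True)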

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, fewer is better)
    Clear Linux: 1435 (SE +/- 26.90, N = 20; min 1181 / max 1626)
    Ubuntu 22.10: 2091 (SE +/- 34.76, N = 20; min 1851 / max 2505)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, fewer is better)
    Clear Linux: 7.35 (SE +/- 0.01, N = 3; min 7.33 / max 7.36)
    Ubuntu 22.10: 8.77 (SE +/- 0.02, N = 3; min 8.73 / max 8.8)

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, fewer is better)
    Clear Linux: 67.6 (SE +/- 0.03, N = 3; min 67.5 / max 67.6)
    Ubuntu 22.10: 112.0 (SE +/- 0.33, N = 3; min 112 / max 113)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, fewer is better)
    Clear Linux: 6.227 (SE +/- 0.052, N = 15; min 5.96 / max 6.47)
    Ubuntu 22.10: 6.168 (SE +/- 0.053, N = 8; min 6.09 / max 6.53)
    1. (CXX) g++ options: -O2 -lOpenCL

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, more is better)
    Ubuntu 22.10: 182.0 (SE +/- 0.40, N = 3; min 181.2 / max 182.5)

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, more is better)
    Ubuntu 22.10: 624.9 (SE +/- 2.64, N = 3; min 619.6 / max 627.8)

Clear Linux (both results): The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory
    1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, more is better)
    Ubuntu 22.10: 39.70 (SE +/- 0.32, N = 3; min 39.07 / max 40.04)
    Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, fewer is better)
    Clear Linux: 7.732 (SE +/- 0.094, N = 15; min 7.33 / max 8.24)
    Ubuntu 22.10: 7.297 (SE +/- 0.015, N = 3; min 7.27 / max 7.32)
    1. (CXX) g++ options: -O2 -lOpenCL

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: OFDM_Test (Samples / Second, more is better)
    Ubuntu 22.10: 195600000 (SE +/- 360555.13, N = 3; min 194900000 / max 196100000)
    Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/src/phy/dft/test/ofdm_test: No such file or directory
    1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: induct2 (Seconds, fewer is better)
    Ubuntu 22.10: 11.07
    Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark Bayes (ms, fewer is better)
    Clear Linux: 660.9 (SE +/- 3.29, N = 3; min 654.95 / max 666.31; reported MIN: 480.28 / MAX: 666.31)
    Ubuntu 22.10: 693.8 (SE +/- 1.16, N = 3; min 692.33 / max 696.1; reported MIN: 500.78 / MAX: 696.1)

EnCodec

EnCodec is a Facebook/Meta-developed AI method for compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using this novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and the performance measurement is the time taken to encode the EnCodec file from WAV. Learn more via the OpenBenchmarking.org test page.
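
A hedged sketch of encoding a WAV file with the encodec Python package at one of the target bandwidths below; the 24 kHz model entry point and the input file name are assumptions based on the upstream project, not on how this test profile invokes the encodec CLI.

    # encode_wav.py - assumes "pip install encodec" and torchaudio are available
    import torchaudio
    from encodec import EncodecModel
    from encodec.utils import convert_audio

    model = EncodecModel.encodec_model_24khz()
    model.set_target_bandwidth(6.0)                     # 6 kbps target, as in one result below

    wav, sr = torchaudio.load("jfk_speech.wav")
    wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)
    encoded_frames = model.encode(wav)                  # list of (codes, scale) frames
    print(sum(codes.numel() for codes, _ in encoded_frames), "codes produced")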

EnCodec 0.1.1 - Target Bandwidth: 24 kbps (Seconds, fewer is better)
    Ubuntu 22.10: 21.86 (SE +/- 0.24, N = 3; min 21.43 / max 22.27)
    Clear Linux: The test quit with a non-zero exit status. E: encodec: line 2: /.local/bin/encodec: No such file or directory

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, fewer is better)
    Clear Linux: 35.4 (SE +/- 0.03, N = 3; min 35.3 / max 35.4)
    Ubuntu 22.10: 50.7 (SE +/- 0.15, N = 3; min 50.5 / max 51)

spaCy

The spaCy library is an open-source Python solution for advanced neural language processing (NLP). This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
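
A minimal sketch of the tokens-per-second style of measurement reported below, using the spaCy API directly; the sample text and document count are illustrative assumptions.

    # spacy_throughput.py - assumes "pip install spacy" and the en_core_web_lg model are installed
    import time
    import spacy

    nlp = spacy.load("en_core_web_lg")
    texts = ["The quick brown fox jumps over the lazy dog."] * 1000

    start = time.time()
    token_count = sum(len(doc) for doc in nlp.pipe(texts))
    elapsed = time.time() - start
    print(f"{token_count / elapsed:.0f} tokens/sec")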

spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, more is better)
    Ubuntu 22.10: 20855 (SE +/- 31.07, N = 3; min 20795 / max 20899)

spaCy 3.4.1 - Model: en_core_web_trf (tokens/sec, more is better)
    Ubuntu 22.10: 2523 (SE +/- 23.73, N = 3; min 2484 / max 2566)

Clear Linux (both models): The test quit with a non-zero exit status. E: ValueError: 'in' is not a valid parameter name

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, fewer is better)
    Clear Linux: 11.1 (SE +/- 0.00, N = 3; min 11.1 / max 11.1)
    Ubuntu 22.10: 11.5 (SE +/- 0.00, N = 3; min 11.5 / max 11.5)

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
    Clear Linux: 1003120000 (SE +/- 11825345.66, N = 3; min 984920000 / max 1025300000)
    Ubuntu 22.10: 859746667 (SE +/- 10982623.14, N = 3; min 840550000 / max 878590000)
    Notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest (ms, fewer is better)
    Clear Linux: 357.8 (SE +/- 3.01, N = 3; min 353.98 / max 363.71; reported MIN: 334.23 / MAX: 408.79)
    Ubuntu 22.10: 384.5 (SE +/- 0.53, N = 3; min 383.5 / max 385.26; reported MIN: 357.94 / MAX: 465.15)

EnCodec

EnCodec is a Facebook/Meta-developed AI method for compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using this novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and the performance measurement is the time taken to encode the EnCodec file from WAV. Learn more via the OpenBenchmarking.org test page.

EnCodec 0.1.1 - Target Bandwidth: 6 kbps (Seconds, fewer is better)
    Ubuntu 22.10: 19.30 (SE +/- 0.23, N = 3; min 18.85 / max 19.6)

EnCodec 0.1.1 - Target Bandwidth: 3 kbps (Seconds, fewer is better)
    Ubuntu 22.10: 19.18 (SE +/- 0.17, N = 3; min 18.84 / max 19.35)

EnCodec 0.1.1 - Target Bandwidth: 1.5 kbps (Seconds, fewer is better)
    Ubuntu 22.10: 18.55 (SE +/- 0.18, N = 3; min 18.23 / max 18.86)

Clear Linux (all EnCodec tests): The test quit with a non-zero exit status. E: encodec: line 2: /.local/bin/encodec: No such file or directory

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K (frames per second, more is better): Clear Linux 146.71; Ubuntu 22.10 142.19.

Renaissance

Renaissance is a suite of benchmarks designed to exercise the Java Virtual Machine (JVM), with workloads ranging from Apache Spark to a Twitter-like service, Scala, and other features. Learn more via the OpenBenchmarking.org test page.
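
As a rough illustration of how a Renaissance workload is launched, the sketch below runs the in-memory database shootout from the release jar via Python; the jar file name and the "db-shootout" benchmark identifier are assumptions based on the usual 0.14 release naming, and the Phoronix test profile wraps an equivalent Java invocation.

    import subprocess

    # Run one Renaissance workload from the benchmark jar (names assumed).
    subprocess.run(
        ["java", "-jar", "renaissance-gpl-0.14.0.jar", "db-shootout"],
        check=True,
    )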

Renaissance 0.14 - Test: In-Memory Database Shootout (ms, fewer is better): Ubuntu 22.10 1957.9.

Test: In-Memory Database Shootout

Clear Linux: The test run did not produce a result.

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
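
Once built, an NPB MPI problem such as CG class C is typically launched along these lines; the binary name follows the stock NPB-MPI convention and the rank count is a placeholder, not necessarily what the test profile uses.

    import subprocess

    # Launch the Conjugate Gradient kernel, class C, across 32 MPI ranks.
    # "bin/cg.C.x" is the conventional NPB-MPI binary name; adjust as needed.
    subprocess.run(["mpirun", "-np", "32", "./bin/cg.C.x"], check=True)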

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, more is better): Clear Linux 8580.50; Ubuntu 22.10 8583.03.

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (frames per second, more is better): Clear Linux 201.71; Ubuntu 22.10 202.12.

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
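
For orientation, a sketch of the kind of tf_cnn_benchmarks invocation this profile performs; the flags shown are the commonly documented ones and are assumptions about the exact arguments the test profile passes.

    import subprocess

    # CPU GoogLeNet run with batch size 32, mirroring the configuration below.
    subprocess.run(
        [
            "python", "tf_cnn_benchmarks.py",
            "--device=cpu",
            "--model=googlenet",
            "--batch_size=32",
            "--num_batches=100",  # iteration count is an assumption
        ],
        check=True,
    )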

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, more is better): Ubuntu 22.10 113.16.

Device: CPU - Batch Size: 32 - Model: GoogLeNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Total Mop/s, more is better): Clear Linux 21,769.33; Ubuntu 22.10 22,786.60.

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.
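
These tests amount to compiling each Fortran source with the system compiler and timing the resulting binary; a hedged sketch of that pattern follows, where the source file name and compiler flags are illustrative rather than the profile's exact settings.

    import subprocess
    import time

    # Compile one Polyhedron kernel with aggressive optimization (illustrative).
    subprocess.run(
        ["gfortran", "-O3", "-march=native", "rnflow.f90", "-o", "rnflow"],
        check=True,
    )

    # Time the run; the benchmark reports elapsed seconds, fewer is better.
    start = time.perf_counter()
    subprocess.run(["./rnflow"], check=True)
    print(f"{time.perf_counter() - start:.2f} s")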

Polyhedron Fortran Benchmarks - Benchmark: rnflow (seconds, fewer is better): Ubuntu 22.10 9.54.

Benchmark: rnflow

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 3840 x 2160 - Effects Quality: Ultra (frames per second, more is better): Clear Linux 722.6; Ubuntu 22.10 664.7.

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: doduc (seconds, fewer is better): Ubuntu 22.10 3.38.

Benchmark: doduc

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Ultra (frames per second, more is better): Clear Linux 713.0; Ubuntu 22.10 671.3.

Unvanquished 0.53 - Resolution: 3840 x 2160 - Effects Quality: High (frames per second, more is better): Clear Linux 713.9; Ubuntu 22.10 665.9.

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, more is better): Ubuntu 22.10 235.17.

Device: CPU - Batch Size: 64 - Model: AlexNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: High (frames per second, more is better): Clear Linux 731.8; Ubuntu 22.10 683.9.

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.
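
The post-processing being measured corresponds to LibRaw's demosaic and conversion pipeline; a small sketch using rawpy, the Python binding to LibRaw, is shown below. This is not the C++ harness the test profile uses, and the file name is a placeholder.

    import rawpy

    # Decode a camera RAW file and run LibRaw's default post-processing
    # (demosaicing, white balance, colour conversion) to an RGB array.
    with rawpy.imread("photo.nef") as raw:
        rgb = raw.postprocess()
    print(rgb.shape)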

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, more is better): Clear Linux 97.56; Ubuntu 22.10 70.32.

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.
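
The benchmark aggregates many small pricing and math kernels; as a flavour of that kind of work, here is a minimal QuantLib-Python sketch pricing a European call analytically. The dates, rates, and volatility are arbitrary illustration values, not anything taken from the benchmark itself.

    import QuantLib as ql

    today = ql.Date(5, ql.November, 2022)
    ql.Settings.instance().evaluationDate = today

    # Flat 3% risk-free curve, 20% volatility, spot 100 (illustrative inputs).
    spot = ql.QuoteHandle(ql.SimpleQuote(100.0))
    rate = ql.YieldTermStructureHandle(
        ql.FlatForward(today, 0.03, ql.Actual365Fixed()))
    dividend = ql.YieldTermStructureHandle(
        ql.FlatForward(today, 0.0, ql.Actual365Fixed()))
    vol = ql.BlackVolTermStructureHandle(
        ql.BlackConstantVol(today, ql.TARGET(), 0.20, ql.Actual365Fixed()))

    process = ql.BlackScholesMertonProcess(spot, dividend, rate, vol)
    option = ql.VanillaOption(
        ql.PlainVanillaPayoff(ql.Option.Call, 105.0),
        ql.EuropeanExercise(ql.Date(5, ql.November, 2023)))
    option.setPricingEngine(ql.AnalyticEuropeanEngine(process))
    print(option.NPV())  # analytic Black-Scholes price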

QuantLib 1.21 (MFLOPS, more is better): Clear Linux 5528.4; Ubuntu 22.10 5198.7.

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: capacita (seconds, fewer is better): Ubuntu 22.10 5.13.

Benchmark: capacita

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.
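
A hedged sketch of a realtime libaom encode in the spirit of the "Speed 6 Realtime" setting below; the aomenc flags and file names are assumptions about general usage, not the exact arguments of the test profile.

    import subprocess

    # Realtime-oriented AV1 encode at speed level 6 (flags are illustrative).
    subprocess.run(
        ["aomenc", "--rt", "--cpu-used=6", "-o", "output.ivf", "Bosphorus_4K.y4m"],
        check=True,
    )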

AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (frames per second, more is better): Ubuntu 22.10 44.87.

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K

Clear Linux: The test run did not produce a result.

CloudSuite Graph Analytics

CloudSuite Graph Analytics (ms, fewer is better): Ubuntu 22.10 9985.

Clear Linux: The test run did not produce a result.

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, more is better): Ubuntu 22.10 107.6.

Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_nr_test: No such file or directory

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, more is better): Ubuntu 22.10 224.3.

Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_nr_test: No such file or directory

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, more is better): Ubuntu 22.10 242.6.

Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, more is better): Ubuntu 22.10 683.7.

Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: protein (seconds, fewer is better): Ubuntu 22.10 6.93.

Benchmark: protein

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
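
The test drives Google's cwebp tool; the sketch below approximates the "Quality 100, Lossless" setting via a subprocess call, though the exact arguments the profile passes may differ and the file names are placeholders.

    import subprocess

    # Lossless WebP encode at quality 100 from the sample JPEG.
    subprocess.run(
        ["cwebp", "-q", "100", "-lossless", "sample_6000x4000.jpg", "-o", "out.webp"],
        check=True,
    )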

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless (MP/s, more is better): Clear Linux 2.49; Ubuntu 22.10 2.30.

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: ac (seconds, fewer is better): Ubuntu 22.10 3.77.

Benchmark: ac

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
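
PyBench ships its own micro-benchmarks; purely as an illustration of the style of work it times, and not PyBench code itself, here is a tiny timeit sketch of builtin-call and nested-loop micro-benchmarks.

    import timeit

    # Roughly the flavour of PyBench's BuiltinFunctionCalls / NestedForLoops tests.
    builtin_calls = timeit.timeit("len('abc'); abs(-1); min(1, 2)", number=200_000)
    nested_loops = timeit.timeit(
        "for i in range(100):\n    for j in range(100):\n        pass",
        number=1_000)
    print(f"builtin calls: {builtin_calls * 1e3:.1f} ms, "
          f"nested loops: {nested_loops * 1e3:.1f} ms")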

PyBench 2018-02-16 - Total For Average Test Times (milliseconds, fewer is better): Clear Linux 401; Ubuntu 22.10 474.

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, more is better): Ubuntu 22.10 233.3.

Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, more is better): Ubuntu 22.10 633.1.

Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: air (seconds, fewer is better): Ubuntu 22.10 0.93.

Benchmark: air

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (frames per second, more is better): Ubuntu 22.10 66.05.

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K

Clear Linux: The test run did not produce a result.

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (score, more is better): Clear Linux 3,355,423; Ubuntu 22.10 1,617,596.

oneDNN

oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): Clear Linux 3.35354; Ubuntu 22.10 4.07923.

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, more is better): Ubuntu 22.10 206.94.

Device: CPU - Batch Size: 32 - Model: AlexNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.
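
The client side is simply sustained HTTP requests against a local Express server; below is a rough Python analogue of such a load generator. The URL, request count, and thread count are placeholders, and the real profile uses a Node-based loadtest client instead.

    import concurrent.futures
    import urllib.request

    URL = "http://localhost:3000/"  # placeholder for the local Express endpoint
    REQUESTS = 10_000

    def fetch(_):
        with urllib.request.urlopen(URL) as resp:
            return resp.status

    # Fire requests from a small thread pool and count successful responses.
    with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
        ok = sum(1 for status in pool.map(fetch, range(REQUESTS)) if status == 200)
    print(f"{ok}/{REQUESTS} requests succeeded")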

Node.js Express HTTP Load Test (requests per second, more is better): Clear Linux 22,735; Ubuntu 22.10 18,147.

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
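
The NumPy "Equation of State" case is a vectorized, element-wise kernel over arrays of the given project size; the sketch below illustrates that pattern with a made-up polynomial and is not the actual benchmark kernel.

    import numpy as np

    n = 16_384  # matches the "Project Size: 16384" configuration below
    temp = np.random.rand(n)
    salt = np.random.rand(n)
    depth = np.random.rand(n)

    # Purely element-wise NumPy work, which is what the CPU backends time.
    density = 1000.0 + 0.8 * salt - 0.2 * temp + 0.05 * temp**2 + 0.01 * depth
    print(density.mean())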

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State (seconds, fewer is better): Clear Linux 0.001; Ubuntu 22.10 0.001.

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (frames per second, more is better): Clear Linux 76.30; Ubuntu 22.10 77.26.

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, more is better): Ubuntu 22.10 117.39.

Device: CPU - Batch Size: 16 - Model: GoogLeNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (frames per second, more is better): Ubuntu 22.10 85.34.

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K

Clear Linux: The test run did not produce a result.

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.
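
DaCapo workloads are launched straight from the benchmark jar; a sketch of the Tradesoap invocation follows, where the jar file name follows the usual 9.12-MR1 release naming and is an assumption.

    import subprocess

    # Run the Tradesoap workload from the DaCapo 9.12-MR1 jar (name assumed).
    subprocess.run(
        ["java", "-jar", "dacapo-9.12-MR1-bach.jar", "tradesoap"],
        check=True,
    )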

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, fewer is better): Ubuntu 22.10 1638.

Java Test: Tradesoap

Clear Linux: The test quit with a non-zero exit status. E: Caused by: java.lang.ExceptionInInitializerError: Exception java.lang.ExceptionInInitializerError [in thread "main"]

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (frames per second, more is better): Ubuntu 22.10 86.52.

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K

Clear Linux: The test run did not produce a result.

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, more is better): Clear Linux 24,835.87; Ubuntu 22.10 24,905.50.

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: aermod (seconds, fewer is better): Ubuntu 22.10 2.77.

Benchmark: aermod

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

CloudSuite In-Memory Analytics

CloudSuite In-Memory Analytics (ms, fewer is better): Ubuntu 22.10 10,160.

Clear Linux: The test run did not produce a result.

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (frames per second, more is better): Clear Linux 106.10; Ubuntu 22.10 105.39.

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: mdbx (seconds, fewer is better): Ubuntu 22.10 3.02.

Benchmark: mdbx

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

oneDNN

oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): Clear Linux 5.75329; Ubuntu 22.10 5.77228.

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, more is better): Ubuntu 22.10 162.47.

Device: CPU - Batch Size: 16 - Model: AlexNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K (frames per second, more is better): Clear Linux 121.67; Ubuntu 22.10 123.49.

Graph500

This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data intensive loads and commonly tested on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.

Scale: 26

Ubuntu 22.10: The test quit with a non-zero exit status. E: mpirun noticed that process rank 2 with PID 0 on node phoronix-System-Product-Name exited on signal 9 (Killed).

Clear Linux: The test quit with a non-zero exit status. E: AML: Fatal: non power2 groupsize unsupported. Define macro PROCS_PER_NODE_NOT_POWER_OF_TWO to override

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Highest Compression (MP/s, more is better): Clear Linux 5.18; Ubuntu 22.10 5.00.

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K (frames per second, more is better): Clear Linux 158.73; Ubuntu 22.10 155.19.

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K (frames per second, more is better): Clear Linux 149.92; Ubuntu 22.10 147.87.

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: linpk (seconds, fewer is better): Ubuntu 22.10 1.34.

Benchmark: linpk

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Batch Size: 512 - Model: ResNet-50

Ubuntu 22.10: The test quit with a non-zero exit status.

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.
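
A sketch of the build being timed, run from an extracted CPython 3.10.6 source tree; the "Default" configuration shown below uses a plain configure, while the optimized variant described above would add --enable-optimizations --with-lto. The -j count is derived from the CPU count and is a placeholder for whatever the profile uses.

    import os
    import subprocess
    import time

    # Configure the CPython source tree (plain/default configuration).
    subprocess.run(["./configure"], check=True)

    # Time the parallel build; the result below reports this in seconds.
    start = time.perf_counter()
    subprocess.run(["make", f"-j{os.cpu_count()}"], check=True)
    print(f"build took {time.perf_counter() - start:.2f} s")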

Timed CPython Compilation 3.10.6 - Build Configuration: Default (seconds, fewer is better): Clear Linux 13.26; Ubuntu 22.10 12.03.

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, fewer is better): Ubuntu 22.10 1689.

Java Test: Tradebeans

Clear Linux: The test quit with a non-zero exit status. E: Caused by: java.lang.ExceptionInInitializerError: Exception java.lang.ExceptionInInitializerError [in thread "main"]

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, fewer is better): Clear Linux 1556; Ubuntu 22.10 1710.

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (frames per second, more is better): Clear Linux 224.04; Ubuntu 22.10 216.52.

oneDNN

oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): Clear Linux 3.39332; Ubuntu 22.10 3.44347.

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s, more is better): Clear Linux 3015.68; Ubuntu 22.10 3262.42.

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100 (MP/s, more is better): Clear Linux 17.16; Ubuntu 22.10 16.16.

WebP Image Encode 1.2.4 - Encode Settings: Default (MP/s, more is better): Clear Linux 27.17; Ubuntu 22.10 24.98.

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

Java Test: Eclipse

Ubuntu 22.10: The test quit with a non-zero exit status.

Clear Linux: The test quit with a non-zero exit status.

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
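
ctx_clock itself is a small C program counting CPU clock cycles per context switch; as a rough, higher-overhead analogue, the Python sketch below forces context switches with a pipe ping-pong between two processes. It is Linux-only and reports nanoseconds including syscall overhead rather than raw cycles.

    import os
    import time

    ITERS = 100_000

    # Two pipes for a ping-pong between parent and child; each round trip
    # forces at least two context switches.
    p2c_r, p2c_w = os.pipe()
    c2p_r, c2p_w = os.pipe()

    pid = os.fork()
    if pid == 0:  # child: echo one byte back for every byte received
        for _ in range(ITERS):
            os.read(p2c_r, 1)
            os.write(c2p_w, b"x")
        os._exit(0)

    start = time.perf_counter_ns()
    for _ in range(ITERS):
        os.write(p2c_w, b"x")
        os.read(c2p_r, 1)
    elapsed = time.perf_counter_ns() - start
    os.waitpid(pid, 0)

    # Each iteration includes two switches plus pipe/syscall overhead,
    # so this is an upper bound rather than ctx_clock's cycle count.
    print(f"~{elapsed / (2 * ITERS):.0f} ns per switch (incl. syscall overhead)")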

ctx_clock - Context Switch Time (clocks, fewer is better): Clear Linux 117; Ubuntu 22.10 132.

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Backend: TensorFlow - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: PyTorch - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: Aesara - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: Numba - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: JAX - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: TensorFlow - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: Aesara - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: PyTorch - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: JAX - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: Numba - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Node.js Octane Benchmark

A Node.js version of the JavaScript Octane Benchmark. Learn more via the OpenBenchmarking.org test page.

Ubuntu 22.10: The test quit with a non-zero exit status. E: ReferenceError: GLOBAL is not defined

Clear Linux: The test quit with a non-zero exit status. E: ReferenceError: GLOBAL is not defined

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
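
The load side is the wrk client hitting the local nginx instance over HTTPS for a fixed duration; a hedged sketch of that invocation follows, where the port, duration, and thread count are placeholders rather than the profile's exact settings.

    import subprocess

    # 30-second wrk run with 20 concurrent connections against the local
    # self-signed HTTPS endpoint (URL/port are placeholders).
    subprocess.run(
        ["wrk", "-t", "4", "-c", "20", "-d", "30s", "https://localhost:8089/"],
        check=True,
    )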

Connections: 20

Ubuntu 22.10: The test quit with a non-zero exit status.

Clear Linux: The test quit with a non-zero exit status. E: nginx: line 2: ./wrk-4.2.0/wrk: No such file or directory

Connections: 1

Ubuntu 22.10: The test quit with a non-zero exit status.

Clear Linux: The test quit with a non-zero exit status. E: nginx: line 2: ./wrk-4.2.0/wrk: No such file or directory

358 Results Shown

NWChem
Blender
OpenVKL
Timed Linux Kernel Compilation
TensorFlow
memtier_benchmark
ONNX Runtime:
  ArcFace ResNet-100 - CPU - Standard
  GPT-2 - CPU - Standard
TensorFlow
High Performance Conjugate Gradient
miniBUDE:
  OpenMP - BM2:
    Billion Interactions/s
    GFInst/s
JPEG XL libjxl:
  JPEG - 100
  PNG - 100
memtier_benchmark
OpenRadioss
memtier_benchmark
OSPRay Studio
IndigoBench
OpenRadioss
OpenSSL
Blender
FFmpeg:
  libx264 - Upload:
    FPS
    Seconds
Apache Spark:
  1000000 - 500 - SHA-512 Benchmark Time
  1000000 - 500 - Group By Test Time
  1000000 - 500 - Broadcast Inner Join Test Time
  1000000 - 500 - Calculate Pi Benchmark Using Dataframe
  1000000 - 500 - Calculate Pi Benchmark
  1000000 - 500 - Repartition Test Time
  1000000 - 500 - Inner Join Test Time
ClickHouse:
  100M Rows Web Analytics Dataset, Third Run
  100M Rows Web Analytics Dataset, Second Run
  100M Rows Web Analytics Dataset, First Run / Cold Cache
Renaissance
OSPRay Studio
HammerDB - MariaDB:
  64 - 250:
    Transactions Per Minute
    New Orders Per Minute
  64 - 100:
    Transactions Per Minute
    New Orders Per Minute
  32 - 100:
    Transactions Per Minute
    New Orders Per Minute
  32 - 250:
    Transactions Per Minute
    New Orders Per Minute
  8 - 100:
    Transactions Per Minute
    New Orders Per Minute
  16 - 250:
    Transactions Per Minute
    New Orders Per Minute
  16 - 100:
    Transactions Per Minute
    New Orders Per Minute
  8 - 250:
    Transactions Per Minute
    New Orders Per Minute
Renaissance
OSPRay Studio
Stress-NG:
  Atomic
  CPU Cache
Rodinia
OSPRay Studio
Blender
Java Gradle Build
Renaissance
Stress-NG:
  Futex
  Socket Activity
GROMACS
TensorFlow
Polyhedron Fortran Benchmarks
Timed Node.js Compilation
FFmpeg:
  libx265 - Upload:
    FPS
    Seconds
OSPRay Studio
FFmpeg:
  libx265 - Video On Demand:
    FPS
    Seconds
  libx265 - Platform:
    FPS
    Seconds
ONNX Runtime
Polyhedron Fortran Benchmarks
ONNX Runtime:
  bertsquad-12 - CPU - Standard
  yolov4 - CPU - Standard
  super-resolution-10 - CPU - Standard
FinanceBench
Renaissance
OpenRadioss
OSPRay Studio
FFmpeg:
  libx264 - Video On Demand:
    FPS
    Seconds
OpenRadioss
FFmpeg:
  libx264 - Platform:
    FPS
    Seconds
Appleseed
OSPRay Studio
TensorFlow
Xmrig
Polyhedron Fortran Benchmarks
SVT-HEVC
Renaissance
PyPerformance
Stress-NG:
  Context Switching
  Glibc C String Functions
NAS Parallel Benchmarks
TensorFlow
nginx:
  1000
  500
  200
  100
Cpuminer-Opt
Stress-NG:
  Mutex
  Crypto
FinanceBench
Rodinia
JPEG XL libjxl:
  JPEG - 80
  PNG - 80
Warsow
oneDNN
Warsow
Cpuminer-Opt
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
oneDNN
Blender
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
oneDNN
OSPRay Studio
Chaos Group V-RAY
JPEG XL libjxl:
  JPEG - 90
  PNG - 90
OpenVINO:
  Person Detection FP16 - CPU:
    ms
    FPS
  Person Detection FP32 - CPU:
    ms
    FPS
  Face Detection FP16 - CPU:
    ms
    FPS
Xonotic
Renaissance
IndigoBench
Xonotic
Renaissance
Appleseed
Xmrig
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
Stress-NG
OpenSSL:
  RSA4096:
    verify/s
    sign/s
NAS Parallel Benchmarks
Timed CPython Compilation
NAS Parallel Benchmarks
Appleseed
7-Zip Compression:
  Decompression Rating
  Compression Rating
TensorFlow
SVT-AV1
Intel Open Image Denoise
oneDNN
Renaissance
Blender
Xonotic:
  3840 x 2160 - Ultra
  1920 x 1080 - Ultra
Rodinia
Polyhedron Fortran Benchmarks
miniBUDE:
  OpenMP - BM1:
    Billion Interactions/s
    GFInst/s
TensorFlow
NAS Parallel Benchmarks
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
Stress-NG
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Node.js V8 Web Tooling Benchmark
NAMD
Zstd Compression:
  19 - Decompression Speed
  19 - Compression Speed
Polyhedron Fortran Benchmarks
Timed Linux Kernel Compilation
NAS Parallel Benchmarks
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    ms/batch
    items/sec
Stargate Digital Audio Workstation:
  96000 - 512
  96000 - 1024
NAS Parallel Benchmarks
FFmpeg:
  libx265 - Live:
    FPS
    Seconds
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
oneDNN
Apache Spark:
  1000000 - 100 - SHA-512 Benchmark Time
  1000000 - 100 - Broadcast Inner Join Test Time
  1000000 - 100 - Inner Join Test Time
  1000000 - 100 - Repartition Test Time
  1000000 - 100 - Calculate Pi Benchmark Using Dataframe
  1000000 - 100 - Group By Test Time
  1000000 - 100 - Calculate Pi Benchmark
Chia Blockchain VDF
Stress-NG
Chia Blockchain VDF
TensorFlow
RawTherapee
PyPerformance
Tesseract
PyHPC Benchmarks
Tesseract
SQLite Speedtest
Stress-NG
Polyhedron Fortran Benchmarks
Stress-NG
Cpuminer-Opt:
  x25x
  Garlicoin
Stress-NG
Cpuminer-Opt
Stress-NG:
  MMAP
  Glibc Qsort Data Sorting
  IO_uring
  NUMA
  Malloc
  SENDFILE
  MEMFD
  Matrix Math
  Semaphores
OpenFOAM:
  drivaerFastback, Small Mesh Size - Mesh Time
  drivaerFastback, Small Mesh Size - Execution Time
Cpuminer-Opt:
  Deepcoin
  Magi
  LBC, LBRY Credits
  Quad SHA-256, Pyrite
  Triple SHA-256, Onecoin
  Ringcoin
PyPerformance
Stargate Digital Audio Workstation
Timed Wasmer Compilation
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 256-QAM:
    UE Mb/s
    eNb Mb/s
Stargate Digital Audio Workstation
PyPerformance:
  float
  chaos
  regex_compile
FFmpeg:
  libx264 - Live:
    FPS
    Seconds
PyPerformance:
  pickle_pure_python
  2to3
WebP Image Encode
PyPerformance
DaCapo Benchmark
PyPerformance:
  pathlib
  go
Rodinia
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 64-QAM:
    UE Mb/s
    eNb Mb/s
TensorFlow
Rodinia
srsRAN
Polyhedron Fortran Benchmarks
Renaissance
EnCodec
PyPerformance
spaCy:
  en_core_web_lg
  en_core_web_trf
PyPerformance
Liquid-DSP
Renaissance
EnCodec:
  6 kbps
  3 kbps
  1.5 kbps
SVT-VP9
Renaissance
NAS Parallel Benchmarks
SVT-HEVC
TensorFlow
NAS Parallel Benchmarks
Polyhedron Fortran Benchmarks
Unvanquished
Polyhedron Fortran Benchmarks
Unvanquished:
  1920 x 1080 - Ultra
  3840 x 2160 - High
TensorFlow
Unvanquished
LibRaw
QuantLib
Polyhedron Fortran Benchmarks
AOM AV1
CloudSuite Graph Analytics
srsRAN:
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
  4G PHY_DL_Test 100 PRB SISO 256-QAM:
    UE Mb/s
    eNb Mb/s
Polyhedron Fortran Benchmarks
WebP Image Encode
Polyhedron Fortran Benchmarks
PyBench
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
Polyhedron Fortran Benchmarks
AOM AV1
PHPBench
oneDNN
TensorFlow
Node.js Express HTTP Load Test
PyHPC Benchmarks
SVT-AV1
TensorFlow
AOM AV1
DaCapo Benchmark
AOM AV1
NAS Parallel Benchmarks
Polyhedron Fortran Benchmarks
CloudSuite In-Memory Analytics
SVT-HEVC
Polyhedron Fortran Benchmarks
oneDNN
TensorFlow
SVT-VP9
WebP Image Encode
SVT-VP9
SVT-AV1
Polyhedron Fortran Benchmarks
Timed CPython Compilation
DaCapo Benchmark:
  Tradebeans
  Jython
SVT-AV1
oneDNN
NAS Parallel Benchmarks
WebP Image Encode:
  Quality 100
  Default
ctx_clock