Core i9 13900K Linux Distros

Intel Core i9-13900K testing with an ASUS PRIME Z790-P WIFI (0602 BIOS) and AMD Radeon RX 6800 XT 16GB on Clear Linux OS 37600 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2211066-NE-DISTROS7610
This result file spans tests within the following categories:

AV1: 2 Tests
C++ Boost Tests: 4 Tests
Timed Code Compilation: 4 Tests
C/C++ Compiler Tests: 8 Tests
CPU Massive: 23 Tests
Creator Workloads: 19 Tests
Cryptocurrency Benchmarks, CPU Mining Tests: 3 Tests
Cryptography: 4 Tests
Database Test Suite: 4 Tests
Desktop Graphics: 2 Tests
Encoding: 6 Tests
Finance: 2 Tests
Fortran Tests: 4 Tests
Game Development: 3 Tests
HPC - High Performance Computing: 17 Tests
Imaging: 4 Tests
Java: 3 Tests
Common Kernel Benchmarks: 4 Tests
Machine Learning: 6 Tests
Molecular Dynamics: 4 Tests
MPI Benchmarks: 3 Tests
Multi-Core: 25 Tests
Node.js + NPM Tests: 3 Tests
NVIDIA GPU Compute: 6 Tests
Intel oneAPI: 5 Tests
OpenMPI Tests: 8 Tests
Programmer / Developer System Benchmarks: 8 Tests
Python: 3 Tests
Renderers: 5 Tests
Scientific Computing: 4 Tests
Software Defined Radio: 2 Tests
Server: 9 Tests
Server CPU Tests: 19 Tests
Single-Threaded: 6 Tests
Video Encoding: 5 Tests
Common Workstation Benchmarks: 2 Tests

Test Runs - Result Identifier / Date Run / Test Duration

Ubuntu 22.10 - November 02 2022 - 1 Day, 1 Hour, 49 Minutes
Clear Linux - November 05 2022 - 14 Hours, 3 Minutes
Average run duration: 19 Hours, 56 Minutes


Core i9 13900K Linux Distros - System Details

Both systems: Processor: Intel Core i9-13900K (24 Cores / 32 Threads); Motherboard: ASUS PRIME Z790-P WIFI (0602 BIOS); Chipset: Intel Device 7a27; Memory: 32GB; Disk: 1000GB Western Digital WDS100T1X0E-00AFY0; Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz); Audio: Realtek ALC897; Monitor: ASUS VP28U; Network: Realtek RTL8125 2.5GbE + Intel Device 7a70

Ubuntu 22.10: OS: Ubuntu 22.10; Kernel: 5.19.0-23-generic (x86_64); Desktop: GNOME Shell 43.0; Display Server: X Server + Wayland; OpenGL: 4.6 Mesa 22.2.1 (LLVM 15.0.2 DRM 3.47); Vulkan: 1.3.224; Compiler: GCC 12.2.0; File-System: ext4; Screen Resolution: 3840x2160

Clear Linux: OS: Clear Linux OS 37600; Kernel: 6.0.7-1207.native (x86_64); Display Server: X Server 1.21.1.4; OpenGL: 4.6 Mesa 22.3.0-devel (LLVM 14.0.6 DRM 3.48); Vulkan: 1.3.230; Compiler: GCC 12.2.1 20221031 releases/gcc-12.2.0-182-gfaac1fccd7 + Clang 14.0.6 + LLVM 14.0.6

Kernel Details
- Ubuntu 22.10: Transparent Huge Pages: madvise
- Clear Linux: Transparent Huge Pages: always

Compiler Details
- Ubuntu 22.10: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Clear Linux: --build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-gnu-indirect-function --enable-host-shared --enable-languages=c,c++,fortran,go,jit --enable-ld=default --enable-libstdcxx-pch --enable-linux-futex --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=x86-64-v3 --with-gcc-major-version-only --with-glibc-version=2.35 --with-gnu-ld --with-isl --with-pic --with-ppl=yes --with-tune=skylake-avx512 --with-zstd

Processor Details
- Ubuntu 22.10: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x10e - Thermald 2.5.1
- Clear Linux: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x10e - Thermald 2.5.1

Graphics Details
- BAR1 / Visible vRAM Size: 16368 MB - vBIOS Version: 113-D4120500-101

Java Details
- Ubuntu 22.10: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu1)
- Clear Linux: OpenJDK Runtime Environment (build 18.0.2-internal+0-adhoc.mockbuild.corretto-18-18.0.2.9.1)

Python Details
- Ubuntu 22.10: Python 3.10.7
- Clear Linux: Python 3.11.0

Security Details
- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Environment Details
- Clear Linux: FFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,--enable-new-dtags" CXXFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -mrelax-cmpxchg-loop -fvisibility-inlines-hidden -Wl,--enable-new-dtags" MESA_GLSL_CACHE_DISABLE=0 FCFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,-sort-common -Wl,--enable-new-dtags" CFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z,now -Wl,-z,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -mrelax-cmpxchg-loop" THEANO_FLAGS="floatX=float32,openmp=true,gcc.cxxflags="-ftree-vectorize -mavx""

Ubuntu 22.10 vs. Clear Linux Comparison (Phoronix Test Suite) - [comparison graph omitted]
The chart plots the per-test percentage advantage of the leading OS, with individual deltas ranging from roughly 2% up to 236.8% (ONNX Runtime, ArcFace ResNet-100 on CPU). The largest swings appear in ONNX Runtime, Stress-NG, PHPBench, PyPerformance, DaCapo, and Renaissance workloads; the individual per-test results follow below.

Core i9 13900K Linux Distros - Detailed Result Table - [flattened table omitted]
The full per-test result table for Ubuntu 22.10 and Clear Linux (covering NWChem, Blender, OpenVKL, Timed Linux Kernel Compilation, TensorFlow, memtier_benchmark, ONNX Runtime, HPCG, miniBUDE, JPEG XL libjxl, OpenRadioss, OpenSSL, OSPRay Studio, IndigoBench, FFmpeg, Apache Spark, ClickHouse, Renaissance, HammerDB - MariaDB, and many additional benchmarks) is available via this result file on OpenBenchmarking.org; individual results follow below.

NWChem

NWChem is an open-source, high-performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2 - Input: C240 Buckyball (Seconds, fewer is better)
Ubuntu 22.10: 4268.4
Clear Linux: 3905.5
-lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lz
1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lm -lcomex -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

Blender

Blender 3.3 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better)
Ubuntu 22.10: 576.35 (SE +/- 0.42, N = 3; Min: 575.84 / Avg: 576.35 / Max: 577.19)
Clear Linux: 591.51 (SE +/- 0.53, N = 3; Min: 590.82 / Avg: 591.51 / Max: 592.55)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.0 - Benchmark: vklBenchmark ISPC (Items / Sec, more is better)
Ubuntu 22.10: 161 (SE +/- 0.67, N = 3; Min: 160 / Avg: 161.33 / Max: 162; reported MIN: 11 / MAX: 1931)
Clear Linux: 162 (SE +/- 1.61, N = 12; Min: 151 / Avg: 161.75 / Max: 171; reported MIN: 10 / MAX: 1893)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build that enables all possible kernel modules. Learn more via the OpenBenchmarking.org test page.
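At its core this measurement is just wall-clock timing of a configured kernel build. A minimal Python sketch of that idea (not the actual pts/build-linux-kernel test profile; it assumes a kernel source tree at ./linux and GNU make on the PATH) might look like:

```python
# Minimal sketch, not the Phoronix Test Suite harness: time a defconfig kernel
# build. Assumes a kernel source tree at ./linux and GNU make installed.
import os
import subprocess
import time

def timed_kernel_build(src_dir: str = "linux", jobs: int = os.cpu_count() or 1) -> float:
    """Configure and build the kernel, returning wall-clock build time in seconds."""
    subprocess.run(["make", "defconfig"], cwd=src_dir, check=True)
    subprocess.run(["make", "clean"], cwd=src_dir, check=True)
    start = time.perf_counter()
    subprocess.run(["make", f"-j{jobs}"], cwd=src_dir, check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"Build took {timed_kernel_build():.2f} seconds")
```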

Timed Linux Kernel Compilation 5.18 - Build: allmodconfig (Seconds, fewer is better)
Ubuntu 22.10: 454.88 (SE +/- 0.43, N = 3; Min: 454.21 / Avg: 454.88 / Max: 455.67)
Clear Linux: 506.57 (SE +/- 0.35, N = 3; Min: 505.97 / Avg: 506.57 / Max: 507.18)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
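For context on the images/sec metric (and on why the Clear Linux runs below failed on a missing 'tensorflow' module), here is a minimal Python sketch that times ResNet-50 inference on synthetic data. It is not the tf_cnn_benchmarks harness used by this test profile, and the batch size and step count are illustrative assumptions:

```python
# Rough sketch of an images/sec measurement with a randomly initialized Keras
# ResNet-50 on synthetic data; not the tf_cnn_benchmarks workload itself.
import time
import numpy as np
import tensorflow as tf  # this import is what fails on the Clear Linux runs below

def images_per_second(batch_size: int = 32, steps: int = 10) -> float:
    model = tf.keras.applications.ResNet50(weights=None)  # random weights, no download
    data = np.random.rand(batch_size, 224, 224, 3).astype("float32")
    model.predict(data, verbose=0)  # warm-up pass
    start = time.perf_counter()
    for _ in range(steps):
        model.predict(data, verbose=0)
    elapsed = time.perf_counter() - start
    return (batch_size * steps) / elapsed

if __name__ == "__main__":
    print(f"~{images_per_second():.2f} images/sec")
```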

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, more is better)
Ubuntu 22.10: 37.06 (SE +/- 0.02, N = 3; Min: 37.03 / Avg: 37.06 / Max: 37.09)

Device: CPU - Batch Size: 256 - Model: ResNet-50

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
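memtier_benchmark itself is a dedicated C++ load generator; as a rough illustration of what the set-to-get ratio parameter means, here is a toy Python loop against a local Redis instance (assumes the redis-py package and a server on localhost:6379, and is not how the numbers below were produced):

```python
# Toy illustration of a 1:10 set:get ratio against Redis; memtier_benchmark
# drives the server with many concurrent, pipelined clients instead.
# Assumes redis-py is installed and Redis is listening on localhost:6379.
import time
import redis

def toy_workload(ops: int = 100_000, ratio: int = 10) -> float:
    r = redis.Redis(host="localhost", port=6379)
    start = time.perf_counter()
    for i in range(ops):
        key = f"key:{i % 1000}"
        if i % (ratio + 1) == 0:
            r.set(key, "x" * 32)   # one write...
        else:
            r.get(key)             # ...for every `ratio` reads
    return ops / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"~{toy_workload():.0f} ops/sec (single client, unpipelined)")
```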

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, more is better)
Ubuntu 22.10: 3124943.72 (SE +/- 65972.61, N = 15; Min: 2798750.1 / Avg: 3124943.72 / Max: 3426413.94)
Clear Linux: 3639054.06 (SE +/- 46119.45, N = 15; Min: 3438064.74 / Avg: 3639054.06 / Max: 4030627.51)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
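As a sketch of how ONNX Runtime is typically driven for CPU inference from Python (the model path and the random input below are placeholders, not the exact ONNX Zoo configuration this test profile uses):

```python
# Minimal ONNX Runtime CPU inference sketch; "model.onnx" is a placeholder path.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = session.get_inputs()[0]
# Build a random tensor matching the model's declared input shape,
# substituting 1 for any symbolic/dynamic dimensions.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
data = np.random.rand(*shape).astype(np.float32)

outputs = session.run(None, {inp.name: data})  # assumes a single-input model
print([o.shape for o in outputs])
```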

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better)
Ubuntu 22.10: 598 (SE +/- 0.17, N = 3; Min: 597.5 / Avg: 597.67 / Max: 598)
Clear Linux: 2014 (SE +/- 243.47, N = 12; Min: 615.5 / Avg: 2014.04 / Max: 2499)
-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better)
Ubuntu 22.10: 10578 (SE +/- 19.87, N = 3; Min: 10556.5 / Avg: 10578.33 / Max: 10618)
Clear Linux: 11942 (SE +/- 93.02, N = 12; Min: 10926 / Avg: 11941.92 / Max: 12096)
-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, more is better)
Ubuntu 22.10: 109.34 (SE +/- 0.36, N = 3; Min: 108.63 / Avg: 109.34 / Max: 109.84)

Device: CPU - Batch Size: 512 - Model: GoogLeNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
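For context, the conjugate gradient iteration at the heart of HPCG looks like the unpreconditioned NumPy sketch below; HPCG itself runs a preconditioned CG over a sparse 3D stencil with MPI/OpenMP, so this is purely illustrative:

```python
# Textbook (unpreconditioned) conjugate gradient for a symmetric
# positive-definite system Ax = b -- the core iteration HPCG stresses.
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small synthetic SPD example
n = 100
M = np.random.rand(n, n)
A = M @ M.T + n * np.eye(n)
b = np.random.rand(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))  # residual norm, should be around 1e-8 or smaller
```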

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better)
Ubuntu 22.10: 10.15490 (SE +/- 0.01984, N = 3; Min: 10.12 / Avg: 10.15 / Max: 10.19)
Clear Linux: 8.61547 (SE +/- 0.02143, N = 3; Min: 8.58 / Avg: 8.62 / Max: 8.65)
-lmpi_cxx
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (Billion Interactions/s, more is better)
Ubuntu 22.10: 22.83 (SE +/- 0.05, N = 3; Min: 22.73 / Avg: 22.82 / Max: 22.91)
Clear Linux: 22.07 (SE +/- 0.12, N = 3; Min: 21.91 / Avg: 22.07 / Max: 22.32)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (GFInst/s, more is better)
Ubuntu 22.10: 570.62 (SE +/- 1.24, N = 3; Min: 568.35 / Avg: 570.62 / Max: 572.63)
Clear Linux: 551.79 (SE +/- 3.10, N = 3; Min: 547.85 / Avg: 551.79 / Max: 557.92)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100 (MP/s, more is better)
Ubuntu 22.10: 1.05 (SE +/- 0.00, N = 3; Min: 1.04 / Avg: 1.05 / Max: 1.05)
Clear Linux: 1.08 (SE +/- 0.01, N = 3; Min: 1.07 / Avg: 1.08 / Max: 1.09)
-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 (MP/s, more is better)
Ubuntu 22.10: 1.06 (SE +/- 0.00, N = 3; Min: 1.06 / Avg: 1.06 / Max: 1.07)
Clear Linux: 1.16 (SE +/- 0.00, N = 3; Min: 1.16 / Avg: 1.16 / Max: 1.17)
-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 10:1 (Ops/sec, more is better)
Ubuntu 22.10: 3127334.75 (SE +/- 38170.43, N = 4; Min: 3032636.97 / Avg: 3127334.75 / Max: 3218563.92)
Clear Linux: 3223992.91 (SE +/- 27586.98, N = 15; Min: 3092208.2 / Avg: 3223992.91 / Max: 3523307)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Cell Phone Drop Test (Seconds, fewer is better)
Ubuntu 22.10: 68.10 (SE +/- 0.98, N = 3; Min: 66.15 / Avg: 68.1 / Max: 69.2)
Clear Linux: 67.93 (SE +/- 0.51, N = 15; Min: 63.45 / Avg: 67.93 / Max: 70.59)

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec, more is better)
Ubuntu 22.10: 3456218.18 (SE +/- 41614.46, N = 3; Min: 3374885.73 / Avg: 3456218.18 / Max: 3512183.6)
Clear Linux: 3811036.87 (SE +/- 30336.93, N = 15; Min: 3606443.66 / Avg: 3811036.87 / Max: 4016242.4)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better)
Ubuntu 22.10: 187579 (SE +/- 169.22, N = 3; Min: 187374 / Avg: 187579.33 / Max: 187915)
1. (CXX) g++ options: -O3 -lm -ldl

Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, more is better)
Ubuntu 22.10: 11.80 (SE +/- 0.02, N = 3; Min: 11.78 / Avg: 11.8 / Max: 11.83)
Clear Linux: 11.99 (SE +/- 0.09, N = 15; Min: 11.68 / Avg: 11.99 / Max: 12.94)

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, fewer is better)
Ubuntu 22.10: 183.75 (SE +/- 2.45, N = 3; Min: 178.93 / Avg: 183.75 / Max: 186.96)
Clear Linux: 184.96 (SE +/- 1.10, N = 3; Min: 182.76 / Avg: 184.96 / Max: 186.26)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
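As a rough illustration of what the byte/s figure measures, the sketch below times SHA256 hashing throughput over a fixed buffer. It uses Python's hashlib (which calls into OpenSSL on most CPython builds) rather than `openssl speed`, so absolute numbers will differ:

```python
# Single-threaded SHA256 throughput sketch; `openssl speed` measures the same
# idea directly inside OpenSSL across multiple block sizes.
import hashlib
import time

def sha256_throughput(block_size: int = 1 << 20, seconds: float = 3.0) -> float:
    buf = b"\x00" * block_size
    hashed = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        hashlib.sha256(buf).digest()
        hashed += block_size
    return hashed / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"~{sha256_throughput() / 1e9:.2f} GB/s (single thread)")
```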

OpenSSL 3.0 - Algorithm: SHA256 (byte/s, more is better)
Ubuntu 22.10: 35956999233 (SE +/- 90907848.90, N = 3; Min: 35775687680 / Avg: 35956999233.33 / Max: 36059372580)
Clear Linux: 37622237897 (SE +/- 14916651.04, N = 3; Min: 37593759900 / Avg: 37622237896.67 / Max: 37644175740)
-pipe -fexceptions -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Blender

Blender 3.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better)
Ubuntu 22.10: 178.95 (SE +/- 0.13, N = 3; Min: 178.69 / Avg: 178.95 / Max: 179.09)
Clear Linux: 178.79 (SE +/- 0.16, N = 3; Min: 178.55 / Avg: 178.79 / Max: 179.08)

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
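As a minimal sketch of timing an x264 transcode with the system ffmpeg binary and converting that into an FPS figure (this is not the vbench-based test profile; the input file and its frame count are caller-supplied assumptions):

```python
# Time an x264 software encode and report frames per second.
# The input path and frame count below are assumptions, not vbench content.
import subprocess
import time

def transcode_fps(input_file: str, frame_count: int, preset: str = "medium") -> float:
    cmd = [
        "ffmpeg", "-y", "-i", input_file,
        "-c:v", "libx264", "-preset", preset,
        "-f", "null", "-",          # encode but discard the output
    ]
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return frame_count / (time.perf_counter() - start)

# Example with a hypothetical clip:
# print(transcode_fps("bosphorus_1080p.mp4", frame_count=600))
```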

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Upload (FPS, more is better)
Ubuntu 22.10: 19.48 (SE +/- 0.02, N = 3; Min: 19.44 / Avg: 19.48 / Max: 19.51)
Clear Linux: 19.84 (SE +/- 0.02, N = 3; Min: 19.81 / Avg: 19.84 / Max: 19.88)
-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Upload (Seconds, fewer is better)
Ubuntu 22.10: 129.62 (SE +/- 0.15, N = 3; Min: 129.39 / Avg: 129.62 / Max: 129.89)
Clear Linux: 127.29 (SE +/- 0.14, N = 3; Min: 127 / Avg: 127.29 / Max: 127.44)
-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
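As a minimal PySpark sketch in the spirit of the "Calculate Pi" portion of this benchmark (the real test drives DIYBigData's pyspark-benchmark scripts through spark-submit; the sample and partition counts below are illustrative):

```python
# Monte Carlo estimate of Pi on Spark -- a toy stand-in for the benchmark's
# "Calculate Pi" workload, run through a local SparkSession.
import random
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pi-sketch").getOrCreate()
sc = spark.sparkContext

def inside(_):
    x, y = random.random(), random.random()
    return 1 if x * x + y * y <= 1.0 else 0

n = 1_000_000
count = sc.parallelize(range(n), numSlices=100).map(inside).reduce(lambda a, b: a + b)
print(f"Pi is roughly {4.0 * count / n}")
spark.stop()
```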

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 (Seconds, fewer is better; Ubuntu 22.10 results only)

SHA-512 Benchmark Time: 2.03 (SE +/- 0.02, N = 15; Min: 1.92 / Avg: 2.03 / Max: 2.16)
Group By Test Time: 2.44 (SE +/- 0.01, N = 15; Min: 2.36 / Avg: 2.44 / Max: 2.55)
Broadcast Inner Join Test Time: 0.79 (SE +/- 0.01, N = 15; Min: 0.71 / Avg: 0.79 / Max: 0.88)
Calculate Pi Benchmark Using Dataframe: 3.26 (SE +/- 0.04, N = 15; Min: 3.06 / Avg: 3.26 / Max: 3.57)
Calculate Pi Benchmark: 52.24 (SE +/- 0.06, N = 15; Min: 51.91 / Avg: 52.24 / Max: 52.63)
Repartition Test Time: 1.09 (SE +/- 0.01, N = 15; Min: 1.02 / Avg: 1.09 / Max: 1.15)
Inner Join Test Time: 0.95 (SE +/- 0.02, N = 15; Min: 0.85 / Avg: 0.95 / Max: 1.19)

Clear Linux: For each of these Apache Spark tests the run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.
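The reported value is a geometric mean across the individual benchmark queries; a small Python sketch of that aggregation (the per-query figures below are made-up placeholders) is:

```python
# Geometric mean aggregation as used for the ClickHouse result below;
# the sample values are placeholders, not measured query rates.
import math

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

queries_per_minute = [310.2, 295.7, 340.1, 305.8]   # hypothetical per-query figures
print(f"Geo mean: {geometric_mean(queries_per_minute):.2f} queries/min")
```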

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, more is better)
Ubuntu 22.10: 308.97 (SE +/- 1.54, N = 15; Min: 300.2 / Avg: 308.97 / Max: 323.13; reported MIN: 24.68 / MAX: 30000)
Clear Linux: 328.44 (SE +/- 0.38, N = 3; Min: 327.69 / Avg: 328.44 / Max: 328.9; reported MIN: 25.56 / MAX: 30000)
1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, more is better)
Ubuntu 22.10: 307.65 (SE +/- 1.50, N = 15; Min: 293.18 / Avg: 307.65 / Max: 314.36; reported MIN: 24.43 / MAX: 30000)
Clear Linux: 322.07 (SE +/- 1.18, N = 3; Min: 320.63 / Avg: 322.07 / Max: 324.41; reported MIN: 25.5 / MAX: 30000)
1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, more is better)
Ubuntu 22.10: 301.66 (SE +/- 2.14, N = 15; Min: 275.67 / Avg: 301.66 / Max: 309.92; reported MIN: 24.67 / MAX: 30000)
Clear Linux: 317.57 (SE +/- 4.54, N = 3; Min: 310.62 / Avg: 317.57 / Max: 326.12; reported MIN: 24.74 / MAX: 30000)
1. ClickHouse server version 22.5.4.19 (official build).

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, fewer is better)
Ubuntu 22.10: 7650.3 (SE +/- 72.83, N = 3; Min: 7526.04 / Avg: 7650.27 / Max: 7778.26; reported MIN: 7526.04 / MAX: 8473.18)
Clear Linux: 7288.7 (SE +/- 10.96, N = 3; Min: 7273.03 / Avg: 7288.66 / Max: 7309.77; reported MIN: 7273.03 / MAX: 8022.2)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better)
Ubuntu 22.10: 157185 (SE +/- 533.75, N = 3; Min: 156146 / Avg: 157185.33 / Max: 157916)
1. (CXX) g++ options: -O3 -lm -ldl

Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

HammerDB - MariaDB

This is a MariaDB MySQL database server benchmark making use of the HammerDB benchmarking / load testing tool. Learn more via the OpenBenchmarking.org test page.

HammerDB - MariaDB 10.9.3 (more is better; Ubuntu 22.10 results only)

Virtual Users: 64 - Warehouses: 250: 89332 Transactions Per Minute / 38440 New Orders Per Minute
Virtual Users: 64 - Warehouses: 100: 90768 Transactions Per Minute / 39063 New Orders Per Minute
Virtual Users: 32 - Warehouses: 100: 88315 Transactions Per Minute / 38008 New Orders Per Minute
Virtual Users: 32 - Warehouses: 250: 82861 Transactions Per Minute / 35682 New Orders Per Minute
Virtual Users: 8 - Warehouses: 100: 89121 Transactions Per Minute / 38236 New Orders Per Minute
Virtual Users: 16 - Warehouses: 250: 86541 Transactions Per Minute / 37140 New Orders Per Minute
Virtual Users: 16 - Warehouses: 100: 86277 Transactions Per Minute / 37182 New Orders Per Minute
Virtual Users: 8 - Warehouses: 250: 72163 Transactions Per Minute / 31038 New Orders Per Minute

1. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

Renaissance

Renaissance is a suite of benchmarks designed to test the JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better)
Ubuntu 22.10: 1063.3 (SE +/- 8.08, N = 15; Min: 1007.22 / Avg: 1063.25 / Max: 1109.69; MIN: 960.83 / MAX: 1135.87)
Clear Linux: 969.4 (SE +/- 2.62, N = 3; Min: 965.17 / Avg: 969.35 / Max: 974.18; MIN: 930.74 / MAX: 1016.31)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
Ubuntu 22.10: 5758 (SE +/- 5.24, N = 3; Min: 5751 / Avg: 5757.67 / Max: 5768)
1. (CXX) g++ options: -O3 -lm -ldl

Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
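Outside of the test profile, an individual stressor can be run directly with the stress-ng CLI; a minimal sketch for the atomic stressor used below (0 workers means one per online CPU; the duration is illustrative):

    stress-ng --atomic 0 --timeout 60 --metrics-brief

The --metrics-brief option prints the bogo ops/s figures that these graphs report.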

Stress-NG 0.14.06 - Test: Atomic (Bogo Ops/s, More Is Better)
Ubuntu 22.10: 344361.75 (SE +/- 7871.09, N = 15; Min: 300970.12 / Avg: 344361.75 / Max: 401718.54)
Clear Linux: 341748.62 (SE +/- 10614.09, N = 15; Min: 300534.66 / Avg: 341748.62 / Max: 426989.82)

Stress-NG 0.14.06 - Test: CPU Cache (Bogo Ops/s, More Is Better)
Ubuntu 22.10: 98.77 (SE +/- 1.23, N = 15; Min: 89.65 / Avg: 98.77 / Max: 106.79)
Clear Linux: 92.05 (SE +/- 1.44, N = 15; Min: 83.59 / Avg: 92.05 / Max: 103.6)

Per-side notes: Ubuntu 22.10: -lapparmor -lsctp; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds, Fewer Is Better)
Ubuntu 22.10: 49.66 (SE +/- 0.32, N = 15; Min: 48.54 / Avg: 49.66 / Max: 52.85)
Clear Linux: 48.43 (SE +/- 0.17, N = 3; Min: 48.1 / Avg: 48.43 / Max: 48.69)
1. (CXX) g++ options: -O2 -lOpenCL

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
Ubuntu 22.10: 46495 (SE +/- 33.28, N = 3; Min: 46451 / Avg: 46494.67 / Max: 46560)
1. (CXX) g++ options: -O3 -lm -ldl

Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

Blender

Blender 3.3 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
Ubuntu 22.10: 147.64 (SE +/- 0.34, N = 3; Min: 147.17 / Avg: 147.64 / Max: 148.3)
Clear Linux: 146.41 (SE +/- 0.21, N = 3; Min: 146.19 / Avg: 146.41 / Max: 146.82)

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build - Gradle Build: Reactor (Seconds, Fewer Is Better)
Ubuntu 22.10: 142.80 (SE +/- 1.44, N = 6; Min: 138.74 / Avg: 142.8 / Max: 149.19)

Gradle Build: Reactor

Clear Linux: The test quit with a non-zero exit status.

Renaissance

Renaissance is a suite of benchmarks designed to test the JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Akka Unbalanced Cobwebbed Tree (ms, Fewer Is Better)
Ubuntu 22.10: 7183.4 (SE +/- 20.79, N = 3; Min: 7142.41 / Avg: 7183.43 / Max: 7209.81; MIN: 5471.82 / MAX: 7209.81)
Clear Linux: 6805.9 (SE +/- 40.50, N = 3; Min: 6745.35 / Avg: 6805.86 / Max: 6882.75; MIN: 5063.5 / MAX: 6882.75)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Futex (Bogo Ops/s, More Is Better)
Ubuntu 22.10: 3538590.31 (SE +/- 33712.97, N = 15; Min: 3318227.7 / Avg: 3538590.31 / Max: 3760667.38)
Clear Linux: 3363240.61 (SE +/- 99609.06, N = 12; Min: 2615644.77 / Avg: 3363240.61 / Max: 3933344)

Stress-NG 0.14.06 - Test: Socket Activity (Bogo Ops/s, More Is Better)
Ubuntu 22.10: 24287.07 (SE +/- 568.16, N = 12; Min: 21269.06 / Avg: 24287.07 / Max: 29385.41)
Clear Linux: 31581.60 (SE +/- 483.37, N = 15; Min: 26599.36 / Avg: 31581.6 / Max: 33343.37)

Per-side notes: Ubuntu 22.10: -lapparmor -lsctp; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
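The test profile drives GROMACS through its own scripts, but the underlying measurement is a standard mdrun of the water_GMX50_bare system; a minimal hand-run sketch, assuming a pre-generated topol.tpr for that system (step count and thread count are illustrative):

    gmx mdrun -s topol.tpr -nsteps 1000 -nb cpu -noconfout -ntomp 32

The reported metric is the ns/day figure that mdrun prints at the end of the run.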

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
Ubuntu 22.10: 1.414 (SE +/- 0.002, N = 3; Min: 1.41 / Avg: 1.41 / Max: 1.42)
Clear Linux: 1.401 (SE +/- 0.003, N = 3; Min: 1.4 / Avg: 1.4 / Max: 1.41)
Clear Linux notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
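As a rough illustration of the kind of invocation behind the CPU results below (script and flags are from the upstream tf_cnn_benchmarks tool; the exact values used by the profile may differ):

    python tf_cnn_benchmarks.py --device=cpu --data_format=NHWC --model=googlenet --batch_size=256

The script reports throughput in images/sec, matching the unit on the graphs.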

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, More Is Better)
Ubuntu 22.10: 109.10 (SE +/- 0.20, N = 3; Min: 108.73 / Avg: 109.1 / Max: 109.41)

Device: CPU - Batch Size: 256 - Model: GoogLeNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: fatigue2 (Seconds, Fewer Is Better)
Ubuntu 22.10: 21.88

Benchmark: fatigue2

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.
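The timed portion is essentially a stock Node.js source build; a minimal sketch (the profile's exact configure options may differ):

    ./configure
    make -j$(nproc)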

Timed Node.js Compilation 18.8 - Time To Compile (Seconds, Fewer Is Better)
Ubuntu 22.10: 256.70 (SE +/- 0.04, N = 3; Min: 256.63 / Avg: 256.7 / Max: 256.76)

Time To Compile

Clear Linux: The test quit with a non-zero exit status.

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using either the x264 or x265 video encoder for transcoding. Learn more via the OpenBenchmarking.org test page.
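As a rough stand-in for what a single vbench-style transcode looks like (input name, preset, and rate control here are illustrative, not the profile's exact settings):

    ffmpeg -i input.mkv -c:v libx265 -preset medium -crf 28 -c:a copy output.mkv

The FPS and Seconds graphs below are two views of the same runs: encode throughput and total transcode time.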

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Upload (FPS, More Is Better)
Ubuntu 22.10: 31.96 (SE +/- 0.04, N = 3; Min: 31.89 / Avg: 31.96 / Max: 32.02)
Clear Linux: 33.47 (SE +/- 0.05, N = 3; Min: 33.39 / Avg: 33.47 / Max: 33.57)

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Upload (Seconds, Fewer Is Better)
Ubuntu 22.10: 79.00 (SE +/- 0.09, N = 3; Min: 78.85 / Avg: 79 / Max: 79.17)
Clear Linux: 75.44 (SE +/- 0.12, N = 3; Min: 75.2 / Avg: 75.44 / Max: 75.61)

Clear Linux notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
Ubuntu 22.10: 39037 (SE +/- 48.68, N = 3; Min: 38940 / Avg: 39037.33 / Max: 39088)
1. (CXX) g++ options: -O3 -lm -ldl

Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using either the x264 or x265 video encoder for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Video On Demand (FPS, More Is Better)
Ubuntu 22.10: 65.12 (SE +/- 0.08, N = 3; Min: 65 / Avg: 65.12 / Max: 65.27)
Clear Linux: 68.89 (SE +/- 0.12, N = 3; Min: 68.65 / Avg: 68.89 / Max: 69.02)

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Video On Demand (Seconds, Fewer Is Better)
Ubuntu 22.10: 116.32 (SE +/- 0.14, N = 3; Min: 116.05 / Avg: 116.32 / Max: 116.53)
Clear Linux: 109.96 (SE +/- 0.19, N = 3; Min: 109.74 / Avg: 109.96 / Max: 110.34)

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Platform (FPS, More Is Better)
Ubuntu 22.10: 65.14 (SE +/- 0.06, N = 3; Min: 65.02 / Avg: 65.14 / Max: 65.2)
Clear Linux: 69.01 (SE +/- 0.05, N = 3; Min: 68.93 / Avg: 69.01 / Max: 69.1)

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Platform (Seconds, Fewer Is Better)
Ubuntu 22.10: 116.30 (SE +/- 0.10, N = 3; Min: 116.19 / Avg: 116.3 / Max: 116.5)
Clear Linux: 109.77 (SE +/- 0.08, N = 3; Min: 109.62 / Avg: 109.77 / Max: 109.89)

Clear Linux notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
Ubuntu 22.10: 133 (SE +/- 0.17, N = 3; Min: 133 / Avg: 133.17 / Max: 133.5)
Clear Linux: 133 (SE +/- 0.33, N = 3; Min: 132 / Avg: 132.67 / Max: 133)
Clear Linux notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: tfft2 (Seconds, Fewer Is Better)
Ubuntu 22.10: 12.1

Benchmark: tfft2

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
Ubuntu 22.10: 1210 (SE +/- 0.60, N = 3; Min: 1209 / Avg: 1210.17 / Max: 1211)
Clear Linux: 1254 (SE +/- 9.64, N = 3; Min: 1235 / Avg: 1254.17 / Max: 1265.5)

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
Ubuntu 22.10: 687 (SE +/- 0.29, N = 3; Min: 686 / Avg: 686.5 / Max: 687)
Clear Linux: 818 (SE +/- 0.44, N = 3; Min: 817 / Avg: 817.83 / Max: 818.5)

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
Ubuntu 22.10: 6842 (SE +/- 2.84, N = 3; Min: 6838 / Avg: 6842 / Max: 6847.5)
Clear Linux: 9079 (SE +/- 21.86, N = 3; Min: 9053.5 / Avg: 9079 / Max: 9122.5)

Clear Linux notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and on the CPU via OpenMP. The FinanceBench test cases cover the Black-Scholes-Merton process with the analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), fixed-rate bonds with a flat forward curve, and repo securities repurchase agreements. FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-07-25 - Benchmark: Bonds OpenMP (ms, Fewer Is Better)
Ubuntu 22.10: 30942.01 (SE +/- 28.94, N = 3; Min: 30892.58 / Avg: 30942.01 / Max: 30992.79)
Clear Linux: 33544.89 (SE +/- 767.97, N = 15; Min: 29802.43 / Avg: 33544.89 / Max: 36883.43)
1. (CXX) g++ options: -O3 -march=native -fopenmp

Renaissance

Renaissance is a suite of benchmarks designed to test the JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark PageRank (ms, Fewer Is Better)
Ubuntu 22.10: 1902.7 (SE +/- 15.45, N = 9; Min: 1857.25 / Avg: 1902.66 / Max: 1965.66; MIN: 1741.39 / MAX: 1997.09)
Clear Linux: 1597.8 (SE +/- 19.33, N = 3; Min: 1577.43 / Avg: 1597.83 / Max: 1636.48; MIN: 1448.14 / MAX: 1649.56)

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam (Seconds, Fewer Is Better)
Ubuntu 22.10: 114.66 (SE +/- 0.19, N = 3; Min: 114.33 / Avg: 114.66 / Max: 115)
Clear Linux: 113.52 (SE +/- 1.03, N = 3; Min: 111.68 / Avg: 113.52 / Max: 115.23)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
Ubuntu 22.10: 1473 (SE +/- 4.33, N = 3; Min: 1466 / Avg: 1473.33 / Max: 1481)
1. (CXX) g++ options: -O3 -lm -ldl

Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using either the x264 or x265 video encoder for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Video On Demand (FPS, More Is Better)
Ubuntu 22.10: 76.32 (SE +/- 0.07, N = 3; Min: 76.21 / Avg: 76.32 / Max: 76.44)
Clear Linux: 77.85 (SE +/- 0.01, N = 3; Min: 77.83 / Avg: 77.85 / Max: 77.86)

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Video On Demand (Seconds, Fewer Is Better)
Ubuntu 22.10: 99.25 (SE +/- 0.09, N = 3; Min: 99.1 / Avg: 99.25 / Max: 99.39)
Clear Linux: 97.31 (SE +/- 0.01, N = 3; Min: 97.29 / Avg: 97.31 / Max: 97.32)

Clear Linux notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds, Fewer Is Better)
Ubuntu 22.10: 108.62 (SE +/- 0.31, N = 3; Min: 108.03 / Avg: 108.62 / Max: 109.09)
Clear Linux: 108.00 (SE +/- 0.58, N = 3; Min: 106.84 / Avg: 108 / Max: 108.6)

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using either the x264 or x265 video encoder for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Platform (FPS, More Is Better)
Ubuntu 22.10: 76.29 (SE +/- 0.03, N = 3; Min: 76.23 / Avg: 76.29 / Max: 76.34)
Clear Linux: 78.12 (SE +/- 0.10, N = 3; Min: 77.96 / Avg: 78.12 / Max: 78.3)

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Platform (Seconds, Fewer Is Better)
Ubuntu 22.10: 99.29 (SE +/- 0.04, N = 3; Min: 99.23 / Avg: 99.29 / Max: 99.37)
Clear Linux: 96.96 (SE +/- 0.12, N = 3; Min: 96.74 / Avg: 96.96 / Max: 97.16)

Clear Linux notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine and primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (Seconds, Fewer Is Better)
Ubuntu 22.10: 164.94
Clear Linux: 162.83

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
Ubuntu 22.10: 1242 (SE +/- 2.73, N = 3; Min: 1238 / Avg: 1241.67 / Max: 1247)
1. (CXX) g++ options: -O3 -lm -ldl

Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer

Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec, More Is Better)
Ubuntu 22.10: 264.59 (SE +/- 0.08, N = 3; Min: 264.51 / Avg: 264.59 / Max: 264.75)

Device: CPU - Batch Size: 512 - Model: AlexNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
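A similar standalone measurement can be taken with Xmrig's built-in offline benchmark mode; a hedged sketch, assuming a recent 6.x binary that supports the upstream benchmark option:

    xmrig --bench=1M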

Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M (H/s, More Is Better)
Ubuntu 22.10: 9652.5 (SE +/- 65.64, N = 3; Min: 9521.3 / Avg: 9652.53 / Max: 9721.4)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Variant: Monero - Hash Count: 1M

Clear Linux: The test quit with a non-zero exit status. E: xmrig: line 3: ./xmrig: No such file or directory

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: gas_dyn2 (Seconds, Fewer Is Better)
Ubuntu 22.10: 25.83

Benchmark: gas_dyn2

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Ubuntu 22.10: 5.79 (SE +/- 0.02, N = 3; Min: 5.77 / Avg: 5.79 / Max: 5.82)
Clear Linux: 6.10 (SE +/- 0.03, N = 3; Min: 6.06 / Avg: 6.1 / Max: 6.15)
Clear Linux notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Renaissance

Renaissance is a suite of benchmarks designed to test the JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Scala Dotty (ms, Fewer Is Better)
Ubuntu 22.10: 454.3 (SE +/- 6.59, N = 15; Min: 411.11 / Avg: 454.29 / Max: 474.75; MIN: 344.62 / MAX: 815.12)
Clear Linux: 379.6 (SE +/- 1.90, N = 3; Min: 377.67 / Avg: 379.57 / Max: 383.37; MIN: 316.44 / MAX: 567.07)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
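The python_startup result below can be reproduced standalone with the pyperformance CLI; a minimal sketch, assuming pyperformance is installed for the system Python:

    pyperformance run -b python_startup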

PyPerformance 1.0.0 - Benchmark: python_startup (Milliseconds, Fewer Is Better)
Ubuntu 22.10: 7.39 (SE +/- 0.04, N = 3; Min: 7.35 / Avg: 7.39 / Max: 7.47)
Clear Linux: 5.04 (SE +/- 0.00, N = 3; Min: 5.03 / Avg: 5.04 / Max: 5.04)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Context Switching (Bogo Ops/s, More Is Better)
Ubuntu 22.10: 14703175.32 (SE +/- 181309.12, N = 4; Min: 14270832.24 / Avg: 14703175.32 / Max: 15008358.62)
Clear Linux: 17592139.66 (SE +/- 234990.96, N = 15; Min: 17013224.04 / Avg: 17592139.66 / Max: 20690520.64)

Stress-NG 0.14.06 - Test: Glibc C String Functions (Bogo Ops/s, More Is Better)
Ubuntu 22.10: 4307014.94 (SE +/- 40661.15, N = 15; Min: 4054364.88 / Avg: 4307014.94 / Max: 4459899.4)
Clear Linux: 4366568.31 (SE +/- 51987.30, N = 4; Min: 4312187.45 / Avg: 4366568.31 / Max: 4522503.92)

Per-side notes: Ubuntu 22.10: -lapparmor -lsctp; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
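NPB-MPI tests are built per problem class and launched through MPI; a hedged sketch for the SP class C case below (binary naming and rank count can vary by NPB build and system):

    mpiexec -np 32 ./bin/sp.C.x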

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C (Total Mop/s, More Is Better)
Ubuntu 22.10: 15473.90 (SE +/- 32.81, N = 3; Min: 15410.87 / Avg: 15473.9 / Max: 15521.22)
Clear Linux: 15318.83 (SE +/- 41.18, N = 3; Min: 15262.23 / Avg: 15318.83 / Max: 15398.95)
Per-side notes: Ubuntu 22.10: -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz; Clear Linux: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Ubuntu 22.10: Open MPI 4.1.4
3. Clear Linux: 3.2

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, More Is Better)
Ubuntu 22.10: 37.68 (SE +/- 0.05, N = 3; Min: 37.58 / Avg: 37.68 / Max: 37.77)

Device: CPU - Batch Size: 64 - Model: ResNet-50

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for issuing HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
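The load-generation side is plain wrk against the local Nginx instance; a minimal sketch of a 1000-connection run (URL, port, thread count, and duration here are illustrative, not the profile's exact values):

    wrk -t 16 -c 1000 -d 60s https://localhost:8443/index.html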

nginx 1.23.2 - Connections: 1000 (Requests Per Second, More Is Better)
Ubuntu 22.10: 192021.95 (SE +/- 624.40, N = 3; Min: 191184.41 / Avg: 192021.95 / Max: 193242.92)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Connections: 1000

Clear Linux: The test quit with a non-zero exit status. E: nginx: line 2: ./wrk-4.2.0/wrk: No such file or directory

nginx 1.23.2 - Connections: 500 (Requests Per Second, More Is Better)
Ubuntu 22.10: 203069.78 (SE +/- 492.12, N = 3; Min: 202459.96 / Avg: 203069.78 / Max: 204043.75)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Connections: 500

Clear Linux: The test quit with a non-zero exit status. E: nginx: line 2: ./wrk-4.2.0/wrk: No such file or directory

nginx 1.23.2 - Connections: 200 (Requests Per Second, More Is Better)
Ubuntu 22.10: 205841.24 (SE +/- 636.13, N = 3; Min: 205200.2 / Avg: 205841.24 / Max: 207113.5)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Connections: 200

Clear Linux: The test quit with a non-zero exit status. E: nginx: line 2: ./wrk-4.2.0/wrk: No such file or directory

nginx 1.23.2 - Connections: 100 (Requests Per Second, More Is Better)
Ubuntu 22.10: 204910.46 (SE +/- 1164.13, N = 3; Min: 203534.3 / Avg: 204910.46 / Max: 207224.96)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Connections: 100

Clear Linux: The test quit with a non-zero exit status. E: nginx: line 2: ./wrk-4.2.0/wrk: No such file or directory

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
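Cpuminer-Opt includes a built-in offline benchmark mode; a hedged sketch for the Blake-2 S result below (algorithm name per cpuminer-opt's -a list; thread count illustrative):

    cpuminer -a blake2s --benchmark -t 32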

Cpuminer-Opt 3.20.3 - Algorithm: Blake-2 S (kH/s, More Is Better)
Ubuntu 22.10: 765587 (SE +/- 9180.18, N = 3; Min: 754990 / Avg: 765586.67 / Max: 783870)
Clear Linux: 706620 (SE +/- 13473.99, N = 15; Min: 663220 / Avg: 706620 / Max: 785290)
Per-side notes: Ubuntu 22.10: -O2; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Mutex (Bogo Ops/s, More Is Better)
Ubuntu 22.10: 16748823.88 (SE +/- 110264.57, N = 15; Min: 16548809.98 / Avg: 16748823.88 / Max: 18111916.34)
Clear Linux: 17946052.04 (SE +/- 180807.15, N = 3; Min: 17594949.43 / Avg: 17946052.04 / Max: 18196562.61)

Stress-NG 0.14.06 - Test: Crypto (Bogo Ops/s, More Is Better)
Ubuntu 22.10: 42378.79 (SE +/- 294.46, N = 15; Min: 41902.77 / Avg: 42378.79 / Max: 46409.36)
Clear Linux: 51626.59 (SE +/- 228.79, N = 3; Min: 51180.93 / Avg: 51626.59 / Max: 51939.32)

Per-side notes: Ubuntu 22.10: -lapparmor -lsctp; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and on the CPU via OpenMP. The FinanceBench test cases cover the Black-Scholes-Merton process with the analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), fixed-rate bonds with a flat forward curve, and repo securities repurchase agreements. FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-07-25 - Benchmark: Repo OpenMP (ms, Fewer Is Better)
Ubuntu 22.10: 19315.83 (SE +/- 17.57, N = 3; Min: 19280.92 / Avg: 19315.83 / Max: 19336.74)
Clear Linux: 22344.26 (SE +/- 825.46, N = 15; Min: 18546.49 / Avg: 22344.26 / Max: 25633.75)
1. (CXX) g++ options: -O3 -march=native -fopenmp

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, Fewer Is Better)
Ubuntu 22.10: 84.65 (SE +/- 0.14, N = 3; Min: 84.37 / Avg: 84.65 / Max: 84.83)
Clear Linux: 81.32 (SE +/- 0.22, N = 3; Min: 80.93 / Avg: 81.32 / Max: 81.68)
1. (CXX) g++ options: -O2 -lOpenCL

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
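The underlying encode is the reference cjxl encoder at the quality level named in each graph; a minimal sketch for the JPEG-input, quality-80 case (file names illustrative):

    cjxl input.jpg output.jxl -q 80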

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s, More Is Better)
Ubuntu 22.10: 13.25 (SE +/- 0.02, N = 3; Min: 13.22 / Avg: 13.25 / Max: 13.29)
Clear Linux: 16.06 (SE +/- 0.05, N = 3; Min: 15.97 / Avg: 16.06 / Max: 16.11)

JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s, More Is Better)
Ubuntu 22.10: 13.58 (SE +/- 0.01, N = 3; Min: 13.56 / Avg: 13.58 / Max: 13.59)
Clear Linux: 16.31 (SE +/- 0.02, N = 3; Min: 16.28 / Avg: 16.31 / Max: 16.35)

Clear Linux notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta - Resolution: 1920 x 1080 (Frames Per Second, More Is Better)
Ubuntu 22.10: 965.8 (SE +/- 1.51, N = 3; Min: 964 / Avg: 965.8 / Max: 968.8)
Clear Linux: 953.3 (SE +/- 1.53, N = 3; Min: 950.8 / Avg: 953.33 / Max: 956.1)

oneDNN

oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Ubuntu 22.10: 7.63132 (SE +/- 0.10591, N = 15; Min: 7.05 / Avg: 7.63 / Max: 8.32; MIN: 2.84)
Clear Linux: 6.80836 (SE +/- 0.05951, N = 8; Min: 6.53 / Avg: 6.81 / Max: 6.98; MIN: 2.47)
Clear Linux notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lpthread
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta - Resolution: 3840 x 2160 (Frames Per Second, More Is Better)
Ubuntu 22.10: 951.3 (SE +/- 5.47, N = 3)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s, More Is Better)
Ubuntu 22.10: 156343 (SE +/- 1874.06, N = 4; Min: 150850 / Avg: 156342.5 / Max: 159140)
Clear Linux: 160181 (SE +/- 1148.26, N = 12; Min: 150350 / Avg: 160180.83 / Max: 162500)
Per-side notes: Ubuntu 22.10: -O2; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
Ubuntu 22.10: 967.89 (SE +/- 3.09, N = 3; Min: 962.31 / Avg: 967.89 / Max: 972.97)

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
Ubuntu 22.10: 12.05 (SE +/- 0.04, N = 3; Min: 11.99 / Avg: 12.05 / Max: 12.12)

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
Ubuntu 22.10: 964.44 (SE +/- 6.80, N = 3; Min: 951.99 / Avg: 964.44 / Max: 975.4)

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
Ubuntu 22.10: 12.19 (SE +/- 0.14, N = 3; Min: 11.99 / Avg: 12.19 / Max: 12.47)

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Ubuntu 22.10: 2150.83 (SE +/- 21.59, N = 3; Min: 2111.44 / Avg: 2150.83 / Max: 2185.86; MIN: 1989.04)
Clear Linux: 2065.81 (SE +/- 13.49, N = 3; Min: 2047.13 / Avg: 2065.81 / Max: 2092.01; MIN: 1923.57)
Clear Linux notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lpthread
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Blender

Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
Ubuntu 22.10: 75.32 (SE +/- 0.11, N = 3; Min: 75.14 / Avg: 75.32 / Max: 75.51)
Clear Linux: 75.33 (SE +/- 0.06, N = 3; Min: 75.25 / Avg: 75.33 / Max: 75.44)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
Ubuntu 22.10: 227.14 (SE +/- 0.46, N = 3; Min: 226.41 / Avg: 227.14 / Max: 228)

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamUbuntu 22.101224364860SE +/- 0.07, N = 352.54
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamUbuntu 22.101122334455Min: 52.42 / Avg: 52.54 / Max: 52.68

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Ubuntu 22.10: 1112.82 (SE +/- 1.98, N = 3; Min: 1110.74 / Avg: 1112.82 / Max: 1116.78; MIN: 1021.3)
  Clear Linux: 1132.54 (SE +/- 15.77, N = 3; Min: 1101.6 / Avg: 1132.54 / Max: 1153.26; MIN: 1006.43; additional flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lpthread)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
  Ubuntu 22.10: 4831 (SE +/- 6.89, N = 3; Min: 4821 / Avg: 4830.67 / Max: 4844)
  Clear Linux: The test quit with a non-zero exit status. E: ospray-studio: line 5: ospStudio: command not found
  1. (CXX) g++ options: -O3 -lm -ldl

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5.02 - Mode: CPU (vsamples, More Is Better)
  Ubuntu 22.10: 28734 (SE +/- 190.70, N = 3; Min: 28501 / Avg: 28734 / Max: 29112)
  Clear Linux: 28587 (SE +/- 103.05, N = 3; Min: 28401 / Avg: 28586.67 / Max: 28757)

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
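
As a rough, hypothetical illustration of the kind of measurement behind the MP/s figures below: time a reference-encoder run on a local image and convert pixels processed into megapixels per second. The cjxl binary, the -q 90 quality setting, the sample file name, and the use of Pillow to read dimensions are assumptions about a local setup, not the exact test-profile invocation.

    # Hypothetical sketch: time a libjxl reference-encoder (cjxl) run and report MP/s.
    # Assumes cjxl is on PATH, sample.png exists locally, and Pillow is installed.
    import subprocess, time
    from PIL import Image

    width, height = Image.open("sample.png").size
    start = time.perf_counter()
    subprocess.run(["cjxl", "sample.png", "out.jxl", "-q", "90"], check=True)
    elapsed = time.perf_counter() - start
    print(f"{(width * height) / 1e6 / elapsed:.2f} MP/s")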

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 90 (MP/s, More Is Better)
  Ubuntu 22.10: 13.09 (SE +/- 0.01, N = 3; Min: 13.06 / Avg: 13.09 / Max: 13.1)
  Clear Linux: 15.92 (SE +/- 0.05, N = 3; Min: 15.82 / Avg: 15.92 / Max: 15.98; additional flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl 0.7 - Input: PNG - Quality: 90 (MP/s, More Is Better)
  Ubuntu 22.10: 13.43 (SE +/- 0.01, N = 3; Min: 13.41 / Avg: 13.43 / Max: 13.45)
  Clear Linux: 16.18 (SE +/- 0.01, N = 3; Min: 16.16 / Avg: 16.18 / Max: 16.2; additional flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
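
The throughput and latency figures below come from OpenVINO's bundled benchmarking support; a minimal, hypothetical wrapper around the benchmark_app tool might look like the sketch below. The model path is a placeholder and the exact flags used by the test profile are not shown here.

    # Hypothetical wrapper around OpenVINO's benchmark_app CLI (model path is a placeholder).
    import subprocess

    result = subprocess.run(
        ["benchmark_app", "-m", "person-detection.xml", "-d", "CPU", "-t", "60"],
        capture_output=True, text=True, check=True,
    )
    # benchmark_app prints throughput (FPS) and latency (ms) summaries on stdout.
    for line in result.stdout.splitlines():
        if "Throughput" in line or "latency" in line.lower():
            print(line.strip())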

OpenVINO 2022.2.dev - Device: CPU (latency in ms, Fewer Is Better; throughput in FPS, More Is Better)
  Model: Person Detection FP16 - Ubuntu 22.10: 2222.92 ms (SE +/- 6.08, N = 3; MIN: 1682.82 / MAX: 2975.44); 3.58 FPS (SE +/- 0.01, N = 3)
  Model: Person Detection FP32 - Ubuntu 22.10: 2238.02 ms (SE +/- 5.80, N = 3; MIN: 1692.38 / MAX: 2991.49); 3.55 FPS (SE +/- 0.02, N = 3)
  Model: Face Detection FP16 - Ubuntu 22.10: 1570.46 ms (SE +/- 3.93, N = 3; MIN: 1396.06 / MAX: 1856.59); 5.08 FPS (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development began in March of 2010 on the Xonotic game. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.5 - Resolution: 3840 x 2160 - Effects Quality: Ultimate (Frames Per Second, More Is Better)
  Ubuntu 22.10: 527.86 (SE +/- 1.29, N = 3; MIN: 98 / MAX: 1077)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark ALS (ms, Fewer Is Better)
  Ubuntu 22.10: 2026.4 (SE +/- 6.70, N = 3; Min: 2014.4 / Avg: 2026.4 / Max: 2037.58; MIN: 1949.41 / MAX: 2109.84)
  Clear Linux: 1885.4 (SE +/- 15.22, N = 3; Min: 1855.69 / Avg: 1885.37 / Max: 1906.04; MIN: 1818.73 / MAX: 2024.91)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better)
  Ubuntu 22.10: 4.051 (SE +/- 0.040, N = 3; Min: 3.99 / Avg: 4.05 / Max: 4.12)
  Clear Linux: 5.328 (SE +/- 0.057, N = 3; Min: 5.26 / Avg: 5.33 / Max: 5.44)

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development began in March of 2010 on the Xonotic game. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.5 - Resolution: 1920 x 1080 - Effects Quality: Ultimate (Frames Per Second, More Is Better)
  Ubuntu 22.10: 540.60 (SE +/- 0.59, N = 3; MIN: 101 / MAX: 1094)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, Fewer Is Better)
  Ubuntu 22.10: 1992.6 (SE +/- 22.11, N = 3; Min: 1950.53 / Avg: 1992.64 / Max: 2025.42; MIN: 1796.31 / MAX: 2233.14)
  Clear Linux: 1777.5 (SE +/- 14.31, N = 13; Min: 1610.23 / Avg: 1777.51 / Max: 1807.2; MIN: 1478.87 / MAX: 2208.28)

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, Fewer Is Better)
  Ubuntu 22.10: 90.78
  Clear Linux: 95.66

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
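
As a hypothetical sketch only: recent XMRig 6.x releases include an offline benchmark mode that can be driven as below. The ./xmrig path and the --bench flag are assumptions about a local build, and the default RandomX 1M-hash benchmark is not necessarily identical to the Wownero variant charted here.

    # Hypothetical sketch: run a local xmrig binary in its offline benchmark mode.
    import subprocess

    proc = subprocess.run(["./xmrig", "--bench=1M"], capture_output=True, text=True)
    # The hashrate summary (H/s) appears in the miner's console output.
    print(proc.stdout)
    print(proc.stderr)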

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better)
  Ubuntu 22.10: 16463.2 (SE +/- 35.72, N = 3; Min: 16394 / Avg: 16463.23 / Max: 16513.1)
  Clear Linux: The test quit with a non-zero exit status. E: xmrig: line 3: ./xmrig: No such file or directory
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Device: CPU (latency in ms, Fewer Is Better; throughput in FPS, More Is Better)
  Model: Face Detection FP16-INT8 - Ubuntu 22.10: 438.23 ms (SE +/- 0.21, N = 3; MIN: 270.29 / MAX: 1085.79); 18.23 FPS (SE +/- 0.01, N = 3)
  Model: Machine Translation EN To DE FP16 - Ubuntu 22.10: 125.97 ms (SE +/- 0.17, N = 3; MIN: 91.09 / MAX: 325.99); 63.48 FPS (SE +/- 0.09, N = 3)
  Model: Person Vehicle Bike Detection FP16 - Ubuntu 22.10: 10.96 ms (SE +/- 0.03, N = 3; MIN: 7.62 / MAX: 52.6); 728.91 FPS (SE +/- 2.34, N = 3)
  Model: Weld Porosity Detection FP16 - Ubuntu 22.10: 51.14 ms (SE +/- 0.05, N = 3; MIN: 22.47 / MAX: 182.99); 468.50 FPS (SE +/- 0.50, N = 3)
  Model: Vehicle Detection FP16-INT8 - Ubuntu 22.10: 8.72 ms (SE +/- 0.01, N = 3; MIN: 5.93 / MAX: 54.15); 916.05 FPS (SE +/- 0.85, N = 3)
  Model: Vehicle Detection FP16 - Ubuntu 22.10: 21.43 ms (SE +/- 0.11, N = 3; MIN: 12.24 / MAX: 94.19); 372.85 FPS (SE +/- 1.93, N = 3)
  Model: Weld Porosity Detection FP16-INT8 - Ubuntu 22.10: 14.63 ms (SE +/- 0.01, N = 3; MIN: 6.68 / MAX: 120.59); 1638.99 FPS (SE +/- 1.15, N = 3)
  Model: Age Gender Recognition Retail 0013 FP16-INT8 - Ubuntu 22.10: 0.72 ms (SE +/- 0.00, N = 3; MIN: 0.42 / MAX: 4.5); 33018.17 FPS (SE +/- 40.02, N = 3)
  Model: Age Gender Recognition Retail 0013 FP16 - Ubuntu 22.10: 1.64 ms (SE +/- 0.00, N = 3; MIN: 0.87 / MAX: 9.06); 14593.71 FPS (SE +/- 10.91, N = 3)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
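
A hypothetical way to reproduce this style of measurement outside the test harness is to drive stress-ng directly and read its bogo-ops metrics; --cpu 0 spawns one worker per CPU. The specific stressor behind the "Vector Math" result (assumed to be --vecmath) would be invoked the same way.

    # Hypothetical sketch: run stress-ng for 30 seconds and print its metrics summary.
    import subprocess

    res = subprocess.run(
        ["stress-ng", "--cpu", "0", "--metrics-brief", "-t", "30"],
        capture_output=True, text=True,
    )
    # Depending on the version, the bogo-ops summary may land on stdout or stderr.
    print(res.stdout)
    print(res.stderr)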

Stress-NG 0.14.06 - Test: Vector Math (Bogo Ops/s, More Is Better)
  Ubuntu 22.10: 119832.03 (SE +/- 966.54, N = 9; Min: 117846.53 / Avg: 119832.03 / Max: 127260.1; -lapparmor -lsctp)
  Clear Linux: 118329.56 (SE +/- 111.06, N = 3; Min: 118115.14 / Avg: 118329.56 / Max: 118487; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
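
A minimal, hypothetical way to reproduce the RSA4096 numbers below is to invoke the same built-in benchmark directly across all hardware threads; openssl speed prints sign/s and verify/s columns in its summary.

    # Hypothetical sketch: OpenSSL's built-in speed benchmark for RSA 4096 on all threads.
    import os, subprocess

    threads = os.cpu_count() or 1
    subprocess.run(["openssl", "speed", "-multi", str(threads), "rsa4096"], check=True)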

OpenSSL 3.0 - Algorithm: RSA4096 (verify/s, More Is Better)
  Ubuntu 22.10: 358806.7 (SE +/- 142.92, N = 3; Min: 358540.4 / Avg: 358806.7 / Max: 359029.8)
  Clear Linux: 358535.2 (SE +/- 239.50, N = 3; Min: 358267.9 / Avg: 358535.23 / Max: 359013.1; additional flags: -pipe -fexceptions -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop)
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL 3.0 - Algorithm: RSA4096 (sign/s, More Is Better)
  Ubuntu 22.10: 5496.9 (SE +/- 5.03, N = 3; Min: 5489.8 / Avg: 5496.87 / Max: 5506.6)
  Clear Linux: 5428.9 (SE +/- 9.58, N = 3; Min: 5417.4 / Avg: 5428.87 / Max: 5447.9; additional flags: -pipe -fexceptions -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop)
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, More Is Better)
  Ubuntu 22.10: 49771.29 (SE +/- 33.54, N = 3; Min: 49707.27 / Avg: 49771.29 / Max: 49820.63; -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz)
  Clear Linux: 48313.69 (SE +/- 575.82, N = 3; Min: 47174.14 / Avg: 48313.69 / Max: 49027.64; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi 2. Ubuntu 22.10: Open MPI 4.1.4 3. Clear Linux: 3.2

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.
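
A hypothetical sketch of the flow being timed, assuming a CPython source tree in ./cpython: configure an optimized (PGO) build with LTO, then time a parallel make. The configure switches shown are the standard optimized-build flags, not necessarily the profile's exact options.

    # Hypothetical sketch: time a PGO+LTO release build of CPython from ./cpython.
    import os, subprocess, time

    start = time.perf_counter()
    subprocess.run(["./configure", "--enable-optimizations", "--with-lto"],
                   cwd="cpython", check=True)
    subprocess.run(["make", f"-j{os.cpu_count()}"], cwd="cpython", check=True)
    print(f"Build time: {time.perf_counter() - start:.2f} seconds")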

Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, Fewer Is Better)
  Ubuntu 22.10: 171.50
  Clear Linux: 177.80

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, More Is Better)
  Ubuntu 22.10: 22382.81 (SE +/- 373.18, N = 15; Min: 21381.59 / Avg: 22382.81 / Max: 26281.46; -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz)
  Clear Linux: 21228.75 (SE +/- 264.38, N = 3; Min: 20809.7 / Avg: 21228.75 / Max: 21717.55; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi 2. Ubuntu 22.10: Open MPI 4.1.4 3. Clear Linux: 3.2

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, Fewer Is Better)
  Ubuntu 22.10: 84.86
  Clear Linux: 83.82

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
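
The MIPS ratings below come from 7-Zip's integrated benchmark; a hypothetical one-liner to run it yourself is sketched below (the binary may be named 7z, 7zz or 7zr depending on packaging).

    # Hypothetical sketch: run 7-Zip's built-in benchmark ("b" command) and print its report.
    import subprocess

    print(subprocess.run(["7z", "b"], capture_output=True, text=True).stdout)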

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better)
  Ubuntu 22.10: 139981 (SE +/- 1890.40, N = 3; Min: 137335 / Avg: 139981.33 / Max: 143643)
  Clear Linux: 131994 (SE +/- 1174.47, N = 11; Min: 128472 / Avg: 131993.73 / Max: 139952)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better)
  Ubuntu 22.10: 182153 (SE +/- 1518.09, N = 3; Min: 179141 / Avg: 182152.67 / Max: 183992)
  Clear Linux: 181805 (SE +/- 1314.02, N = 11; Min: 173393 / Avg: 181804.64 / Max: 185691)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
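
As a hypothetical sketch of a comparable run, assuming the tensorflow/benchmarks repository is checked out locally: invoke tf_cnn_benchmarks.py on the CPU with the AlexNet model and a batch size of 256. The flag names follow that script's documented options and are not necessarily the test profile's exact invocation.

    # Hypothetical sketch: a tf_cnn_benchmarks CPU run similar to the AlexNet result below.
    import subprocess

    subprocess.run(
        ["python", "tf_cnn_benchmarks.py",
         "--device=cpu", "--model=alexnet", "--batch_size=256", "--data_format=NHWC"],
        cwd="benchmarks/scripts/tf_cnn_benchmarks", check=True,
    )
    # The script reports images/sec, which is the metric charted here.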

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, More Is Better)
  Ubuntu 22.10: 256.55 (SE +/- 0.32, N = 3; Min: 256.1 / Avg: 256.55 / Max: 257.18)
  Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 22.10: 3.004 (SE +/- 0.026, N = 3; Min: 2.97 / Avg: 3 / Max: 3.05)
  Clear Linux: 3.069 (SE +/- 0.009, N = 3; Min: 3.05 / Avg: 3.07 / Max: 3.08; additional flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RT.ldr_alb_nrm.3840x2160 (Images / Sec, More Is Better)
  Ubuntu 22.10: 0.57 (SE +/- 0.00, N = 3; Min: 0.56 / Avg: 0.57 / Max: 0.57)
  Clear Linux: 0.55 (SE +/- 0.00, N = 3; Min: 0.55 / Avg: 0.55 / Max: 0.56)

oneDNN

oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Ubuntu 22.10: 1.90430 (SE +/- 0.01885, N = 15; Min: 1.69 / Avg: 1.9 / Max: 2.01; MIN: 1.57)
  Clear Linux: 1.98941 (SE +/- 0.01965, N = 6; Min: 1.89 / Avg: 1.99 / Max: 2.02; MIN: 1.57; additional flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lpthread)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, Fewer Is Better)
  Ubuntu 22.10: 4193.8 (SE +/- 48.71, N = 3; Min: 4126.1 / Avg: 4193.81 / Max: 4288.31; MIN: 4126.1 / MAX: 6173.56)
  Clear Linux: 3631.0 (SE +/- 16.40, N = 3; Min: 3609.72 / Avg: 3630.96 / Max: 3663.23; MIN: 3609.72 / MAX: 5141.58)

Blender

Blender 3.3 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  Ubuntu 22.10: 51.61 (SE +/- 0.16, N = 3; Min: 51.39 / Avg: 51.61 / Max: 51.91)
  Clear Linux: 51.58 (SE +/- 0.31, N = 3; Min: 51.03 / Avg: 51.58 / Max: 52.11)

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development began in March of 2010 on the Xonotic game. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.5 - Resolution: 3840 x 2160 - Effects Quality: Ultra (Frames Per Second, More Is Better)
  Ubuntu 22.10: 692.06 (SE +/- 2.13, N = 3; MIN: 411 / MAX: 1142)

Xonotic 0.8.5 - Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second, More Is Better)
  Ubuntu 22.10: 696.93 (SE +/- 1.76, N = 3; MIN: 375 / MAX: 1188)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, Fewer Is Better)
  Ubuntu 22.10: 49.98 (SE +/- 0.10, N = 3; Min: 49.86 / Avg: 49.98 / Max: 50.17)
  Clear Linux: 48.37 (SE +/- 0.10, N = 3; Min: 48.21 / Avg: 48.37 / Max: 48.56)
  1. (CXX) g++ options: -O2 -lOpenCL

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: channel2 (Seconds, Fewer Is Better)
  Ubuntu 22.10: 29.3
  Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, More Is Better)
  Ubuntu 22.10: 16.58 (SE +/- 0.00, N = 3; Min: 16.58 / Avg: 16.58 / Max: 16.59)
  Clear Linux: 16.63 (SE +/- 0.00, N = 3; Min: 16.63 / Avg: 16.63 / Max: 16.63)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (GFInst/s, More Is Better)
  Ubuntu 22.10: 414.59 (SE +/- 0.12, N = 3; Min: 414.45 / Avg: 414.59 / Max: 414.82)
  Clear Linux: 415.68 (SE +/- 0.04, N = 3; Min: 415.64 / Avg: 415.68 / Max: 415.75)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, More Is Better)
  Ubuntu 22.10: 38.51 (SE +/- 0.08, N = 3; Min: 38.38 / Avg: 38.51 / Max: 38.65)
  Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s, More Is Better)
  Ubuntu 22.10: 3049.53 (SE +/- 10.67, N = 3; Min: 3037.56 / Avg: 3049.53 / Max: 3070.81; -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz)
  Clear Linux: 2906.94 (SE +/- 4.83, N = 3; Min: 2897.75 / Avg: 2906.94 / Max: 2914.13; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi 2. Ubuntu 22.10: Open MPI 4.1.4 3. Clear Linux: 3.2

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  Ubuntu 22.10: 236.60 (SE +/- 0.71, N = 3; Min: 235.72 / Avg: 236.6 / Max: 238.02)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  Ubuntu 22.10: 50.40 (SE +/- 0.20, N = 3; Min: 50.05 / Avg: 50.4 / Max: 50.74)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  Ubuntu 22.10: 90.05 (SE +/- 0.21, N = 3; Min: 89.73 / Avg: 90.05 / Max: 90.45)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  Ubuntu 22.10: 11.10 (SE +/- 0.03, N = 3; Min: 11.06 / Avg: 11.1 / Max: 11.14)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  Ubuntu 22.10: 90.48 (SE +/- 0.21, N = 3; Min: 90.25 / Avg: 90.48 / Max: 90.91)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  Ubuntu 22.10: 11.05 (SE +/- 0.03, N = 3; Min: 11 / Avg: 11.05 / Max: 11.08)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: CPU Stress (Bogo Ops/s, More Is Better)
  Ubuntu 22.10: 51634.54 (SE +/- 538.64, N = 3; Min: 50935.47 / Avg: 51634.54 / Max: 52693.92; -lapparmor -lsctp)
  Clear Linux: 51871.91 (SE +/- 506.19, N = 6; Min: 50470.92 / Avg: 51871.91 / Max: 54115.62; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  Ubuntu 22.10: 24.80 (SE +/- 0.09, N = 3; Min: 24.64 / Avg: 24.8 / Max: 24.94)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  Ubuntu 22.10: 40.32 (SE +/- 0.14, N = 3; Min: 40.1 / Avg: 40.32 / Max: 40.58)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  Ubuntu 22.10: 115.35 (SE +/- 0.26, N = 3; Min: 114.85 / Avg: 115.35 / Max: 115.7)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  Ubuntu 22.10: 103.90 (SE +/- 0.23, N = 3; Min: 103.65 / Avg: 103.9 / Max: 104.36)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better)
  Ubuntu 22.10: 26.33 (SE +/- 0.17, N = 3; Min: 26.03 / Avg: 26.33 / Max: 26.61)
  Clear Linux: 26.90 (SE +/- 0.04, N = 3; Min: 26.84 / Avg: 26.9 / Max: 26.99)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
  Ubuntu 22.10: 0.60982 (SE +/- 0.00121, N = 3; Min: 0.61 / Avg: 0.61 / Max: 0.61)
  Clear Linux: 0.62153 (SE +/- 0.00153, N = 3; Min: 0.62 / Avg: 0.62 / Max: 0.62)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression as supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.
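
A hypothetical way to reproduce this kind of number directly is zstd's built-in benchmark mode (-b<level>), which prints compression and decompression MB/s for a given input; -T0 uses all threads and --long enables the long-distance matching used by the "Long Mode" results further below. The sample file name is a placeholder.

    # Hypothetical sketch: zstd's built-in benchmark at level 19, with and without long mode.
    import subprocess

    subprocess.run(["zstd", "-b19", "-T0", "sample_input.tar"], check=True)
    subprocess.run(["zstd", "-b19", "-T0", "--long", "sample_input.tar"], check=True)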

Zstd Compression - Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
  Ubuntu 22.10: 4758.3 (SE +/- 0.18, N = 3; Min: 4758 / Avg: 4758.33 / Max: 4758.6)
  Clear Linux: 5127.1 (SE +/- 2.86, N = 3; Min: 5122.3 / Avg: 5127.13 / Max: 5132.2)
  1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

Zstd Compression - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
  Ubuntu 22.10: 80.5 (SE +/- 1.02, N = 3; Min: 79.1 / Avg: 80.53 / Max: 82.5)
  Clear Linux: 85.5 (SE +/- 0.52, N = 3; Min: 84.5 / Avg: 85.53 / Max: 86.1)
  1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: mp_prop_design (Seconds, Fewer Is Better)
  Ubuntu 22.10: 25.77
  Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
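
A hypothetical sketch of the flow being timed, assuming a kernel source tree in ./linux: generate a defconfig, then time a parallel build. The test profile's exact environment handling is not reproduced here.

    # Hypothetical sketch: time a defconfig kernel build from ./linux.
    import os, subprocess, time

    subprocess.run(["make", "defconfig"], cwd="linux", check=True)
    start = time.perf_counter()
    subprocess.run(["make", f"-j{os.cpu_count()}"], cwd="linux", check=True)
    print(f"defconfig build: {time.perf_counter() - start:.2f} seconds")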

Timed Linux Kernel Compilation 5.18 - Build: defconfig (Seconds, Fewer Is Better)
  Ubuntu 22.10: 41.40 (SE +/- 0.39, N = 3; Min: 40.95 / Avg: 41.4 / Max: 42.18)
  Clear Linux: 42.70 (SE +/- 0.40, N = 3; Min: 42.1 / Avg: 42.7 / Max: 43.45)

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D (Total Mop/s, More Is Better)
  Ubuntu 22.10: 1263.16 (SE +/- 15.74, N = 4; Min: 1233.47 / Avg: 1263.16 / Max: 1306.15; -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz)
  Clear Linux: 1304.96 (SE +/- 16.11, N = 4; Min: 1269.67 / Avg: 1304.96 / Max: 1345.22; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi 2. Ubuntu 22.10: Open MPI 4.1.4 3. Clear Linux: 3.2

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  Ubuntu 22.10: 159.94 (SE +/- 0.49, N = 3; Min: 159.07 / Avg: 159.94 / Max: 160.76)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  Ubuntu 22.10: 74.69 (SE +/- 0.23, N = 3; Min: 74.31 / Avg: 74.69 / Max: 75.11)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  Ubuntu 22.10: 31.07 (SE +/- 0.23, N = 3; Min: 30.64 / Avg: 31.07 / Max: 31.44)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  Ubuntu 22.10: 32.18 (SE +/- 0.24, N = 3; Min: 31.8 / Avg: 32.18 / Max: 32.63)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 96000 - Buffer Size: 512 (Render Ratio, More Is Better)
  Ubuntu 22.10: 4.818425 (SE +/- 0.002423, N = 3; Min: 4.82 / Avg: 4.82 / Max: 4.82)
  Clear Linux: The test quit with a non-zero exit status. E: stargate: line 40: ./engine/stargate-engine: No such file or directory
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 96000 - Buffer Size: 1024 (Render Ratio, More Is Better)
  Ubuntu 22.10: 4.854695 (SE +/- 0.001939, N = 3; Min: 4.85 / Avg: 4.85 / Max: 4.86)
  Clear Linux: The test quit with a non-zero exit status. E: stargate: line 40: ./engine/stargate-engine: No such file or directory
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, More Is Better)
  Ubuntu 22.10: 53311.89 (SE +/- 218.90, N = 3; Min: 53060.04 / Avg: 53311.89 / Max: 53747.94; -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz)
  Clear Linux: 51989.89 (SE +/- 537.24, N = 3; Min: 51100.05 / Avg: 51989.89 / Max: 52956.38; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi 2. Ubuntu 22.10: Open MPI 4.1.4 3. Clear Linux: 3.2

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
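
As a hypothetical sketch of a live-style libx265 transcode timed end to end: only the libx265 encoder is taken from the result description; the input file, preset and bitrate below are placeholders rather than the vbench scenario's actual settings.

    # Hypothetical sketch: time an ffmpeg libx265 encode of a placeholder input file.
    import subprocess, time

    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mkv",
         "-c:v", "libx265", "-preset", "fast", "-b:v", "4M",
         "-f", "null", "/dev/null"],
        check=True,
    )
    print(f"Encode wall time: {time.perf_counter() - start:.2f} seconds")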

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Live (FPS, More Is Better)
  Ubuntu 22.10: 182.08 (SE +/- 0.41, N = 3; Min: 181.4 / Avg: 182.08 / Max: 182.81)
  Clear Linux: 193.28 (SE +/- 0.25, N = 3; Min: 192.89 / Avg: 193.28 / Max: 193.73; additional flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Live (Seconds, Fewer Is Better)
  Ubuntu 22.10: 27.74 (SE +/- 0.06, N = 3; Min: 27.62 / Avg: 27.74 / Max: 27.84)
  Clear Linux: 26.13 (SE +/- 0.03, N = 3; Min: 26.07 / Avg: 26.13 / Max: 26.18; additional flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  Ubuntu 22.10: 12.49 (SE +/- 0.01, N = 3; Min: 12.48 / Avg: 12.49 / Max: 12.51)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  Ubuntu 22.10: 80.03 (SE +/- 0.05, N = 3; Min: 79.94 / Avg: 80.03 / Max: 80.12)
  Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression as supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better)
  Ubuntu 22.10: 4887.7 (SE +/- 9.98, N = 3; Min: 4869.3 / Avg: 4887.67 / Max: 4903.6)
  Clear Linux: 5235.9 (SE +/- 2.45, N = 3; Min: 5232.1 / Avg: 5235.93 / Max: 5240.5)
  1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

Zstd Compression - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better)
  Ubuntu 22.10: 50.9 (SE +/- 0.40, N = 3; Min: 50.5 / Avg: 50.9 / Max: 51.7)
  Clear Linux: 56.4 (SE +/- 0.10, N = 3; Min: 56.3 / Avg: 56.4 / Max: 56.6)
  1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamUbuntu 22.1020406080100SE +/- 0.05, N = 378.24
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamUbuntu 22.101530456075Min: 78.14 / Avg: 78.24 / Max: 78.31

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamUbuntu 22.10306090120150SE +/- 0.21, N = 3153.13
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamUbuntu 22.10306090120150Min: 152.82 / Avg: 153.13 / Max: 153.53

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-StreamUbuntu 22.1048121620SE +/- 0.05, N = 317.59
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-StreamUbuntu 22.1048121620Min: 17.52 / Avg: 17.59 / Max: 17.68

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-StreamUbuntu 22.101326395265SE +/- 0.15, N = 356.83
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-StreamUbuntu 22.101122334455Min: 56.53 / Avg: 56.83 / Max: 57.05

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): Ubuntu 22.10: 10.35 (SE +/- 0.01, N = 3; min/avg/max 10.33 / 10.35 / 10.38).

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better): Ubuntu 22.10: 96.59 (SE +/- 0.14, N = 3; min/avg/max 96.31 / 96.59 / 96.74).

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Clear Linux: The test quit with a non-zero exit status. E: deepsparse: line 2: /.local/bin/deepsparse.benchmark: No such file or directory

oneDNN

oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): Ubuntu 22.10: 1.492836 (SE +/- 0.090916, N = 15; min/avg/max 0.99 / 1.49 / 2.01; MIN: 0.85); Clear Linux: 1.084460 (SE +/- 0.015557, N = 3; min/avg/max 1.06 / 1.08 / 1.11; MIN: 0.83). [Clear Linux build notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lpthread; 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl]

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating the test data and for the various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
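For context, a minimal PySpark sketch of the kind of operations the sub-tests below time (SHA-512 hashing, repartition, group-by) might look like the following; the row count and column names are illustrative and are not taken from the pyspark-benchmark scripts themselves.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Illustrative only: a tiny DataFrame standing in for the generated test data.
    spark = SparkSession.builder.appName("spark-sketch").getOrCreate()
    df = spark.range(1_000_000).withColumn("value", F.col("id").cast("string"))

    # SHA-512 hashing pass, similar in spirit to the SHA-512 benchmark time.
    df.withColumn("digest", F.sha2(F.col("value"), 512)).count()

    # Repartition and group-by passes, as timed by the other sub-tests.
    df.repartition(100).count()
    df.groupBy(F.col("id") % 10).count().collect()

    spark.stop()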

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): Ubuntu 22.10: 1.99 (SE +/- 0.01, N = 3; min/avg/max 1.97 / 1.99 / 2.01).

Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): Ubuntu 22.10: 0.77 (SE +/- 0.02, N = 3; min/avg/max 0.72 / 0.77 / 0.8).

Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Inner Join Test Time (Seconds, Fewer Is Better): Ubuntu 22.10: 0.93 (SE +/- 0.03, N = 3; min/avg/max 0.86 / 0.93 / 0.97).

Row Count: 1000000 - Partitions: 100 - Inner Join Test Time

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Repartition Test Time (Seconds, Fewer Is Better): Ubuntu 22.10: 1.02 (SE +/- 0.01, N = 3; min/avg/max 1.01 / 1.02 / 1.04).

Row Count: 1000000 - Partitions: 100 - Repartition Test Time

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): Ubuntu 22.10: 3.45 (SE +/- 0.03, N = 3; min/avg/max 3.41 / 3.45 / 3.5).

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Group By Test Time (Seconds, Fewer Is Better): Ubuntu 22.10: 2.63 (SE +/- 0.02, N = 3; min/avg/max 2.6 / 2.63 / 2.68).

Row Count: 1000000 - Partitions: 100 - Group By Test Time

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, Fewer Is Better): Ubuntu 22.10: 51.76 (SE +/- 0.18, N = 3; min/avg/max 51.42 / 51.76 / 52.01).

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark

Clear Linux: The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance for the Chia VDF using the Chia VDF benchmark. The Chia VDF is the Chia Verifiable Delay Function (Proof of Time). Learn more via the OpenBenchmarking.org test page.

Chia Blockchain VDF 1.0.7 - Test: Square Plain C++ (IPS, More Is Better): Ubuntu 22.10: 252933 (SE +/- 133.33, N = 3; min/avg/max 252800 / 252933.33 / 253200). [1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread]

Test: Square Plain C++

Clear Linux: The test quit with a non-zero exit status. E: chia-vdf: line 3: ./src/vdf_bench: No such file or directory

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
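As a rough illustration of how a single stressor can be run by hand (the Phoronix Test Suite drives stress-ng itself, so the exact arguments it passes are not reproduced here), something like the following exercises the memory-copying stressor and prints bogo-ops metrics; the 30-second duration is an arbitrary choice.

    import subprocess

    # Run the memcpy stressor on all CPUs for 30 seconds and report bogo-ops/s.
    # Arguments are illustrative, not the exact ones used by the test profile.
    subprocess.run(
        ["stress-ng", "--memcpy", "0", "--timeout", "30s", "--metrics-brief"],
        check=True,
    )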

Stress-NG 0.14.06 - Test: Memory Copying (Bogo Ops/s, More Is Better): Ubuntu 22.10: 7385.39 (SE +/- 10.70, N = 3; min/avg/max 7373.14 / 7385.39 / 7406.71); Clear Linux: 9212.06 (SE +/- 105.79, N = 4; min/avg/max 9064.19 / 9212.06 / 9524.33). [1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread; Ubuntu 22.10 build notes: -lapparmor -lsctp; Clear Linux build notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio]

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance for the Chia VDF using the Chia VDF benchmark. The Chia VDF is the Chia Verifiable Delay Function (Proof of Time). Learn more via the OpenBenchmarking.org test page.

Chia Blockchain VDF 1.0.7 - Test: Square Assembly Optimized (IPS, More Is Better): Ubuntu 22.10: 269067 (SE +/- 218.58, N = 3; min/avg/max 268800 / 269066.67 / 269500). [1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread]

Test: Square Assembly Optimized

Clear Linux: The test quit with a non-zero exit status. E: chia-vdf: line 3: ./src/vdf_bench: No such file or directory

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.
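A hand-run equivalent of the GoogLeNet sub-test might look roughly like the sketch below; the script path and flag values are assumptions based on the upstream tensorflow/benchmarks repository, not the exact command the test profile issues.

    import subprocess

    # Hypothetical invocation of the reference CNN benchmark on the CPU.
    subprocess.run(
        [
            "python", "tf_cnn_benchmarks.py",
            "--device=cpu",
            "--model=googlenet",
            "--batch_size=64",
            "--num_batches=100",
        ],
        check=True,
    )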

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, More Is Better): Ubuntu 22.10: 111.20 (SE +/- 0.22, N = 3; min/avg/max 110.79 / 111.2 / 111.55).

Device: CPU - Batch Size: 64 - Model: GoogLeNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.
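The benchmark drives RawTherapee's command-line interface; a minimal sketch of a comparable batch conversion follows, assuming rawtherapee-cli is on the PATH and using hypothetical input and output file names.

    import subprocess

    # Convert one RAW file from the command line (file names are illustrative).
    # -o sets the output path; -c (given last) names the input to process.
    subprocess.run(
        ["rawtherapee-cli", "-o", "/tmp/output.jpg", "-c", "sample.nef"],
        check=True,
    )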

RawTherapee - Total Benchmark Time (Seconds, Fewer Is Better): Ubuntu 22.10: 32.35 (SE +/- 0.11, N = 3; min/avg/max 32.23 / 32.35 / 32.57); Clear Linux: 31.12 (SE +/- 0.09, N = 3; min/avg/max 31.02 / 31.12 / 31.3). [1. Ubuntu 22.10: RawTherapee, version 5.8, command line. 2. Clear Linux: RawTherapee, version , command line.]

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
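Outside of the Phoronix Test Suite, individual PyPerformance benchmarks can be run directly with the pyperformance runner; a minimal sketch for the django_template benchmark follows (the output file path is arbitrary).

    import subprocess

    # Run a single named benchmark and write its results to a JSON file.
    subprocess.run(
        ["pyperformance", "run", "--benchmarks", "django_template",
         "--output", "/tmp/django_template.json"],
        check=True,
    )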

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, Fewer Is Better): Ubuntu 22.10: 22.2 (SE +/- 0.06, N = 3; min/avg/max 22.1 / 22.2 / 22.3); Clear Linux: 17.3 (SE +/- 0.03, N = 3; min/avg/max 17.2 / 17.27 / 17.3).

Tesseract

Tesseract is a fork of Cube 2 Sauerbraten with numerous graphics and game-play improvements. Tesseract has been in development since 2012 while its first release happened in May of 2014. Learn more via the OpenBenchmarking.org test page.

Tesseract 2014-05-12 - Resolution: 1920 x 1080 (Frames Per Second, More Is Better): Ubuntu 22.10: 999.44 (SE +/- 0.56, N = 3; min/avg/max 998.33 / 999.44 / 1000); Clear Linux: 998.32 (SE +/- 0.97, N = 3; min/avg/max 996.64 / 998.32 / 1000).

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better): Ubuntu 22.10: 0.004 (SE +/- 0.000, N = 15; min/avg/max 0 / 0 / 0.01); Clear Linux: 0.004 (SE +/- 0.000, N = 15; min/avg/max 0 / 0 / 0).

Tesseract

Tesseract is a fork of Cube 2 Sauerbraten with numerous graphics and game-play improvements. Tesseract has been in development since 2012 while its first release happened in May of 2014. Learn more via the OpenBenchmarking.org test page.

Tesseract 2014-05-12 - Resolution: 3840 x 2160 (Frames Per Second, More Is Better): Ubuntu 22.10: 896.02 (SE +/- 3.57, N = 3; min/avg/max 891.3 / 896.02 / 903.01); Clear Linux: 909.87 (SE +/- 3.78, N = 3; min/avg/max 904.68 / 909.87 / 917.22).

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
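speedtest1 is a small C program shipped in the SQLite source tree; assuming it has been built, a run comparable to this profile's enlarged problem size might look like the sketch below (the --size value mirrors the 1,000 noted above and the database path is arbitrary).

    import subprocess

    # Run SQLite's speedtest1 with an enlarged problem size against a scratch database.
    subprocess.run(
        ["./speedtest1", "--size", "1000", "/tmp/speedtest.db"],
        check=True,
    )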

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better): Ubuntu 22.10: 32.81 (SE +/- 0.03, N = 3; min/avg/max 32.77 / 32.81 / 32.87); Clear Linux: 29.76 (SE +/- 0.02, N = 3; min/avg/max 29.73 / 29.76 / 29.8). [Ubuntu 22.10 build notes: -O2; Clear Linux build notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options: -lz]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: x86_64 RdRand (Bogo Ops/s, More Is Better): Ubuntu 22.10: 82767.76 (SE +/- 9.58, N = 3; min/avg/max 82753.9 / 82767.76 / 82786.15); Clear Linux: 82742.79 (SE +/- 4.13, N = 3; min/avg/max 82737.28 / 82742.79 / 82750.87). [1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread; Ubuntu 22.10 build notes: -lapparmor -lsctp; Clear Linux build notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio]

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: test_fpu2 (Seconds, Fewer Is Better): Ubuntu 22.10: 13.99.

Benchmark: test_fpu2

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: System V Message Passing (Bogo Ops/s, More Is Better): Ubuntu 22.10: 13432520.51 (SE +/- 181986.71, N = 3; min/avg/max 13247041.77 / 13432520.51 / 13796471.44); Clear Linux: 38506364.42 (SE +/- 11801.23, N = 3; min/avg/max 38484180.99 / 38506364.42 / 38524436.73). [1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread; Ubuntu 22.10 build notes: -lapparmor -lsctp; Clear Linux build notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio]

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
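For reference, cpuminer-opt can report a hash rate without connecting to a pool by using its built-in benchmark mode; the sketch below shows one such invocation for the scrypt algorithm, leaving thread count to the program's defaults.

    import subprocess

    # Benchmark-only run of the scrypt algorithm; no pool connection is made.
    subprocess.run(
        ["cpuminer", "--benchmark", "--algo=scrypt"],
        check=True,
    )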

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, More Is Better): Ubuntu 22.10: 1128.00 (SE +/- 4.19, N = 3; min/avg/max 1121.76 / 1128 / 1135.96); Clear Linux: 1138.85 (SE +/- 0.85, N = 3; min/avg/max 1137.48 / 1138.85 / 1140.41). [1. (CXX) g++ options: -lcurl -lz -lpthread -lssl -lcrypto -lgmp; Ubuntu 22.10 build notes: -O2; Clear Linux build notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]

Cpuminer-Opt 3.20.3 - Algorithm: Garlicoin (kH/s, More Is Better): Ubuntu 22.10: 3496.70 (SE +/- 37.34, N = 3; min/avg/max 3437.06 / 3496.7 / 3565.44); Clear Linux: 3275.84 (SE +/- 41.78, N = 3; min/avg/max 3234.02 / 3275.84 / 3359.41). [same build notes as above]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Forking (Bogo Ops/s, More Is Better): Ubuntu 22.10: 113514.43 (SE +/- 721.90, N = 3; min/avg/max 112319.45 / 113514.43 / 114813.67); Clear Linux: 104947.27 (SE +/- 1444.59, N = 3; min/avg/max 103307.95 / 104947.27 / 107827.26). [1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread; Ubuntu 22.10 build notes: -lapparmor -lsctp; Clear Linux build notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio]

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, More Is Better): Ubuntu 22.10: 333.58 (SE +/- 3.47, N = 3; min/avg/max 328.57 / 333.58 / 340.24); Clear Linux: 337.82 (SE +/- 0.53, N = 3; min/avg/max 336.83 / 337.82 / 338.63). [1. (CXX) g++ options: -lcurl -lz -lpthread -lssl -lcrypto -lgmp; Ubuntu 22.10 build notes: -O2; Clear Linux build notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: MMAP (Bogo Ops/s, More Is Better): Ubuntu 22.10: 742.41 (SE +/- 1.45, N = 3; min/avg/max 739.64 / 742.41 / 744.51); Clear Linux: 798.15 (SE +/- 2.00, N = 3; min/avg/max 794.23 / 798.15 / 800.78).

Stress-NG 0.14.06 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, More Is Better): Ubuntu 22.10: 411.39 (SE +/- 0.68, N = 3; min/avg/max 410.43 / 411.39 / 412.7); Clear Linux: 457.79 (SE +/- 2.71, N = 3; min/avg/max 454.7 / 457.79 / 463.19).

Stress-NG 0.14.06 - Test: IO_uring (Bogo Ops/s, More Is Better): Ubuntu 22.10: 27676.33 (SE +/- 56.03, N = 3; min/avg/max 27597.48 / 27676.33 / 27784.71); Clear Linux: 27808.53 (SE +/- 26.25, N = 3; min/avg/max 27781.76 / 27808.53 / 27861.03).

Stress-NG 0.14.06 - Test: NUMA (Bogo Ops/s, More Is Better): Ubuntu 22.10: 681.80 (SE +/- 1.84, N = 3; min/avg/max 679.38 / 681.8 / 685.4); Clear Linux: 706.16 (SE +/- 2.32, N = 3; min/avg/max 702.46 / 706.16 / 710.43).

Stress-NG 0.14.06 - Test: Malloc (Bogo Ops/s, More Is Better): Ubuntu 22.10: 36241645.59 (SE +/- 149024.22, N = 3; min/avg/max 35957154 / 36241645.59 / 36460852.78); Clear Linux: 47064332.51 (SE +/- 102140.32, N = 3; min/avg/max 46893674.74 / 47064332.51 / 47246897.61).

Stress-NG 0.14.06 - Test: SENDFILE (Bogo Ops/s, More Is Better): Ubuntu 22.10: 588014.94 (SE +/- 3074.02, N = 3; min/avg/max 583907.59 / 588014.94 / 594030.43); Clear Linux: 595183.17 (SE +/- 8502.15, N = 3; min/avg/max 583000.57 / 595183.17 / 611548.13).

Stress-NG 0.14.06 - Test: MEMFD (Bogo Ops/s, More Is Better): Ubuntu 22.10: 2049.37 (SE +/- 18.36, N = 3; min/avg/max 2014.86 / 2049.37 / 2077.51); Clear Linux: 2343.51 (SE +/- 0.73, N = 3; min/avg/max 2342.73 / 2343.51 / 2344.96).

Stress-NG 0.14.06 - Test: Matrix Math (Bogo Ops/s, More Is Better): Ubuntu 22.10: 109789.42 (SE +/- 588.05, N = 3; min/avg/max 108660.63 / 109789.42 / 110639.8); Clear Linux: 110071.38 (SE +/- 1098.74, N = 3; min/avg/max 108607.94 / 110071.38 / 112222.77).

Stress-NG 0.14.06 - Test: Semaphores (Bogo Ops/s, More Is Better): Ubuntu 22.10: 3538392.41 (SE +/- 1451.53, N = 3; min/avg/max 3536780.97 / 3538392.41 / 3541289.37); Clear Linux: 3426675.99 (SE +/- 280.21, N = 3; min/avg/max 3426203.65 / 3426675.99 / 3427173.35).

All Stress-NG results above: 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lz -pthread; Ubuntu 22.10 build notes: -lapparmor -lsctp; Clear Linux build notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio.

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
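The execution-time figure comes from the simpleFoam solver's log (the missing log.simpleFoam file is why the Clear Linux runs below produced no result). A minimal sketch of producing such a log by hand inside an already-prepared case directory is shown below; the case directory name is an assumption.

    import subprocess

    # Run the steady-state solver in a prepared OpenFOAM case directory and
    # capture its output the way the test profile's timing step expects.
    with open("log.simpleFoam", "w") as log:
        subprocess.run(["simpleFoam"], cwd="drivaerFastback-case", stdout=log, check=True)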

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, Fewer Is Better): Ubuntu 22.10: 27.34. [1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm]

Input: drivaerFastback, Small Mesh Size - Mesh Time

Clear Linux: The test quit with a non-zero exit status. E: cat: log.simpleFoam: No such file or directory

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, Fewer Is Better): Ubuntu 22.10: 151.34. [1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm]

Input: drivaerFastback, Small Mesh Size - Execution Time

Clear Linux: The test quit with a non-zero exit status. E: cat: log.simpleFoam: No such file or directory

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, More Is Better): Ubuntu 22.10: 18520 (SE +/- 26.46, N = 3; min/avg/max 18470 / 18520 / 18560); Clear Linux: 18870 (SE +/- 96.09, N = 3; min/avg/max 18750 / 18870 / 19060).

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, More Is Better): Ubuntu 22.10: 1176.02 (SE +/- 4.42, N = 3; min/avg/max 1169.91 / 1176.02 / 1184.6); Clear Linux: 1061.90 (SE +/- 7.74, N = 3; min/avg/max 1052.08 / 1061.9 / 1077.18).

Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, More Is Better): Ubuntu 22.10: 52550 (SE +/- 141.89, N = 3; min/avg/max 52270 / 52550 / 52730); Clear Linux: 53897 (SE +/- 40.96, N = 3; min/avg/max 53820 / 53896.67 / 53960).

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, More Is Better): Ubuntu 22.10: 198680 (SE +/- 120.14, N = 3; min/avg/max 198440 / 198680 / 198810); Clear Linux: 200287 (SE +/- 170.33, N = 3; min/avg/max 199990 / 200286.67 / 200580).

Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, More Is Better): Ubuntu 22.10: 434460 (SE +/- 3729.11, N = 3; min/avg/max 427330 / 434460 / 439920); Clear Linux: 436663 (SE +/- 486.39, N = 3; min/avg/max 435750 / 436663.33 / 437410).

Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s, More Is Better): Ubuntu 22.10: 5423.94 (SE +/- 21.91, N = 3; min/avg/max 5380.23 / 5423.94 / 5448.35); Clear Linux: 5020.19 (SE +/- 20.28, N = 3; min/avg/max 4986.6 / 5020.19 / 5056.66).

All Cpuminer-Opt results above: 1. (CXX) g++ options: -lcurl -lz -lpthread -lssl -lcrypto -lgmp; Ubuntu 22.10 build notes: -O2; Clear Linux build notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop.

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, Fewer Is Better): Ubuntu 22.10: 208 (SE +/- 0.88, N = 3; min/avg/max 207 / 208.33 / 210); Clear Linux: 130 (SE +/- 0.00, N = 3; min/avg/max 130 / 130 / 130).

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 44100 - Buffer Size: 512 (Render Ratio, More Is Better): Ubuntu 22.10: 6.145353 (SE +/- 0.006537, N = 3; min/avg/max 6.13 / 6.15 / 6.15). [1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions]

Sample Rate: 44100 - Buffer Size: 512

Clear Linux: The test quit with a non-zero exit status. E: stargate: line 40: ./engine/stargate-engine: No such file or directory

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
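Since the profile times a Rust build, the underlying work is essentially a cargo release build of the Wasmer workspace; a rough sketch is below, with the feature names given as assumptions rather than the exact flags the test profile passes.

    import subprocess

    # Hypothetical release build with the two compiler backends enabled;
    # feature names are assumed and may differ from the test profile's exact flags.
    subprocess.run(
        ["cargo", "build", "--release", "--features", "cranelift,singlepass"],
        cwd="wasmer",
        check=True,
    )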

Timed Wasmer Compilation 2.3 - Time To Compile (Seconds, Fewer Is Better): Ubuntu 22.10: 30.25 (SE +/- 0.20, N = 3; min/avg/max 29.85 / 30.24 / 30.53); Clear Linux: 26.99 (SE +/- 0.32, N = 3; min/avg/max 26.38 / 26.99 / 27.46). [1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs]

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, More Is Better): Ubuntu 22.10: 199.2 (SE +/- 0.71, N = 3; min/avg/max 198.3 / 199.2 / 200.6). [1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm]

Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, More Is Better): Ubuntu 22.10: 677.5 (SE +/- 5.48, N = 3; min/avg/max 669.3 / 677.5 / 687.9). [1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm]

Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 44100 - Buffer Size: 1024 (Render Ratio, More Is Better): Ubuntu 22.10: 6.422835 (SE +/- 0.011125, N = 3; min/avg/max 6.4 / 6.42 / 6.44). [1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions]

Sample Rate: 44100 - Buffer Size: 1024

Clear Linux: The test quit with a non-zero exit status. E: stargate: line 40: ./engine/stargate-engine: No such file or directory

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better): Ubuntu 22.10: 46.7 (SE +/- 0.10, N = 3; min/avg/max 46.6 / 46.7 / 46.9); Clear Linux: 32.4 (SE +/- 0.00, N = 3; min/avg/max 32.4 / 32.4 / 32.4).

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, Fewer Is Better): Ubuntu 22.10: 44.9 (SE +/- 0.13, N = 3; min/avg/max 44.6 / 44.87 / 45); Clear Linux: 32.1 (SE +/- 0.06, N = 3; min/avg/max 32 / 32.1 / 32.2).

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better): Ubuntu 22.10: 82.1 (SE +/- 0.30, N = 3; min/avg/max 81.7 / 82.13 / 82.7); Clear Linux: 63.5 (SE +/- 0.09, N = 3; min/avg/max 63.4 / 63.53 / 63.7).

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
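A comparable stand-alone transcode to the libx264 "Live" scenario might be launched as follows; the preset, bitrate, and file names are illustrative rather than the exact vbench parameters.

    import subprocess

    # Transcode an input clip with x264 using settings in the spirit of a live scenario.
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.y4m",
         "-c:v", "libx264", "-preset", "veryfast", "-b:v", "4M",
         "output.mkv"],
        check=True,
    )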

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Live (FPS, More Is Better): Ubuntu 22.10: 353.24 (SE +/- 0.46, N = 3; min/avg/max 352.76 / 353.24 / 354.15); Clear Linux: 358.59 (SE +/- 0.12, N = 3; min/avg/max 358.42 / 358.59 / 358.83). [Clear Linux build notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma]

FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Live (Seconds, Fewer Is Better): Ubuntu 22.10: 14.30 (SE +/- 0.02, N = 3; min/avg/max 14.26 / 14.3 / 14.32); Clear Linux: 14.08 (SE +/- 0.00, N = 3; min/avg/max 14.07 / 14.08 / 14.09). [same build notes as above]

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better): Ubuntu 22.10: 197 (SE +/- 0.67, N = 3; min/avg/max 196 / 196.67 / 198); Clear Linux: 135 (SE +/- 0.00, N = 3; min/avg/max 135 / 135 / 135).

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, Fewer Is Better): Ubuntu 22.10: 158 (SE +/- 0.58, N = 3; min/avg/max 157 / 158 / 159); Clear Linux: 122 (SE +/- 0.00, N = 3; min/avg/max 122 / 122 / 122).

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
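The "Quality 100, Lossless, Highest Compression" setting corresponds to pushing cwebp's quality, lossless, and method options to their maximums; a sketch with an illustrative input file follows.

    import subprocess

    # Highest-effort lossless encode: -q 100, -lossless, and method 6 (slowest/best).
    subprocess.run(
        ["cwebp", "-q", "100", "-lossless", "-m", "6",
         "sample.jpg", "-o", "sample.webp"],
        check=True,
    )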

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless, Highest Compression (MP/s, More Is Better): Ubuntu 22.10: 0.91 (SE +/- 0.00, N = 3; min/avg/max 0.91 / 0.91 / 0.92); Clear Linux: 0.97 (SE +/- 0.00, N = 3; min/avg/max 0.97 / 0.97 / 0.97). [Ubuntu 22.10 build notes: -O2; Clear Linux build notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options: -fvisibility=hidden -lm]

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better): Ubuntu 22.10: 61.4 (SE +/- 0.32, N = 3; min/avg/max 60.9 / 61.43 / 62); Clear Linux: 41.5 (SE +/- 0.25, N = 3; min/avg/max 41.2 / 41.5 / 42).

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.
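DaCapo benchmarks are launched from a single self-contained JAR; the sketch below runs the H2 workload, with the JAR file name given as an assumption for this 9.12-MR1 release.

    import subprocess

    # Run the H2 in-memory database workload from the DaCapo suite.
    # The JAR name is assumed; adjust it to the archive actually downloaded.
    subprocess.run(
        ["java", "-jar", "dacapo-9.12-MR1-bach.jar", "h2"],
        check=True,
    )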

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, Fewer Is Better): Ubuntu 22.10: 2091 (SE +/- 34.76, N = 20; min/avg/max 1851 / 2091.35 / 2505); Clear Linux: 1435 (SE +/- 26.90, N = 20; min/avg/max 1181 / 1435.15 / 1626).

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, Fewer Is Better): Ubuntu 22.10: 8.77 (SE +/- 0.02, N = 3; min/avg/max 8.73 / 8.77 / 8.8); Clear Linux: 7.35 (SE +/- 0.01, N = 3; min/avg/max 7.33 / 7.35 / 7.36).

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, Fewer Is Better): Ubuntu 22.10: 112.0 (SE +/- 0.33, N = 3; min/avg/max 112 / 112.33 / 113); Clear Linux: 67.6 (SE +/- 0.03, N = 3; min/avg/max 67.5 / 67.57 / 67.6).

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, Fewer Is Better): Ubuntu 22.10: 6.168 (SE +/- 0.053, N = 8; min/avg/max 6.09 / 6.17 / 6.53); Clear Linux: 6.227 (SE +/- 0.052, N = 15; min/avg/max 5.96 / 6.23 / 6.47). [1. (CXX) g++ options: -O2 -lOpenCL]

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, More Is Better): Ubuntu 22.10: 182.0 (SE +/- 0.40, N = 3; min/avg/max 181.2 / 182 / 182.5). [1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm]

Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, More Is Better): Ubuntu 22.10: 624.9 (SE +/- 2.64, N = 3; min/avg/max 619.6 / 624.87 / 627.8). [1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm]

Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better): Ubuntu 22.10: 39.70 (SE +/- 0.32, N = 3; min/avg/max 39.07 / 39.7 / 40.04).

Device: CPU - Batch Size: 16 - Model: ResNet-50

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, Fewer Is Better): Ubuntu 22.10: 7.297 (SE +/- 0.015, N = 3; min/avg/max 7.27 / 7.3 / 7.32); Clear Linux: 7.732 (SE +/- 0.094, N = 15; min/avg/max 7.33 / 7.73 / 8.24). [1. (CXX) g++ options: -O2 -lOpenCL]

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: OFDM_Test (Samples / Second, More Is Better): Ubuntu 22.10: 195600000 (SE +/- 360555.13, N = 3; min/avg/max 194900000 / 195600000 / 196100000). [1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm]

Test: OFDM_Test

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/src/phy/dft/test/ofdm_test: No such file or directory

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: induct2 (Seconds, Fewer Is Better): Ubuntu 22.10: 11.07.

Benchmark: induct2

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
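Renaissance workloads are likewise driven from a single JAR; a sketch for a Spark-based workload is below, with the JAR file name and the benchmark identifier (assumed here to be naive-bayes, corresponding to the Apache Spark Bayes result) given as assumptions.

    import subprocess

    # Run one Renaissance workload; the JAR name and benchmark id are assumed.
    subprocess.run(
        ["java", "-jar", "renaissance-gpl-0.14.0.jar", "naive-bayes"],
        check=True,
    )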

Renaissance 0.14 - Test: Apache Spark Bayes (ms, Fewer Is Better): Ubuntu 22.10: 693.8 (SE +/- 1.16, N = 3; min/avg/max 692.33 / 693.82 / 696.1; MIN: 500.78 / MAX: 696.1); Clear Linux: 660.9 (SE +/- 3.29, N = 3; min/avg/max 654.95 / 660.91 / 666.31; MIN: 480.28 / MAX: 666.31).

EnCodec

EnCodec is an AI-driven audio codec developed by Facebook/Meta that compresses audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using their novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and the performance measurement is the time to encode the EnCodec file from WAV. Learn more via the OpenBenchmarking.org test page.
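At the Python level the benchmark exercises the encodec package; a minimal sketch of encoding a waveform at a 6 kbps target with the 24 kHz model is shown below, based on the package's published API (treat the exact call names as assumptions if the installed version differs, and note the input file name is illustrative).

    import torch
    import torchaudio
    from encodec import EncodecModel
    from encodec.utils import convert_audio

    # Load the 24 kHz model and request a 6 kbps target bandwidth.
    model = EncodecModel.encodec_model_24khz()
    model.set_target_bandwidth(6.0)

    # Load a speech clip and match the model's sample rate and channel count.
    wav, sr = torchaudio.load("speech.wav")
    wav = convert_audio(wav, sr, model.sample_rate, model.channels)

    with torch.no_grad():
        encoded_frames = model.encode(wav.unsqueeze(0))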

EnCodec 0.1.1 - Target Bandwidth: 24 kbps (Seconds, Fewer Is Better): Ubuntu 22.10: 21.86 (SE +/- 0.24, N = 3; min/avg/max 21.43 / 21.86 / 22.27).

Target Bandwidth: 24 kbps

Clear Linux: The test quit with a non-zero exit status. E: encodec: line 2: /.local/bin/encodec: No such file or directory

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better): Ubuntu 22.10: 50.7 (SE +/- 0.15, N = 3; min/avg/max 50.5 / 50.7 / 51); Clear Linux: 35.4 (SE +/- 0.03, N = 3; min/avg/max 35.3 / 35.37 / 35.4).

spaCy

The spaCy library is an open-source Python solution for advanced natural language processing (NLP) and is a leading NLP library. This test profile times spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
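The models named below are standard spaCy pipelines; a minimal Python sketch of loading one and pushing text through it (the throughput figures above are tokens processed per second) is:

    import spacy

    # Load the large English pipeline and process a batch of documents.
    nlp = spacy.load("en_core_web_lg")
    texts = ["The quick brown fox jumps over the lazy dog."] * 1000
    token_count = sum(len(doc) for doc in nlp.pipe(texts))
    print(f"processed {token_count} tokens")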

spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, More Is Better): Ubuntu 22.10: 20855 (SE +/- 31.07, N = 3; min/avg/max 20795 / 20855 / 20899).

Model: en_core_web_lg

Clear Linux: The test quit with a non-zero exit status. E: ValueError: 'in' is not a valid parameter name

spaCy 3.4.1 - Model: en_core_web_trf (tokens/sec, More Is Better): Ubuntu 22.10: 2523 (SE +/- 23.73, N = 3; min/avg/max 2484 / 2523.33 / 2566).

Model: en_core_web_trf

Clear Linux: The test quit with a non-zero exit status. E: ValueError: 'in' is not a valid parameter name

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better): Ubuntu 22.10: 11.5 (SE +/- 0.00, N = 3; min/avg/max 11.5 / 11.5 / 11.5); Clear Linux: 11.1 (SE +/- 0.00, N = 3; min/avg/max 11.1 / 11.1 / 11.1).

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better): Ubuntu 22.10: 859746667 (SE +/- 10982623.14, N = 3; min/avg/max 840550000 / 859746666.67 / 878590000); Clear Linux: 1003120000 (SE +/- 11825345.66, N = 3; min/avg/max 984920000 / 1003120000 / 1025300000). [Clear Linux build notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid]

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest (ms, Fewer Is Better): Ubuntu 22.10: 384.5 (SE +/- 0.53, N = 3; min/avg/max 383.5 / 384.54 / 385.26; MIN: 357.94 / MAX: 465.15); Clear Linux: 357.8 (SE +/- 3.01, N = 3; min/avg/max 353.98 / 357.75 / 363.71; MIN: 334.23 / MAX: 408.79).

EnCodec

EnCodec is an AI-driven audio codec developed by Facebook/Meta that compresses audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using their novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and the performance measurement is the time to encode the EnCodec file from WAV. Learn more via the OpenBenchmarking.org test page.

EnCodec 0.1.1 - Target Bandwidth: 6 kbps (Seconds, Fewer Is Better): Ubuntu 22.10: 19.30 (SE +/- 0.23, N = 3; min/avg/max 18.85 / 19.3 / 19.6).

Target Bandwidth: 6 kbps

Clear Linux: The test quit with a non-zero exit status. E: encodec: line 2: /.local/bin/encodec: No such file or directory

EnCodec 0.1.1 - Target Bandwidth: 3 kbps (Seconds, Fewer Is Better): Ubuntu 22.10: 19.18 (SE +/- 0.17, N = 3; min/avg/max 18.84 / 19.18 / 19.35).

Target Bandwidth: 3 kbps

Clear Linux: The test quit with a non-zero exit status. E: encodec: line 2: /.local/bin/encodec: No such file or directory

EnCodec 0.1.1 - Target Bandwidth: 1.5 kbps (Seconds, Fewer Is Better): Ubuntu 22.10: 18.55 (SE +/- 0.18, N = 3; min/avg/max 18.23 / 18.55 / 18.86).

Target Bandwidth: 1.5 kbps

Clear Linux: The test quit with a non-zero exit status. E: encodec: line 2: /.local/bin/encodec: No such file or directory

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 22.10: 142.19 (SE +/- 3.19, N = 12; Min: 107.49 / Max: 146.66)
  Clear Linux: 146.71 (SE +/- 1.15, N = 10; Min: 136.68 / Max: 148.86)
  Notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: In-Memory Database Shootout (ms, Fewer Is Better)
  Ubuntu 22.10: 1957.9 (SE +/- 26.27, N = 3; Min: 1926.33 / Max: 2010.06; MIN: 1750.66 / MAX: 2219.24)

Test: In-Memory Database Shootout

Clear Linux: The test run did not produce a result.

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, More Is Better)
  Ubuntu 22.10: 8583.03 (SE +/- 24.89, N = 3; Min: 8551.04 / Max: 8632.06)
  Clear Linux: 8580.50 (SE +/- 4.34, N = 3; Min: 8573.7 / Max: 8588.58)
  Notes: -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi; 2. Ubuntu 22.10: Open MPI 4.1.4; 3. Clear Linux: 3.2

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 22.10: 202.12 (SE +/- 1.60, N = 15; Min: 185.82 / Max: 206.4)
  Clear Linux: 201.71 (SE +/- 3.28, N = 15; Min: 180.83 / Max: 212.77)
  Notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.
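
The ModuleNotFoundError failures on Clear Linux below simply mean the tensorflow Python module could not be imported. For orientation, the reference script is normally driven along the following lines; the flag names come from the tensorflow/benchmarks repository and the script path is a placeholder, not the exact command the test profile runs.

    import subprocess

    # Hypothetical invocation of the tf_cnn_benchmarks reference script on CPU.
    subprocess.run(
        [
            "python3", "tf_cnn_benchmarks.py",   # from tensorflow/benchmarks (placeholder path)
            "--device=cpu",
            "--model=googlenet",                 # or alexnet / resnet50, as in the results here
            "--batch_size=32",
            "--data_format=NHWC",                # CPU runs generally use NHWC
        ],
        check=True,
    )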

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, More Is Better)
  Ubuntu 22.10: 113.16 (SE +/- 0.44, N = 3; Min: 112.28 / Max: 113.61)

Device: CPU - Batch Size: 32 - Model: GoogLeNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Total Mop/s, More Is Better)
  Ubuntu 22.10: 22786.60 (SE +/- 227.34, N = 3; Min: 22367.68 / Max: 23149.13)
  Clear Linux: 21769.33 (SE +/- 64.59, N = 3; Min: 21667.86 / Max: 21889.3)
  Notes: -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi; 2. Ubuntu 22.10: Open MPI 4.1.4; 3. Clear Linux: 3.2

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: rnflow (Seconds, Fewer Is Better)
  Ubuntu 22.10: 9.54

Benchmark: rnflow

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 3840 x 2160 - Effects Quality: Ultra (Frames Per Second, More Is Better)
  Ubuntu 22.10: 664.7 (SE +/- 0.21, N = 3; Min: 664.3 / Max: 665)
  Clear Linux: 722.6 (SE +/- 0.37, N = 3; Min: 722.1 / Max: 723.3)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: doduc (Seconds, Fewer Is Better)
  Ubuntu 22.10: 3.38

Benchmark: doduc

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second, More Is Better)
  Ubuntu 22.10: 671.3 (SE +/- 1.58, N = 3; Min: 668.1 / Max: 672.9)
  Clear Linux: 713.0 (SE +/- 3.41, N = 3; Min: 708.3 / Max: 719.6)

Unvanquished 0.53 - Resolution: 3840 x 2160 - Effects Quality: High (Frames Per Second, More Is Better)
  Ubuntu 22.10: 665.9 (SE +/- 6.27, N = 3; Min: 653.4 / Max: 673.1)
  Clear Linux: 713.9 (SE +/- 3.33, N = 3; Min: 710 / Max: 720.5)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, More Is Better)
  Ubuntu 22.10: 235.17 (SE +/- 0.42, N = 3; Min: 234.6 / Max: 235.99)

Device: CPU - Batch Size: 64 - Model: AlexNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, More Is Better)
  Ubuntu 22.10: 683.9 (SE +/- 2.63, N = 3; Min: 679.3 / Max: 688.4)
  Clear Linux: 731.8 (SE +/- 1.07, N = 3; Min: 729.9 / Max: 733.6)

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.
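
As a rough Python-side illustration of the post-processing step this benchmark measures, the sketch below uses rawpy, a Python binding around LibRaw, rather than the test's own C++ harness; the RAW file name is a placeholder.

    import time
    import rawpy

    # Decode + post-process one RAW photo and report throughput in Mpix/sec
    # (rawpy wraps LibRaw; "photo.nef" is a placeholder file name).
    with rawpy.imread("photo.nef") as raw:
        start = time.perf_counter()
        rgb = raw.postprocess()              # LibRaw's dcraw-style post-processing
        elapsed = time.perf_counter() - start

    print(f"{rgb.shape[0] * rgb.shape[1] / 1e6 / elapsed:.2f} Mpix/sec for this image")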

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
  Ubuntu 22.10: 70.32 (SE +/- 0.39, N = 3; Min: 69.86 / Max: 71.1)
  Clear Linux: 97.56 (SE +/- 0.64, N = 3; Min: 96.28 / Max: 98.22)
  Notes: -O2; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CXX) g++ options: -fopenmp -ljpeg -lz -lm

QuantLib

QuantLib is an open-source quantitative finance library/framework for modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.
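
The benchmark index aggregates many small pricing kernels; to give a flavor of that kind of work, here is a single European option priced with the QuantLib Python bindings. This is only a sketch with invented market data, not the benchmark's own code path.

    import QuantLib as ql

    # Price one European call analytically; all market data below is invented
    # purely for illustration.
    today = ql.Date(5, ql.November, 2022)
    ql.Settings.instance().evaluationDate = today

    option = ql.VanillaOption(ql.PlainVanillaPayoff(ql.Option.Call, 105.0),
                              ql.EuropeanExercise(ql.Date(5, ql.November, 2023)))

    spot = ql.QuoteHandle(ql.SimpleQuote(100.0))
    rates = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.03, ql.Actual365Fixed()))
    dividends = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.00, ql.Actual365Fixed()))
    vol = ql.BlackVolTermStructureHandle(
        ql.BlackConstantVol(today, ql.TARGET(), 0.20, ql.Actual365Fixed()))

    process = ql.BlackScholesMertonProcess(spot, dividends, rates, vol)
    option.setPricingEngine(ql.AnalyticEuropeanEngine(process))
    print(f"NPV: {option.NPV():.4f}")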

QuantLib 1.21 (MFLOPS, More Is Better)
  Ubuntu 22.10: 5198.7 (SE +/- 69.40, N = 3; Min: 5059.9 / Max: 5270.3)
  Clear Linux: 5528.4 (SE +/- 7.05, N = 3; Min: 5514.5 / Max: 5537.4)
  Notes: -lboost_timer -lboost_system -lboost_chrono; 1. (CXX) g++ options: -O3 -march=native -rdynamic

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: capacita (Seconds, Fewer Is Better)
  Ubuntu 22.10: 5.13

Benchmark: capacita

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 22.10: 44.87 (SE +/- 0.13, N = 3; Min: 44.63 / Max: 45.06)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K

Clear Linux: The test run did not produce a result.

CloudSuite Graph Analytics

CloudSuite Graph Analytics (ms, Fewer Is Better)
  Ubuntu 22.10: 9985 (SE +/- 63.87, N = 3; Min: 9918 / Max: 10113)

Clear Linux: The test run did not produce a result.

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, More Is Better)
  Ubuntu 22.10: 107.6 (SE +/- 0.24, N = 3; Min: 107.1 / Max: 107.9)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_nr_test: No such file or directory

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, More Is Better)
  Ubuntu 22.10: 224.3 (SE +/- 0.24, N = 3; Min: 224 / Max: 224.8)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_nr_test: No such file or directory

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, More Is Better)
  Ubuntu 22.10: 242.6 (SE +/- 2.07, N = 3; Min: 238.5 / Max: 245.1)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, More Is Better)
  Ubuntu 22.10: 683.7 (SE +/- 8.14, N = 3; Min: 667.5 / Max: 692.8)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: protein (Seconds, Fewer Is Better)
  Ubuntu 22.10: 6.93

Benchmark: protein

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
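
To make the compared settings concrete (Default, Quality 100, Quality 100 Highest Compression, Quality 100 Lossless), an equivalent encode can be reproduced from Python with Pillow's WebP support instead of the cwebp CLI; the input file name is a placeholder, and mapping method=6 to "highest compression" is an approximation of cwebp's -m 6.

    from PIL import Image

    # Encode one image at roughly the profiles compared here, via Pillow's WebP
    # support ("sample.jpg" is a placeholder input file).
    img = Image.open("sample.jpg")
    img.save("default.webp")                                # Default settings
    img.save("q100.webp", quality=100)                      # Quality 100
    img.save("q100_m6.webp", quality=100, method=6)         # ~ Quality 100, Highest Compression
    img.save("lossless.webp", lossless=True, quality=100)   # Quality 100, Lossless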

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless (MP/s, More Is Better)
  Ubuntu 22.10: 2.30 (SE +/- 0.00, N = 3; Min: 2.29 / Max: 2.3)
  Clear Linux: 2.49 (SE +/- 0.00, N = 3; Min: 2.49 / Max: 2.5)
  Notes: -O2; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options: -fvisibility=hidden -lm

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: ac (Seconds, Fewer Is Better)
  Ubuntu 22.10: 3.77

Benchmark: ac

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
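
The reported number is a sum of per-test averages. A heavily stripped-down version of the same idea, timing a couple of interpreter-level operations with timeit and summing their averages, is sketched below; it is illustrative only and not PyBench's actual harness.

    import timeit

    # Two PyBench-style micro-tests; each statement is run in batches and the
    # per-test averages are summed, mirroring the "total of averages" reporting.
    tests = {
        "BuiltinFunctionCalls": "len('spam'); abs(-1); min(1, 2)",
        "NestedForLoops": "x = 0\nfor i in range(10):\n    for j in range(10):\n        x += 1",
    }

    rounds, batch = 20, 10_000
    total_ms = 0.0
    for name, stmt in tests.items():
        avg_ms = sum(timeit.repeat(stmt, repeat=rounds, number=batch)) / rounds * 1000
        total_ms += avg_ms
        print(f"{name}: {avg_ms:.2f} ms per {batch} iterations")
    print(f"Total for average test times: {total_ms:.1f} ms")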

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
  Ubuntu 22.10: 474 (SE +/- 0.33, N = 3; Min: 473 / Max: 474)
  Clear Linux: 401 (SE +/- 0.33, N = 3; Min: 400 / Max: 401)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, More Is Better)
  Ubuntu 22.10: 233.3 (SE +/- 0.12, N = 3; Min: 233.1 / Max: 233.5)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, More Is Better)
  Ubuntu 22.10: 633.1 (SE +/- 1.34, N = 3; Min: 631 / Max: 635.6)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM

Clear Linux: The test run did not produce a result. E: srsran: line 3: ./lib/test/phy/phy_dl_test: No such file or directory

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: air (Seconds, Fewer Is Better)
  Ubuntu 22.10: 0.93

Benchmark: air

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 22.10: 66.05 (SE +/- 0.50, N = 3; Min: 65.43 / Max: 67.04)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K

Clear Linux: The test run did not produce a result.

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, More Is Better)
  Ubuntu 22.10: 1617596 (SE +/- 2367.31, N = 3; Min: 1612979 / Max: 1620812)
  Clear Linux: 3355423 (SE +/- 5678.93, N = 3; Min: 3344158 / Max: 3362309)

oneDNN

oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Ubuntu 22.10: 4.07923 (SE +/- 0.02741, N = 3; Min: 4.05 / Max: 4.13; MIN: 4.01)
  Clear Linux: 3.35354 (SE +/- 0.00599, N = 3; Min: 3.34 / Max: 3.36; MIN: 3.22)
  Notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lpthread; 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, More Is Better)
  Ubuntu 22.10: 206.94 (SE +/- 0.43, N = 3; Min: 206.42 / Max: 207.8)

Device: CPU - Batch Size: 32 - Model: AlexNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test (Requests Per Second, More Is Better)
  Ubuntu 22.10: 18147 (SE +/- 43.21, N = 3; Min: 18068 / Max: 18217)
  Clear Linux: 22735 (SE +/- 196.47, N = 8; Min: 21866 / Max: 23438)
  1. Nodejs

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
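
The Equation of State case is a small, purely vectorized array kernel, which is why the timings below land around a millisecond. The sketch that follows is a much-simplified NumPy stand-in for that style of kernel at the same project size (16384 elements); the actual benchmark uses real oceanographic state equations, so the formula here is a toy.

    import time
    import numpy as np

    # Toy "equation of state"-style kernel: a pointwise polynomial over arrays of
    # the project size used in these results (16384 elements). Illustrative only.
    n = 16384
    rng = np.random.default_rng(0)
    temp = rng.uniform(-2.0, 30.0, n)
    salt = rng.uniform(30.0, 38.0, n)

    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        density = 1000.0 + 0.8 * salt - 0.2 * temp - 0.005 * temp ** 2
    print(f"{(time.perf_counter() - start) / runs:.6f} s per evaluation")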

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State (Seconds, Fewer Is Better)
  Ubuntu 22.10: 0.001 (SE +/- 0.000, N = 15; Min: 0 / Max: 0)
  Clear Linux: 0.001 (SE +/- 0.000, N = 15; Min: 0 / Max: 0)

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 22.10: 77.26 (SE +/- 0.77, N = 3; Min: 76.13 / Max: 78.72)
  Clear Linux: 76.30 (SE +/- 0.34, N = 3; Min: 75.9 / Max: 76.98)
  Notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, More Is Better)
  Ubuntu 22.10: 117.39 (SE +/- 0.12, N = 3; Min: 117.18 / Max: 117.59)

Device: CPU - Batch Size: 16 - Model: GoogLeNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 22.10: 85.34 (SE +/- 0.15, N = 3; Min: 85.16 / Max: 85.64)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K

Clear Linux: The test run did not produce a result.

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, Fewer Is Better)
  Ubuntu 22.10: 1638 (SE +/- 14.59, N = 7; Min: 1568 / Max: 1686)

Java Test: Tradesoap

Clear Linux: The test quit with a non-zero exit status. E: Caused by: java.lang.ExceptionInInitializerError: Exception java.lang.ExceptionInInitializerError [in thread "main"]

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 22.10: 86.52 (SE +/- 0.10, N = 3; Min: 86.33 / Max: 86.69)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K

Clear Linux: The test run did not produce a result.

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, More Is Better)
  Ubuntu 22.10: 24905.50 (SE +/- 295.81, N = 3; Min: 24589.27 / Max: 25496.64)
  Clear Linux: 24835.87 (SE +/- 39.24, N = 3; Min: 24765.9 / Max: 24901.65)
  Notes: -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi; 2. Ubuntu 22.10: Open MPI 4.1.4; 3. Clear Linux: 3.2

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: aermod (Seconds, Fewer Is Better)
  Ubuntu 22.10: 2.77

Benchmark: aermod

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

CloudSuite In-Memory Analytics

CloudSuite In-Memory Analytics (ms, Fewer Is Better)
  Ubuntu 22.10: 10160 (SE +/- 47.35, N = 3; Min: 10077 / Max: 10241)

Clear Linux: The test run did not produce a result.

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 22.10: 105.39 (SE +/- 1.00, N = 3; Min: 103.39 / Max: 106.5)
  Clear Linux: 106.10 (SE +/- 0.71, N = 3; Min: 105.06 / Max: 107.45)
  Notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: mdbx (Seconds, Fewer Is Better)
  Ubuntu 22.10: 3.02

Benchmark: mdbx

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

oneDNN

oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Ubuntu 22.10: 5.77228 (SE +/- 0.00365, N = 3; Min: 5.77 / Max: 5.78; MIN: 5.56)
  Clear Linux: 5.75329 (SE +/- 0.02366, N = 3; Min: 5.71 / Max: 5.79; MIN: 5.46)
  Notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lpthread; 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, More Is Better)
  Ubuntu 22.10: 162.47 (SE +/- 0.36, N = 3; Min: 161.81 / Max: 163.05)

Device: CPU - Batch Size: 16 - Model: AlexNet

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 22.10: 123.49 (SE +/- 0.42, N = 3; Min: 122.7 / Max: 124.13)
  Clear Linux: 121.67 (SE +/- 1.33, N = 3; Min: 120.23 / Max: 124.33)
  Notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Graph500

This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data intensive loads and commonly tested on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.

Scale: 26

Ubuntu 22.10: The test quit with a non-zero exit status. E: mpirun noticed that process rank 2 with PID 0 on node phoronix-System-Product-Name exited on signal 9 (Killed).

Clear Linux: The test quit with a non-zero exit status. E: AML: Fatal: non power2 groupsize unsupported. Define macro PROCS_PER_NODE_NOT_POWER_OF_TWO to override

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Highest Compression (MP/s, More Is Better)
  Ubuntu 22.10: 5.00 (SE +/- 0.01, N = 3; Min: 4.98 / Max: 5.01)
  Clear Linux: 5.18 (SE +/- 0.00, N = 3; Min: 5.18 / Max: 5.18)
  Notes: -O2; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options: -fvisibility=hidden -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 22.10: 155.19 (SE +/- 0.68, N = 3; Min: 153.85 / Max: 156.11)
  Clear Linux: 158.73 (SE +/- 0.20, N = 3; Min: 158.38 / Max: 159.08)
  Notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 22.10: 147.87 (SE +/- 1.70, N = 3; Min: 144.99 / Max: 150.88)
  Clear Linux: 149.92 (SE +/- 0.71, N = 3; Min: 148.91 / Max: 151.29)
  Notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: linpk (Seconds, Fewer Is Better)
  Ubuntu 22.10: 1.34

Benchmark: linpk

Clear Linux: The test quit with a non-zero exit status. E: cat: '*.sum': No such file or directory

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Batch Size: 512 - Model: ResNet-50

Ubuntu 22.10: The test quit with a non-zero exit status.

Clear Linux: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.
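
What is being timed is a configure-plus-make of the CPython source tree. A minimal way to reproduce that by hand is sketched below; the source directory is a placeholder, and the --enable-optimizations / --with-lto configure flags correspond to the optimized release configuration described above (drop them for a default build).

    import os
    import subprocess
    import time

    # Configure and build a CPython source tree, timing the make step.
    # "cpython-3.10.6" is a placeholder path to an unpacked source tree.
    src = "cpython-3.10.6"
    subprocess.run(["./configure", "--enable-optimizations", "--with-lto"],
                   cwd=src, check=True)

    start = time.perf_counter()
    subprocess.run(["make", f"-j{os.cpu_count()}"], cwd=src, check=True)
    print(f"Build time: {time.perf_counter() - start:.2f} seconds")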

Timed CPython Compilation 3.10.6 - Build Configuration: Default (Seconds, Fewer Is Better)
  Ubuntu 22.10: 12.03
  Clear Linux: 13.26

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, Fewer Is Better)
  Ubuntu 22.10: 1689 (SE +/- 4.66, N = 4; Min: 1676 / Max: 1697)

Java Test: Tradebeans

Clear Linux: The test quit with a non-zero exit status. E: Caused by: java.lang.ExceptionInInitializerError: Exception java.lang.ExceptionInInitializerError [in thread "main"]

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better)
  Ubuntu 22.10: 1710 (SE +/- 8.38, N = 4; Min: 1690 / Max: 1725)
  Clear Linux: 1556 (SE +/- 9.63, N = 4; Min: 1536 / Max: 1582)

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 22.10: 216.52 (SE +/- 2.00, N = 3; Min: 213.87 / Max: 220.43)
  Clear Linux: 224.04 (SE +/- 1.88, N = 3; Min: 221.65 / Max: 227.76)
  Notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Ubuntu 22.10: 3.44347 (SE +/- 0.00262, N = 3; Min: 3.44 / Max: 3.45; MIN: 3.41)
  Clear Linux: 3.39332 (SE +/- 0.00241, N = 3; Min: 3.39 / Max: 3.4; MIN: 3.36)
  Notes: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lpthread; 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s, More Is Better)
  Ubuntu 22.10: 3262.42 (SE +/- 0.66, N = 3; Min: 3261.63 / Max: 3263.73)
  Clear Linux: 3015.68 (SE +/- 15.61, N = 3; Min: 2989.78 / Max: 3043.72)
  Notes: -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi; 2. Ubuntu 22.10: Open MPI 4.1.4; 3. Clear Linux: 3.2

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100 (MP/s, More Is Better)
  Ubuntu 22.10: 16.16 (SE +/- 0.19, N = 3; Min: 15.78 / Max: 16.35)
  Clear Linux: 17.16 (SE +/- 0.01, N = 3; Min: 17.14 / Max: 17.17)
  Notes: -O2; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options: -fvisibility=hidden -lm

WebP Image Encode 1.2.4 - Encode Settings: Default (MP/s, More Is Better)
  Ubuntu 22.10: 24.98 (SE +/- 0.28, N = 3; Min: 24.42 / Max: 25.26)
  Clear Linux: 27.17 (SE +/- 0.01, N = 3; Min: 27.15 / Max: 27.18)
  Notes: -O2; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options: -fvisibility=hidden -lm

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

Java Test: Eclipse

Ubuntu 22.10: The test quit with a non-zero exit status.

Clear Linux: The test quit with a non-zero exit status.

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
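
ctx_clock reports the cost in CPU clock cycles. A rough wall-clock approximation of the same idea, forcing switches by ping-ponging a byte between two processes over pipes, is sketched below; Python and syscall overhead make it only an upper bound, not a substitute for the cycle-accurate C tool.

    import os
    import time

    # Ping-pong one byte between parent and child over two pipes; each round trip
    # forces at least two context switches. Wall-clock only, so an upper bound.
    ROUNDS = 20_000
    p2c_r, p2c_w = os.pipe()   # parent -> child
    c2p_r, c2p_w = os.pipe()   # child -> parent

    if os.fork() == 0:         # child: echo every byte back, then exit
        for _ in range(ROUNDS):
            os.read(p2c_r, 1)
            os.write(c2p_w, b"x")
        os._exit(0)

    start = time.perf_counter_ns()
    for _ in range(ROUNDS):
        os.write(p2c_w, b"x")
        os.read(c2p_r, 1)
    elapsed = time.perf_counter_ns() - start
    os.wait()
    print(f"~{elapsed / (ROUNDS * 2):.0f} ns per switch (including syscall overhead)")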

ctx_clock - Context Switch Time (Clocks, Fewer Is Better)
  Ubuntu 22.10: 132 (SE +/- 0.00, N = 3; Min: 132 / Max: 132)
  Clear Linux: 117 (SE +/- 0.67, N = 3; Min: 116 / Max: 118)
  Notes: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; 1. (CC) gcc options:

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Backend: TensorFlow - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: PyTorch - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: Aesara - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: Numba - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: JAX - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: TensorFlow - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: Aesara - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: PyTorch - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: JAX - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Device: CPU - Backend: Numba - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Clear Linux: The test run did not produce a result.

Node.js Octane Benchmark

A Node.js version of the JavaScript Octane Benchmark. Learn more via the OpenBenchmarking.org test page.

Ubuntu 22.10: The test quit with a non-zero exit status. E: ReferenceError: GLOBAL is not defined

Clear Linux: The test quit with a non-zero exit status. E: ReferenceError: GLOBAL is not defined

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This test profile makes use of the wrk program for issuing HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

Connections: 20

Ubuntu 22.10: The test quit with a non-zero exit status.

Clear Linux: The test quit with a non-zero exit status. E: nginx: line 2: ./wrk-4.2.0/wrk: No such file or directory

Connections: 1

Ubuntu 22.10: The test quit with a non-zero exit status.

Clear Linux: The test quit with a non-zero exit status. E: nginx: line 2: ./wrk-4.2.0/wrk: No such file or directory

358 Results Shown

NWChem
Blender
OpenVKL
Timed Linux Kernel Compilation
TensorFlow
memtier_benchmark
ONNX Runtime:
  ArcFace ResNet-100 - CPU - Standard
  GPT-2 - CPU - Standard
TensorFlow
High Performance Conjugate Gradient
miniBUDE:
  OpenMP - BM2:
    Billion Interactions/s
    GFInst/s
JPEG XL libjxl:
  JPEG - 100
  PNG - 100
memtier_benchmark
OpenRadioss
memtier_benchmark
OSPRay Studio
IndigoBench
OpenRadioss
OpenSSL
Blender
FFmpeg:
  libx264 - Upload:
    FPS
    Seconds
Apache Spark:
  1000000 - 500 - SHA-512 Benchmark Time
  1000000 - 500 - Group By Test Time
  1000000 - 500 - Broadcast Inner Join Test Time
  1000000 - 500 - Calculate Pi Benchmark Using Dataframe
  1000000 - 500 - Calculate Pi Benchmark
  1000000 - 500 - Repartition Test Time
  1000000 - 500 - Inner Join Test Time
ClickHouse:
  100M Rows Web Analytics Dataset, Third Run
  100M Rows Web Analytics Dataset, Second Run
  100M Rows Web Analytics Dataset, First Run / Cold Cache
Renaissance
OSPRay Studio
HammerDB - MariaDB:
  64 - 250:
    Transactions Per Minute
    New Orders Per Minute
  64 - 100:
    Transactions Per Minute
    New Orders Per Minute
  32 - 100:
    Transactions Per Minute
    New Orders Per Minute
  32 - 250:
    Transactions Per Minute
    New Orders Per Minute
  8 - 100:
    Transactions Per Minute
    New Orders Per Minute
  16 - 250:
    Transactions Per Minute
    New Orders Per Minute
  16 - 100:
    Transactions Per Minute
    New Orders Per Minute
  8 - 250:
    Transactions Per Minute
    New Orders Per Minute
Renaissance
OSPRay Studio
Stress-NG:
  Atomic
  CPU Cache
Rodinia
OSPRay Studio
Blender
Java Gradle Build
Renaissance
Stress-NG:
  Futex
  Socket Activity
GROMACS
TensorFlow
Polyhedron Fortran Benchmarks
Timed Node.js Compilation
FFmpeg:
  libx265 - Upload:
    FPS
    Seconds
OSPRay Studio
FFmpeg:
  libx265 - Video On Demand:
    FPS
    Seconds
  libx265 - Platform:
    FPS
    Seconds
ONNX Runtime
Polyhedron Fortran Benchmarks
ONNX Runtime:
  bertsquad-12 - CPU - Standard
  yolov4 - CPU - Standard
  super-resolution-10 - CPU - Standard
FinanceBench
Renaissance
OpenRadioss
OSPRay Studio
FFmpeg:
  libx264 - Video On Demand:
    FPS
    Seconds
OpenRadioss
FFmpeg:
  libx264 - Platform:
    FPS
    Seconds
Appleseed
OSPRay Studio
TensorFlow
Xmrig
Polyhedron Fortran Benchmarks
SVT-HEVC
Renaissance
PyPerformance
Stress-NG:
  Context Switching
  Glibc C String Functions
NAS Parallel Benchmarks
TensorFlow
nginx:
  1000
  500
  200
  100
Cpuminer-Opt
Stress-NG:
  Mutex
  Crypto
FinanceBench
Rodinia
JPEG XL libjxl:
  JPEG - 80
  PNG - 80
Warsow
oneDNN
Warsow
Cpuminer-Opt
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
oneDNN
Blender
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
oneDNN
OSPRay Studio
Chaos Group V-RAY
JPEG XL libjxl:
  JPEG - 90
  PNG - 90
OpenVINO:
  Person Detection FP16 - CPU:
    ms
    FPS
  Person Detection FP32 - CPU:
    ms
    FPS
  Face Detection FP16 - CPU:
    ms
    FPS
Xonotic
Renaissance
IndigoBench
Xonotic
Renaissance
Appleseed
Xmrig
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
Stress-NG
OpenSSL:
  RSA4096:
    verify/s
    sign/s
NAS Parallel Benchmarks
Timed CPython Compilation
NAS Parallel Benchmarks
Appleseed
7-Zip Compression:
  Decompression Rating
  Compression Rating
TensorFlow
SVT-AV1
Intel Open Image Denoise
oneDNN
Renaissance
Blender
Xonotic:
  3840 x 2160 - Ultra
  1920 x 1080 - Ultra
Rodinia
Polyhedron Fortran Benchmarks
miniBUDE:
  OpenMP - BM1:
    Billion Interactions/s
    GFInst/s
TensorFlow
NAS Parallel Benchmarks
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
Stress-NG
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Node.js V8 Web Tooling Benchmark
NAMD
Zstd Compression:
  19 - Decompression Speed
  19 - Compression Speed
Polyhedron Fortran Benchmarks
Timed Linux Kernel Compilation
NAS Parallel Benchmarks
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    ms/batch
    items/sec
Stargate Digital Audio Workstation:
  96000 - 512
  96000 - 1024
NAS Parallel Benchmarks
FFmpeg:
  libx265 - Live:
    FPS
    Seconds
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
oneDNN
Apache Spark:
  1000000 - 100 - SHA-512 Benchmark Time
  1000000 - 100 - Broadcast Inner Join Test Time
  1000000 - 100 - Inner Join Test Time
  1000000 - 100 - Repartition Test Time
  1000000 - 100 - Calculate Pi Benchmark Using Dataframe
  1000000 - 100 - Group By Test Time
  1000000 - 100 - Calculate Pi Benchmark
Chia Blockchain VDF
Stress-NG
Chia Blockchain VDF
TensorFlow
RawTherapee
PyPerformance
Tesseract
PyHPC Benchmarks
Tesseract
SQLite Speedtest
Stress-NG
Polyhedron Fortran Benchmarks
Stress-NG
Cpuminer-Opt:
  x25x
  Garlicoin
Stress-NG
Cpuminer-Opt
Stress-NG:
  MMAP
  Glibc Qsort Data Sorting
  IO_uring
  NUMA
  Malloc
  SENDFILE
  MEMFD
  Matrix Math
  Semaphores
OpenFOAM:
  drivaerFastback, Small Mesh Size - Mesh Time
  drivaerFastback, Small Mesh Size - Execution Time
Cpuminer-Opt:
  Deepcoin
  Magi
  LBC, LBRY Credits
  Quad SHA-256, Pyrite
  Triple SHA-256, Onecoin
  Ringcoin
PyPerformance
Stargate Digital Audio Workstation
Timed Wasmer Compilation
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 256-QAM:
    UE Mb/s
    eNb Mb/s
Stargate Digital Audio Workstation
PyPerformance:
  float
  chaos
  regex_compile
FFmpeg:
  libx264 - Live:
    FPS
    Seconds
PyPerformance:
  pickle_pure_python
  2to3
WebP Image Encode
PyPerformance
DaCapo Benchmark
PyPerformance:
  pathlib
  go
Rodinia
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 64-QAM:
    UE Mb/s
    eNb Mb/s
TensorFlow
Rodinia
srsRAN
Polyhedron Fortran Benchmarks
Renaissance
EnCodec
PyPerformance
spaCy:
  en_core_web_lg
  en_core_web_trf
PyPerformance
Liquid-DSP
Renaissance
EnCodec:
  6 kbps
  3 kbps
  1.5 kbps
SVT-VP9
Renaissance
NAS Parallel Benchmarks
SVT-HEVC
TensorFlow
NAS Parallel Benchmarks
Polyhedron Fortran Benchmarks
Unvanquished
Polyhedron Fortran Benchmarks
Unvanquished:
  1920 x 1080 - Ultra
  3840 x 2160 - High
TensorFlow
Unvanquished
LibRaw
QuantLib
Polyhedron Fortran Benchmarks
AOM AV1
CloudSuite Graph Analytics
srsRAN:
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
  4G PHY_DL_Test 100 PRB SISO 256-QAM:
    UE Mb/s
    eNb Mb/s
Polyhedron Fortran Benchmarks
WebP Image Encode
Polyhedron Fortran Benchmarks
PyBench
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
Polyhedron Fortran Benchmarks
AOM AV1
PHPBench
oneDNN
TensorFlow
Node.js Express HTTP Load Test
PyHPC Benchmarks
SVT-AV1
TensorFlow
AOM AV1
DaCapo Benchmark
AOM AV1
NAS Parallel Benchmarks
Polyhedron Fortran Benchmarks
CloudSuite In-Memory Analytics
SVT-HEVC
Polyhedron Fortran Benchmarks
oneDNN
TensorFlow
SVT-VP9
WebP Image Encode
SVT-VP9
SVT-AV1
Polyhedron Fortran Benchmarks
Timed CPython Compilation
DaCapo Benchmark:
  Tradebeans
  Jython
SVT-AV1
oneDNN
NAS Parallel Benchmarks
WebP Image Encode:
  Quality 100
  Default
ctx_clock