Core i9 13900K Linux Distros

Intel Core i9-13900K testing with an ASUS PRIME Z790-P WIFI (0602 BIOS) motherboard and AMD Radeon RX 6800 XT 16GB graphics on Ubuntu 22.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2211049-NE-COREI913914

Result identifier: Ubuntu 22.10 - Test date: November 2, 2022 - Total test duration: 1 day, 1 hour, 48 minutes.


Core i9 13900K Linux Distros - OpenBenchmarking.org / Phoronix Test Suite

System under test:
Processor: Intel Core i9-13900K @ 4.00GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (0602 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
Network: Realtek RTL8125 2.5GbE + Intel Device 7a70
OS: Ubuntu 22.10
Kernel: 5.19.0-23-generic (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.2.1 (LLVM 15.0.2 DRM 3.47)
Vulkan: 1.3.224
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 3840x2160

System notes:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0x10e
- Thermald 2.5.1
- BAR1 / Visible vRAM Size: 16368 MB; vBIOS Version: 113-D4120500-101
- OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu1)
- Python 3.10.7
- Security mitigations: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: SW sequence; srbds: Not affected; tsx_async_abort: Not affected

Results overview: this result file covers nginx, CloudSuite, Chaos Group V-RAY, EnCodec, PHPBench, Appleseed, ONNX Runtime, Natron, PyPerformance, PyBench, IndigoBench, OpenVINO, ctx_clock, Blender, spaCy, Stress-NG, memtier_benchmark, Neural Magic DeepSparse, RawTherapee, SQLite Speedtest, TensorFlow, HammerDB - MariaDB, GROMACS, FinanceBench, Apache Spark, ClickHouse, liquid-dsp, Node.js, OpenSSL, cpuminer-opt, FFmpeg, OSPRay Studio, oneDNN, timed compilation tests, Stargate, 7-Zip, OpenVKL, Intel Open Image Denoise, SVT-VP9/HEVC/AV1, AOM AV1, LibRaw, srsRAN, WebP, JPEG XL, Zstd, Renaissance, DaCapo, Java Gradle, Chia VDF, Xmrig, OpenRadioss, OpenFOAM, NWChem, Polyhedron Fortran, NAMD, Rodinia, NAS Parallel Benchmarks, HPCG, QuantLib, Xonotic, Warsow, Unvanquished, Tesseract, DDraceNetwork, and PyHPC. The condensed per-test summary table is available on the OpenBenchmarking.org result page; individual result graphs follow below.

DDraceNetwork

DDraceNetwork 16.3.2 - 1920 x 1080 - Fullscreen - Vulkan - Zoom: Default - Demo: RaiNyMore2 - Total Frame Time (milliseconds, fewer is better): Min: 0.03 / Avg: 0.21 / Max: 1.39

DDraceNetwork 16.3.2 - 3840 x 2160 - Fullscreen - Vulkan - Zoom: Default - Demo: Multeasymap - Total Frame Time (milliseconds, fewer is better): Min: 0.09 / Avg: 0.34 / Max: 1.47

DDraceNetwork 16.3.2 - 1920 x 1080 - Fullscreen - Vulkan - Zoom: Default - Demo: Multeasymap - Total Frame Time (milliseconds, fewer is better): Min: 0.08 / Avg: 0.17 / Max: 1.23

DDraceNetwork 16.3.2 - 3840 x 2160 - Fullscreen - Vulkan - Zoom: Default - Demo: RaiNyMore2 - Total Frame Time (milliseconds, fewer is better): Min: 0.03 / Avg: 0.75 / Max: 3.04

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. The test profile makes use of the wrk program for issuing HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
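
The actual load generation in this profile is handled by wrk, but as a rough, hypothetical sketch of the workload shape (many concurrent clients hitting a local HTTPS endpoint with a self-signed certificate for a fixed duration), a stdlib-only Python approximation could look like this; the URL, port, and concurrency below are illustrative placeholders rather than the profile's exact parameters:

    import ssl, time, urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://localhost:8089/"    # hypothetical local nginx instance with a self-signed cert
    CONNECTIONS = 100                  # concurrency, loosely matching the "Connections: 100" run
    DURATION = 10                      # seconds of load

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE    # accept the self-signed certificate

    def worker(deadline):
        done = 0
        while time.time() < deadline:
            with urllib.request.urlopen(URL, context=ctx) as resp:
                resp.read()
            done += 1
        return done

    deadline = time.time() + DURATION
    with ThreadPoolExecutor(max_workers=CONNECTIONS) as pool:
        totals = list(pool.map(worker, [deadline] * CONNECTIONS))
    print(f"~{sum(totals) / DURATION:.0f} requests/sec")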

nginx 1.23.2 - Connections: 1000: 192021.95 Requests Per Second (more is better; SE +/- 624.40, N = 3)
nginx 1.23.2 - Connections: 500: 203069.78 Requests Per Second (more is better; SE +/- 492.12, N = 3)
nginx 1.23.2 - Connections: 200: 205841.24 Requests Per Second (more is better; SE +/- 636.13, N = 3)
nginx 1.23.2 - Connections: 100: 204910.46 Requests Per Second (more is better; SE +/- 1164.13, N = 3)

CloudSuite In-Memory Analytics

CloudSuite In-Memory Analytics: 10160 ms (fewer is better; SE +/- 47.35, N = 3)

CloudSuite Graph Analytics

CloudSuite Graph Analytics: 9985 ms (fewer is better; SE +/- 63.87, N = 3)

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5.02 - Mode: CPU: 28734 vsamples (more is better; SE +/- 190.70, N = 3)

EnCodec

EnCodec is a Facebook/Meta-developed AI method for compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression down to 6 kbps using this novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input, and the performance measurement is the time to encode the file from WAV with EnCodec. Learn more via the OpenBenchmarking.org test page.
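
As a minimal sketch of the operation being timed here, assuming the upstream facebookresearch/encodec Python package and a placeholder WAV file (the test profile itself feeds in a JFK speech recording), the encode step looks roughly like:

    import torchaudio
    from encodec import EncodecModel
    from encodec.utils import convert_audio

    model = EncodecModel.encodec_model_24khz()
    model.set_target_bandwidth(6.0)                  # 6 kbps, matching one of the runs below

    wav, sr = torchaudio.load("speech.wav")          # placeholder WAV input
    wav = convert_audio(wav, sr, model.sample_rate, model.channels)

    encoded_frames = model.encode(wav.unsqueeze(0))  # the encode step this benchmark times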

EnCodec 0.1.1 - Target Bandwidth: 1.5 kbps: 18.55 Seconds (fewer is better; SE +/- 0.18, N = 3)
EnCodec 0.1.1 - Target Bandwidth: 24 kbps: 21.86 Seconds (fewer is better; SE +/- 0.24, N = 3)
EnCodec 0.1.1 - Target Bandwidth: 6 kbps: 19.30 Seconds (fewer is better; SE +/- 0.23, N = 3)
EnCodec 0.1.1 - Target Bandwidth: 3 kbps: 19.18 Seconds (fewer is better; SE +/- 0.17, N = 3)

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite: 1617596 (Score, more is better; SE +/- 2367.31, N = 3)

Appleseed

Appleseed is an open-source, physically-based global illumination production rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester: 90.78 Seconds (fewer is better)
Appleseed 2.0 Beta - Scene: Disney Material: 84.86 Seconds (fewer is better)
Appleseed 2.0 Beta - Scene: Emily: 164.94 Seconds (fewer is better)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
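
For context, a minimal Python sketch of running one of these models through ONNX Runtime's CPU execution provider is shown below; the model path is a placeholder and the random input is only for illustration, whereas the test profile drives the runtime through its own harness and reports inferences per minute:

    import numpy as np
    import onnxruntime as ort

    # "model.onnx" is a placeholder; the test profile pulls models such as GPT-2,
    # yolov4, and super-resolution-10 from the ONNX Model Zoo.
    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

    inp = sess.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]   # substitute 1 for dynamic dims
    x = np.random.rand(*shape).astype(np.float32)

    outputs = sess.run(None, {inp.name: x})   # one inference; the benchmark counts inferences per minute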

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard: 6842 Inferences Per Minute (more is better; SE +/- 2.84, N = 3)
ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard: 598 Inferences Per Minute (more is better; SE +/- 0.17, N = 3)
ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard: 133 Inferences Per Minute (more is better; SE +/- 0.17, N = 3)
ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard: 1210 Inferences Per Minute (more is better; SE +/- 0.60, N = 3)
ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard: 687 Inferences Per Minute (more is better; SE +/- 0.29, N = 3)
ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard: 10578 Inferences Per Minute (more is better; SE +/- 19.87, N = 3)

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3 - Input: Spaceship: 6.7 FPS (more is better; SE +/- 0.03, N = 3)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
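
Each PyPerformance result below is a mean time per iteration for a small, self-contained Python workload. The suite uses its own pyperf-based harness, but as a rough stdlib-only illustration of the measurement style:

    import timeit

    # A small pure-Python workload, similar in spirit to the float/nbody micro-benchmarks.
    stmt = "sum(i * 1.000001 for i in range(10_000))"
    per_iter = min(timeit.repeat(stmt, number=1_000, repeat=5)) / 1_000
    print(f"{per_iter * 1e3:.3f} ms per iteration")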

PyPerformance 1.0.0 - Benchmark: pickle_pure_python: 197 Milliseconds (fewer is better; SE +/- 0.67, N = 3)
PyPerformance 1.0.0 - Benchmark: django_template: 22.2 Milliseconds (fewer is better; SE +/- 0.06, N = 3)
PyPerformance 1.0.0 - Benchmark: python_startup: 7.39 Milliseconds (fewer is better; SE +/- 0.04, N = 3)
PyPerformance 1.0.0 - Benchmark: regex_compile: 82.1 Milliseconds (fewer is better; SE +/- 0.30, N = 3)
PyPerformance 1.0.0 - Benchmark: crypto_pyaes: 50.7 Milliseconds (fewer is better; SE +/- 0.15, N = 3)
PyPerformance 1.0.0 - Benchmark: json_loads: 11.5 Milliseconds (fewer is better; SE +/- 0.00, N = 3)
PyPerformance 1.0.0 - Benchmark: raytrace: 208 Milliseconds (fewer is better; SE +/- 0.88, N = 3)
PyPerformance 1.0.0 - Benchmark: pathlib: 8.77 Milliseconds (fewer is better; SE +/- 0.02, N = 3)
PyPerformance 1.0.0 - Benchmark: nbody: 61.4 Milliseconds (fewer is better; SE +/- 0.32, N = 3)
PyPerformance 1.0.0 - Benchmark: float: 46.7 Milliseconds (fewer is better; SE +/- 0.10, N = 3)
PyPerformance 1.0.0 - Benchmark: chaos: 44.9 Milliseconds (fewer is better; SE +/- 0.13, N = 3)
PyPerformance 1.0.0 - Benchmark: 2to3: 158 Milliseconds (fewer is better; SE +/- 0.58, N = 3)
PyPerformance 1.0.0 - Benchmark: go: 112 Milliseconds (fewer is better; SE +/- 0.33, N = 3)

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times: 474 Milliseconds (fewer is better; SE +/- 0.33, N = 3)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar: 11.80 M samples/s (more is better; SE +/- 0.02, N = 3)
IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom: 4.051 M samples/s (more is better; SE +/- 0.040, N = 3)

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
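
The numbers below come from OpenVINO's own benchmarking support; purely as a hedged sketch of how a model is loaded and run on the CPU plugin with the 2022-era Python API (the model path and zeroed input are placeholders):

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")              # placeholder OpenVINO IR model
    compiled = core.compile_model(model, "CPU")

    request = compiled.create_infer_request()
    port = compiled.input(0)
    x = np.zeros(list(port.shape), dtype=np.float32)  # dummy input matching the model's static shape
    request.infer({port: x})                          # one synchronous inference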

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU: 0.72 ms (fewer is better; SE +/- 0.00, N = 3; MIN: 0.42 / MAX: 4.5)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU: 33018.17 FPS (more is better; SE +/- 40.02, N = 3)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU: 1.64 ms (fewer is better; SE +/- 0.00, N = 3; MIN: 0.87 / MAX: 9.06)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU: 14593.71 FPS (more is better; SE +/- 10.91, N = 3)
OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU: 10.96 ms (fewer is better; SE +/- 0.03, N = 3; MIN: 7.62 / MAX: 52.6)
OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU: 728.91 FPS (more is better; SE +/- 2.34, N = 3)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU: 14.63 ms (fewer is better; SE +/- 0.01, N = 3; MIN: 6.68 / MAX: 120.59)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU: 1638.99 FPS (more is better; SE +/- 1.15, N = 3)
OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU: 125.97 ms (fewer is better; SE +/- 0.17, N = 3; MIN: 91.09 / MAX: 325.99)
OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU: 63.48 FPS (more is better; SE +/- 0.09, N = 3)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU: 51.14 ms (fewer is better; SE +/- 0.05, N = 3; MIN: 22.47 / MAX: 182.99)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU: 468.50 FPS (more is better; SE +/- 0.50, N = 3)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU: 8.72 ms (fewer is better; SE +/- 0.01, N = 3; MIN: 5.93 / MAX: 54.15)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU: 916.05 FPS (more is better; SE +/- 0.85, N = 3)
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU: 438.23 ms (fewer is better; SE +/- 0.21, N = 3; MIN: 270.29 / MAX: 1085.79)
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU: 18.23 FPS (more is better; SE +/- 0.01, N = 3)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU: 21.43 ms (fewer is better; SE +/- 0.11, N = 3; MIN: 12.24 / MAX: 94.19)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU: 372.85 FPS (more is better; SE +/- 1.93, N = 3)
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU: 2238.02 ms (fewer is better; SE +/- 5.80, N = 3; MIN: 1692.38 / MAX: 2991.49)
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU: 3.55 FPS (more is better; SE +/- 0.02, N = 3)
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU: 2222.92 ms (fewer is better; SE +/- 6.08, N = 3; MIN: 1682.82 / MAX: 2975.44)
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU: 3.58 FPS (more is better; SE +/- 0.01, N = 3)
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU: 1570.46 ms (fewer is better; SE +/- 3.93, N = 3; MIN: 1396.06 / MAX: 1856.59)
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU: 5.08 FPS (more is better; SE +/- 0.01, N = 3)

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
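
ctx_clock itself is a small C program reporting clock cycles. As a loose illustration of the same idea, forcing repeated switches between two processes and averaging the elapsed time, a Python sketch (measuring nanoseconds of wall time rather than cycles, so only an upper bound that also includes syscall overhead) might look like:

    import os, time

    ROUND_TRIPS = 100_000
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()

    if os.fork() == 0:                    # child process: echo one byte back per byte received
        for _ in range(ROUND_TRIPS):
            os.read(r1, 1)
            os.write(w2, b"x")
        os._exit(0)

    start = time.perf_counter_ns()
    for _ in range(ROUND_TRIPS):
        os.write(w1, b"x")                # each round trip forces at least two context switches
        os.read(r2, 1)
    elapsed = time.perf_counter_ns() - start
    os.wait()
    print(f"~{elapsed / (2 * ROUND_TRIPS):.0f} ns per switch (upper bound)")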

ctx_clock - Context Switch Time: 132 Clocks (fewer is better; SE +/- 0.00, N = 3)

Blender

Blender 3.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only: 178.95 Seconds (fewer is better; SE +/- 0.13, N = 3)
Blender 3.3 - Blend File: Barbershop - Compute: CPU-Only: 576.35 Seconds (fewer is better; SE +/- 0.42, N = 3)
Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only: 75.32 Seconds (fewer is better; SE +/- 0.11, N = 3)
Blender 3.3 - Blend File: Classroom - Compute: CPU-Only: 147.64 Seconds (fewer is better; SE +/- 0.34, N = 3)
Blender 3.3 - Blend File: BMW27 - Compute: CPU-Only: 51.61 Seconds (fewer is better; SE +/- 0.16, N = 3)

spaCy

The spaCy library is an open-source, Python-based solution for advanced natural language processing (NLP) and is one of the leading libraries in that space. This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
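
As a quick reference for what is being timed, loading and running one of the benchmarked pipelines in Python looks like the following; the model must first be downloaded with python -m spacy download en_core_web_lg, and the sample sentence is arbitrary:

    import spacy

    nlp = spacy.load("en_core_web_lg")    # one of the two pipelines timed below
    doc = nlp("The Core i9-13900K was benchmarked on Ubuntu 22.10 with the Phoronix Test Suite.")
    print([(token.text, token.pos_) for token in doc])
    print([(ent.text, ent.label_) for ent in doc.ents])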

spaCy 3.4.1 - Model: en_core_web_trf: 2523 tokens/sec (more is better; SE +/- 23.73, N = 3)
spaCy 3.4.1 - Model: en_core_web_lg: 20855 tokens/sec (more is better; SE +/- 31.07, N = 3)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: System V Message Passing: 13432520.51 Bogo Ops/s (more is better; SE +/- 181986.71, N = 3)
Stress-NG 0.14.06 - Test: Glibc Qsort Data Sorting: 411.39 Bogo Ops/s (more is better; SE +/- 0.68, N = 3)
Stress-NG 0.14.06 - Test: Glibc C String Functions: 4307014.94 Bogo Ops/s (more is better; SE +/- 40661.15, N = 15)
Stress-NG 0.14.06 - Test: Context Switching: 14703175.32 Bogo Ops/s (more is better; SE +/- 181309.12, N = 4)
Stress-NG 0.14.06 - Test: Memory Copying: 7385.39 Bogo Ops/s (more is better; SE +/- 10.70, N = 3)
Stress-NG 0.14.06 - Test: x86_64 RdRand: 82767.76 Bogo Ops/s (more is better; SE +/- 9.58, N = 3)
Stress-NG 0.14.06 - Test: Vector Math: 119832.03 Bogo Ops/s (more is better; SE +/- 966.54, N = 9)
Stress-NG 0.14.06 - Test: Matrix Math: 109789.42 Bogo Ops/s (more is better; SE +/- 588.05, N = 3)
Stress-NG 0.14.06 - Test: Semaphores: 3538392.41 Bogo Ops/s (more is better; SE +/- 1451.53, N = 3)
Stress-NG 0.14.06 - Test: CPU Stress: 51634.54 Bogo Ops/s (more is better; SE +/- 538.64, N = 3)
Stress-NG 0.14.06 - Test: CPU Cache: 98.77 Bogo Ops/s (more is better; SE +/- 1.23, N = 15)
Stress-NG 0.14.06 - Test: SENDFILE: 588014.94 Bogo Ops/s (more is better; SE +/- 3074.02, N = 3)
Stress-NG 0.14.06 - Test: IO_uring: 27676.33 Bogo Ops/s (more is better; SE +/- 56.03, N = 3)
Stress-NG 0.14.06 - Test: Forking: 113514.43 Bogo Ops/s (more is better; SE +/- 721.90, N = 3)
Stress-NG 0.14.06 - Test: Malloc: 36241645.59 Bogo Ops/s (more is better; SE +/- 149024.22, N = 3)
Stress-NG 0.14.06 - Test: Crypto: 42378.79 Bogo Ops/s (more is better; SE +/- 294.46, N = 15)
Stress-NG 0.14.06 - Test: Mutex: 16748823.88 Bogo Ops/s (more is better; SE +/- 110264.57, N = 15)
Stress-NG 0.14.06 - Test: MEMFD: 2049.37 Bogo Ops/s (more is better; SE +/- 18.36, N = 3)
Stress-NG 0.14.06 - Test: Futex: 3538590.31 Bogo Ops/s (more is better; SE +/- 33712.97, N = 15)
Stress-NG 0.14.06 - Test: NUMA: 681.80 Bogo Ops/s (more is better; SE +/- 1.84, N = 3)
Stress-NG 0.14.06 - Test: MMAP: 742.41 Bogo Ops/s (more is better; SE +/- 1.45, N = 3)

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
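
memtier_benchmark is a standalone C++ tool, so the following is not its implementation; it is only a hedged redis-py sketch of what a 1:10 set-to-get ratio workload means, assuming a Redis server on localhost:6379:

    import time
    import redis   # third-party redis-py client, used here only for illustration

    r = redis.Redis(host="localhost", port=6379)
    N, sets, gets = 10_000, 1, 10         # 1:10 set-to-get ratio, as in the first run below

    start = time.perf_counter()
    for i in range(N):
        if i % (sets + gets) < sets:
            r.set(f"key:{i}", "value")
        else:
            r.get(f"key:{i % 100}")
    print(f"~{N / (time.perf_counter() - start):.0f} ops/sec (single client, no pipelining)")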

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10: 3456218.18 Ops/sec (more is better; SE +/- 41614.46, N = 3)
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 10:1: 3127334.75 Ops/sec (more is better; SE +/- 38170.43, N = 4)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream: 90.48 ms/batch (fewer is better; SE +/- 0.21, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream: 11.05 items/sec (more is better; SE +/- 0.03, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream: 964.44 ms/batch (fewer is better; SE +/- 6.80, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream: 12.19 items/sec (more is better; SE +/- 0.14, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream: 24.80 ms/batch (fewer is better; SE +/- 0.09, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream: 40.32 items/sec (more is better; SE +/- 0.14, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream: 227.14 ms/batch (fewer is better; SE +/- 0.46, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream: 52.54 items/sec (more is better; SE +/- 0.07, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream: 12.49 ms/batch (fewer is better; SE +/- 0.01, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream: 80.03 items/sec (more is better; SE +/- 0.05, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream: 115.35 ms/batch (fewer is better; SE +/- 0.26, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream: 103.90 items/sec (more is better; SE +/- 0.23, N = 3)
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream: 10.35 ms/batch (fewer is better; SE +/- 0.01, N = 3)
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream: 96.59 items/sec (more is better; SE +/- 0.14, N = 3)
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream: 78.24 ms/batch (fewer is better; SE +/- 0.05, N = 3)
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream: 153.13 items/sec (more is better; SE +/- 0.21, N = 3)
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream: 17.59 ms/batch (fewer is better; SE +/- 0.05, N = 3)
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream: 56.83 items/sec (more is better; SE +/- 0.15, N = 3)
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream: 159.94 ms/batch (fewer is better; SE +/- 0.49, N = 3)
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream: 74.69 items/sec (more is better; SE +/- 0.23, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream: 31.07 ms/batch (fewer is better; SE +/- 0.23, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream: 32.18 items/sec (more is better; SE +/- 0.24, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream: 236.60 ms/batch (fewer is better; SE +/- 0.71, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream: 50.40 items/sec (more is better; SE +/- 0.20, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream: 90.05 ms/batch (fewer is better; SE +/- 0.21, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream: 11.10 items/sec (more is better; SE +/- 0.03, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream: 967.89 ms/batch (fewer is better; SE +/- 3.09, N = 3)
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream: 12.05 items/sec (more is better; SE +/- 0.04, N = 3)

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee 5.8 (command line) - Total Benchmark Time: 32.35 Seconds (fewer is better; SE +/- 0.11, N = 3)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000: 32.81 Seconds (fewer is better; SE +/- 0.03, N = 3)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
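
The results below come from tf_cnn_benchmarks.py; as a simplified, hypothetical sketch of measuring CPU inference throughput in images/sec with a Keras model (untrained weights and random input, so only the shape of the measurement matches):

    import time
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.ResNet50(weights=None)        # untrained weights suffice for throughput
    batch = np.random.rand(32, 224, 224, 3).astype(np.float32)  # batch size 32, as in one run below

    model.predict(batch, verbose=0)                              # warm-up
    runs = 10
    start = time.perf_counter()
    for _ in range(runs):
        model.predict(batch, verbose=0)
    elapsed = time.perf_counter() - start
    print(f"{runs * batch.shape[0] / elapsed:.1f} images/sec")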

TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: GoogLeNet: 109.34 images/sec (more is better; SE +/- 0.36, N = 3)
TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: ResNet-50: 37.06 images/sec (more is better; SE +/- 0.02, N = 3)
TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: GoogLeNet: 109.10 images/sec (more is better; SE +/- 0.20, N = 3)
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50: 37.68 images/sec (more is better; SE +/- 0.05, N = 3)
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet: 111.20 images/sec (more is better; SE +/- 0.22, N = 3)
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50: 38.51 images/sec (more is better; SE +/- 0.08, N = 3)
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet: 113.16 images/sec (more is better; SE +/- 0.44, N = 3)
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50: 39.70 images/sec (more is better; SE +/- 0.32, N = 3)
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: GoogLeNet: 117.39 images/sec (more is better; SE +/- 0.12, N = 3)
TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: AlexNet: 264.59 images/sec (more is better; SE +/- 0.08, N = 3)
TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: AlexNet: 256.55 images/sec (more is better; SE +/- 0.32, N = 3)
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: AlexNet: 235.17 images/sec (more is better; SE +/- 0.42, N = 3)
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: AlexNet: 206.94 images/sec (more is better; SE +/- 0.43, N = 3)
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet: 162.47 images/sec (more is better; SE +/- 0.36, N = 3)

HammerDB - MariaDB

This is a MariaDB MySQL database server benchmark making use of the HammerDB benchmarking / load testing tool. Learn more via the OpenBenchmarking.org test page.

HammerDB - MariaDB 10.9.3 - Virtual Users: 64 - Warehouses: 250: 89332 Transactions Per Minute; 38440 New Orders Per Minute (more is better)
HammerDB - MariaDB 10.9.3 - Virtual Users: 64 - Warehouses: 100: 90768 Transactions Per Minute; 39063 New Orders Per Minute (more is better)
HammerDB - MariaDB 10.9.3 - Virtual Users: 32 - Warehouses: 250: 82861 Transactions Per Minute; 35682 New Orders Per Minute (more is better)
HammerDB - MariaDB 10.9.3 - Virtual Users: 32 - Warehouses: 100: 88315 Transactions Per Minute; 38008 New Orders Per Minute (more is better)
HammerDB - MariaDB 10.9.3 - Virtual Users: 16 - Warehouses: 250: 86541 Transactions Per Minute; 37140 New Orders Per Minute (more is better)
HammerDB - MariaDB 10.9.3 - Virtual Users: 16 - Warehouses: 100: 86277 Transactions Per Minute; 37182 New Orders Per Minute (more is better)
HammerDB - MariaDB 10.9.3 - Virtual Users: 8 - Warehouses: 250: 72163 Transactions Per Minute; 31038 New Orders Per Minute (more is better)
HammerDB - MariaDB 10.9.3 - Virtual Users: 8 - Warehouses: 100: 89121 Transactions Per Minute; 38236 New Orders Per Minute (more is better)

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare - Ubuntu 22.10: 1.414 Ns Per Day (More Is Better; SE +/- 0.002, N = 3)
1. (CXX) g++ options: -O3

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and on the CPU with OpenMP. The FinanceBench test cases cover the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), fixed-rate bonds with a flat forward curve, and repo (securities repurchase agreement) pricing. FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.
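
For reference, the analytic European option piece of the first workload boils down to the Black-Scholes-Merton closed-form price. The sketch below is only an illustration of that formula with made-up inputs, not FinanceBench's actual OpenMP kernel:

    from math import exp, log, sqrt
    from statistics import NormalDist

    def bs_call_price(spot, strike, rate, vol, years):
        """Closed-form Black-Scholes-Merton price of a European call option."""
        d1 = (log(spot / strike) + (rate + 0.5 * vol * vol) * years) / (vol * sqrt(years))
        d2 = d1 - vol * sqrt(years)
        cdf = NormalDist().cdf  # standard normal CDF
        return spot * cdf(d1) - strike * exp(-rate * years) * cdf(d2)

    # Hypothetical option: spot 100, strike 105, 2% rate, 25% volatility, 1 year.
    print(round(bs_call_price(100.0, 105.0, 0.02, 0.25, 1.0), 4))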

FinanceBench 2016-07-25 - Ubuntu 22.10 (ms, Fewer Is Better):
Benchmark: Bonds OpenMP: 30942.01 (SE +/- 28.94, N = 3)
Benchmark: Repo OpenMP: 19315.83 (SE +/- 17.57, N = 3)
1. (CXX) g++ options: -O3 -march=native -fopenmp

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
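
As a rough idea of what the timed operations look like, the snippet below times a repartition plus group-by over a small generated DataFrame with PySpark. It is only a sketch in the spirit of the pyspark-benchmark scripts; the column names and row count here are invented, and the real test drives spark-submit with its own data generator.

    import time
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.master("local[*]").appName("sketch").getOrCreate()

    # Generate a toy DataFrame; the real benchmark generates its own test data.
    df = spark.range(1_000_000).withColumn("key", F.col("id") % 100)

    start = time.time()
    df.repartition(500).groupBy("key").count().collect()  # force execution
    print(f"Repartition + group-by took {time.time() - start:.2f} seconds")
    spark.stop()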

Apache Spark 3.3 - Ubuntu 22.10 (Seconds, Fewer Is Better):
Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time: 0.79 (SE +/- 0.01, N = 15)
Row Count: 1000000 - Partitions: 500 - Repartition Test Time: 1.09 (SE +/- 0.01, N = 15)
Row Count: 1000000 - Partitions: 500 - Group By Test Time: 2.44 (SE +/- 0.01, N = 15)
Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe: 3.26 (SE +/- 0.04, N = 15)
Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark: 52.24 (SE +/- 0.06, N = 15)
Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time: 2.03 (SE +/- 0.02, N = 15)
Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time: 0.77 (SE +/- 0.02, N = 3)
Row Count: 1000000 - Partitions: 100 - Repartition Test Time: 1.02 (SE +/- 0.01, N = 3)
Row Count: 1000000 - Partitions: 100 - Group By Test Time: 2.63 (SE +/- 0.02, N = 3)
Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe: 3.45 (SE +/- 0.03, N = 3)
Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark: 51.76 (SE +/- 0.18, N = 3)
Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time: 1.99 (SE +/- 0.01, N = 3)

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.
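
The "Queries Per Minute, Geo Mean" figures below are geometric means over the individual benchmark queries rather than simple averages, which keeps a few very fast or very slow queries from dominating the result. A small sketch of that aggregation, with hypothetical per-query rates:

    import math

    # Hypothetical per-query rates in queries per minute; the real run covers
    # the full ClickHouse benchmark query set against the 100M-row dataset.
    rates = [24.7, 180.0, 950.0, 3100.0]
    geo_mean = math.exp(sum(math.log(r) for r in rates) / len(rates))
    print(f"{geo_mean:.2f} queries per minute (geometric mean)")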

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset - Ubuntu 22.10 (Queries Per Minute, Geo Mean; More Is Better):
Third Run: 308.97 (SE +/- 1.54, N = 15; MIN: 24.68 / MAX: 30000)
Second Run: 307.65 (SE +/- 1.50, N = 15; MIN: 24.43 / MAX: 30000)
First Run / Cold Cache: 301.66 (SE +/- 2.14, N = 15; MIN: 24.67 / MAX: 30000)
1. ClickHouse server version 22.5.4.19 (official build).

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
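
The samples/s figure reflects how quickly fixed-size buffers can be pushed through a filter of the given length. A single-threaded NumPy analogue of streaming 256-sample buffers through a 57-tap FIR filter (not liquid-dsp's own code, and far slower than the optimized C library) would look roughly like:

    import time
    import numpy as np

    taps = np.ones(57, dtype=np.float32) / 57        # 57-tap moving-average FIR
    buf = np.random.rand(256).astype(np.float32)     # 256-sample buffer

    iterations = 20_000
    start = time.time()
    for _ in range(iterations):
        np.convolve(buf, taps, mode="same")
    elapsed = time.time() - start
    print(f"{iterations * buf.size / elapsed:,.0f} samples/s (single-threaded NumPy)")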

Liquid-DSP 2021.01.31 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 - Ubuntu 22.10: 859746667 samples/s (More Is Better; SE +/- 10982623.14, N = 3)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark - Ubuntu 22.10: 26.33 runs/s (More Is Better; SE +/- 0.17, N = 3)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
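
The test itself drives OpenSSL's built-in "openssl speed" harness; purely as an illustration of how a byte/s throughput number is derived, one can hash a buffer in a loop and divide the bytes processed by the elapsed time. The sketch below uses Python's hashlib rather than OpenSSL's optimized assembly, so the absolute number will be far lower than the result reported here:

    import hashlib
    import time

    buf = b"\x00" * (1 << 20)            # 1 MiB buffer of zeros
    start = time.time()
    total = 0
    while time.time() - start < 3.0:     # hash for roughly three seconds
        hashlib.sha256(buf).digest()
        total += len(buf)
    print(f"{total / (time.time() - start):,.0f} bytes/s (hashlib, single thread)")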

OpenSSL 3.0 - Ubuntu 22.10 (More Is Better):
Algorithm: RSA4096: 358806.7 verify/s (SE +/- 142.92, N = 3)
Algorithm: RSA4096: 5496.9 sign/s (SE +/- 5.03, N = 3)
Algorithm: SHA256: 35956999233 byte/s (SE +/- 90907848.90, N = 3)
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
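
A hash rate in kH/s simply counts candidate hashes per second. The toy loop below computes double SHA-256 (the SHA-256d construction used by Bitcoin-style coins) over a dummy block header with hashlib; it only illustrates what the metric means, since cpuminer-opt's kernels are hand-vectorized C/assembly and run orders of magnitude faster:

    import hashlib
    import time

    header = b"\x00" * 80                 # dummy 80-byte block header
    start = time.time()
    hashes = 0
    while time.time() - start < 2.0:
        hashlib.sha256(hashlib.sha256(header).digest()).digest()  # SHA-256d
        hashes += 1
    rate = hashes / (time.time() - start)
    print(f"{rate / 1000:.1f} kH/s (pure Python, illustrative only)")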

Cpuminer-Opt 3.20.3 - Ubuntu 22.10 (kH/s, More Is Better):
Algorithm: Triple SHA-256, Onecoin: 434460 (SE +/- 3729.11, N = 3)
Algorithm: Quad SHA-256, Pyrite: 198680 (SE +/- 120.14, N = 3)
Algorithm: LBC, LBRY Credits: 52550 (SE +/- 141.89, N = 3)
Algorithm: Myriad-Groestl: 17130 (SE +/- 98.66, N = 3)
Algorithm: Skeincoin: 156343 (SE +/- 1874.06, N = 4)
Algorithm: Garlicoin: 3496.70 (SE +/- 37.34, N = 3)
Algorithm: Blake-2 S: 765587 (SE +/- 9180.18, N = 3)
Algorithm: Ringcoin: 5423.94 (SE +/- 21.91, N = 3)
Algorithm: Deepcoin: 18520 (SE +/- 26.46, N = 3)
Algorithm: scrypt: 333.58 (SE +/- 3.47, N = 3)
Algorithm: x25x: 1128.00 (SE +/- 4.19, N = 3)
Algorithm: Magi: 1176.02 (SE +/- 4.42, N = 3)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
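
Conceptually, each scenario times how long FFmpeg takes to transcode a clip with the chosen encoder. A bare-bones version of that measurement is sketched below with Python's subprocess module; "input.mp4" is a placeholder clip, and the vbench scenarios apply their own encoder settings rather than this generic preset:

    import subprocess
    import time

    start = time.time()
    subprocess.run(
        ["ffmpeg", "-i", "input.mp4",            # placeholder source clip
         "-c:v", "libx264", "-preset", "medium",
         "-an", "-f", "null", "-"],              # discard output, video-only timing
        check=True,
    )
    print(f"Transcode took {time.time() - start:.2f} seconds")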

FFmpeg 5.1.2 - Ubuntu 22.10:
Encoder: libx265 - Scenario: Video On Demand: 65.12 FPS (More Is Better; SE +/- 0.08, N = 3)
Encoder: libx265 - Scenario: Video On Demand: 116.32 Seconds (Fewer Is Better; SE +/- 0.14, N = 3)
Encoder: libx264 - Scenario: Video On Demand: 76.32 FPS (More Is Better; SE +/- 0.07, N = 3)
Encoder: libx264 - Scenario: Video On Demand: 99.25 Seconds (Fewer Is Better; SE +/- 0.09, N = 3)
Encoder: libx265 - Scenario: Platform: 65.14 FPS (More Is Better; SE +/- 0.06, N = 3)
Encoder: libx265 - Scenario: Platform: 116.30 Seconds (Fewer Is Better; SE +/- 0.10, N = 3)
Encoder: libx264 - Scenario: Platform: 76.29 FPS (More Is Better; SE +/- 0.03, N = 3)
Encoder: libx264 - Scenario: Platform: 99.29 Seconds (Fewer Is Better; SE +/- 0.04, N = 3)
Encoder: libx265 - Scenario: Upload: 31.96 FPS (More Is Better; SE +/- 0.04, N = 3)
Encoder: libx265 - Scenario: Upload: 79.00 Seconds (Fewer Is Better; SE +/- 0.09, N = 3)
Encoder: libx264 - Scenario: Upload: 19.48 FPS (More Is Better; SE +/- 0.02, N = 3)
Encoder: libx264 - Scenario: Upload: 129.62 Seconds (Fewer Is Better; SE +/- 0.15, N = 3)
Encoder: libx265 - Scenario: Live: 182.08 FPS (More Is Better; SE +/- 0.41, N = 3)
Encoder: libx265 - Scenario: Live: 27.74 Seconds (Fewer Is Better; SE +/- 0.06, N = 3)
Encoder: libx264 - Scenario: Live: 353.24 FPS (More Is Better; SE +/- 0.46, N = 3)
Encoder: libx264 - Scenario: Live: 14.30 Seconds (Fewer Is Better; SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 2.3 - Time To Compile - Ubuntu 22.10: 30.25 Seconds (Fewer Is Better; SE +/- 0.20, N = 3)
1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Ubuntu 22.10 (ms, Fewer Is Better):
Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer: 46495 (SE +/- 33.28, N = 3)
Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer: 39037 (SE +/- 48.68, N = 3)
Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer: 1473 (SE +/- 4.33, N = 3)
Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer: 1242 (SE +/- 2.73, N = 3)
Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer: 187579 (SE +/- 169.22, N = 3)
Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer: 157185 (SE +/- 533.75, N = 3)
Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer: 5758 (SE +/- 5.24, N = 3)
Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer: 4831 (SE +/- 6.89, N = 3)
1. (CXX) g++ options: -O3 -lm -ldl

oneDNN

oneDNN 2.7 - Ubuntu 22.10 (ms, Fewer Is Better):
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU: 1112.82 (SE +/- 1.98, N = 3; MIN: 1021.3)
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU: 2150.83 (SE +/- 21.59, N = 3; MIN: 1989.04)
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU: 3.44347 (SE +/- 0.00262, N = 3; MIN: 3.41)
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU: 7.63132 (SE +/- 0.10591, N = 15; MIN: 2.84)
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU: 5.77228 (SE +/- 0.00365, N = 3; MIN: 5.56)
Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU: 4.07923 (SE +/- 0.02741, N = 3; MIN: 4.01)
Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU: 1.90430 (SE +/- 0.01885, N = 15; MIN: 1.57)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6 - Ubuntu 22.10 (Seconds, Fewer Is Better):
Build Configuration: Released Build, PGO + LTO Optimized: 171.50
Build Configuration: Default: 12.03

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8 - Time To Compile - Ubuntu 22.10: 256.70 Seconds (Fewer Is Better; SE +/- 0.04, N = 3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18 - Ubuntu 22.10 (Seconds, Fewer Is Better):
Build: allmodconfig: 454.88 (SE +/- 0.43, N = 3)
Build: defconfig: 41.40 (SE +/- 0.39, N = 3)

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Ubuntu 22.10 (Render Ratio, More Is Better):
Sample Rate: 96000 - Buffer Size: 1024: 4.854695 (SE +/- 0.001939, N = 3)
Sample Rate: 44100 - Buffer Size: 1024: 6.422835 (SE +/- 0.011125, N = 3)
Sample Rate: 96000 - Buffer Size: 512: 4.818425 (SE +/- 0.002423, N = 3)
Sample Rate: 44100 - Buffer Size: 512: 6.145353 (SE +/- 0.006537, N = 3)
1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Ubuntu 22.10 (MIPS, More Is Better):
Test: Decompression Rating: 139981 (SE +/- 1890.40, N = 3)
Test: Compression Rating: 182153 (SE +/- 1518.09, N = 3)
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.0 - Benchmark: vklBenchmark ISPC - Ubuntu 22.10: 161 Items / Sec (More Is Better; SE +/- 0.67, N = 3; MIN: 11 / MAX: 1931)

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RT.ldr_alb_nrm.3840x2160 - Ubuntu 22.10: 0.57 Images / Sec (More Is Better; SE +/- 0.00, N = 3)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Ubuntu 22.10 (Frames Per Second, More Is Better):
Tuning: Visual Quality Optimized - Input: Bosphorus 4K: 123.49 (SE +/- 0.42, N = 3)
Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K: 155.19 (SE +/- 0.68, N = 3)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Ubuntu 22.10 (Frames Per Second, More Is Better):
Tuning: 10 - Input: Bosphorus 4K: 202.12 (SE +/- 1.60, N = 15)
Tuning: 7 - Input: Bosphorus 4K: 105.39 (SE +/- 1.00, N = 3)
Tuning: 1 - Input: Bosphorus 4K: 5.79 (SE +/- 0.02, N = 3)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-AV1

SVT-AV1 1.2 - Ubuntu 22.10 (Frames Per Second, More Is Better):
Encoder Mode: Preset 12 - Input: Bosphorus 4K: 216.52 (SE +/- 2.00, N = 3)
Encoder Mode: Preset 10 - Input: Bosphorus 4K: 147.87 (SE +/- 1.70, N = 3)
Encoder Mode: Preset 8 - Input: Bosphorus 4K: 77.26 (SE +/- 0.77, N = 3)
Encoder Mode: Preset 4 - Input: Bosphorus 4K: 3.004 (SE +/- 0.026, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Ubuntu 22.10 (Frames Per Second, More Is Better):
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K: 86.52 (SE +/- 0.10, N = 3)
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K: 85.34 (SE +/- 0.15, N = 3)
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K: 66.05 (SE +/- 0.50, N = 3)
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K: 44.87 (SE +/- 0.13, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test - Ubuntu 22.10: 18147 Requests Per Second (More Is Better; SE +/- 43.21, N = 3)

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark - Ubuntu 22.10: 70.32 Mpix/sec (More Is Better; SE +/- 0.39, N = 3)
1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Ubuntu 22.10 (More Is Better):
Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM: 107.6 UE Mb/s (SE +/- 0.24, N = 3)
Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM: 224.3 eNb Mb/s (SE +/- 0.24, N = 3)
Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM: 242.6 UE Mb/s (SE +/- 2.07, N = 3)
Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM: 683.7 eNb Mb/s (SE +/- 8.14, N = 3)
Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM: 199.2 UE Mb/s (SE +/- 0.71, N = 3)
Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM: 677.5 eNb Mb/s (SE +/- 5.48, N = 3)
Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM: 233.3 UE Mb/s (SE +/- 0.12, N = 3)
Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM: 633.1 eNb Mb/s (SE +/- 1.34, N = 3)
Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM: 182.0 UE Mb/s (SE +/- 0.40, N = 3)
Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM: 624.9 eNb Mb/s (SE +/- 2.64, N = 3)
Test: OFDM_Test: 195600000 Samples / Second (SE +/- 360555.13, N = 3)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Ubuntu 22.10 (MP/s, More Is Better):
Encode Settings: Quality 100, Lossless, Highest Compression: 0.91 (SE +/- 0.00, N = 3)
Encode Settings: Quality 100, Highest Compression: 5.00 (SE +/- 0.01, N = 3)
Encode Settings: Quality 100, Lossless: 2.30 (SE +/- 0.00, N = 3)
Encode Settings: Quality 100: 16.16 (SE +/- 0.19, N = 3)
Encode Settings: Default: 24.98 (SE +/- 0.28, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Ubuntu 22.10 (MP/s, More Is Better):
Input: JPEG - Quality: 100: 1.05 (SE +/- 0.00, N = 3)
Input: PNG - Quality: 100: 1.06 (SE +/- 0.00, N = 3)
Input: JPEG - Quality: 90: 13.09 (SE +/- 0.01, N = 3)
Input: JPEG - Quality: 80: 13.25 (SE +/- 0.02, N = 3)
Input: PNG - Quality: 90: 13.43 (SE +/- 0.01, N = 3)
Input: PNG - Quality: 80: 13.58 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise externally of the test profile. Learn more via the OpenBenchmarking.org test page.
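
The MB/s figures below come from the zstd command-line tool. A rough Python sketch of the same level-19 compress/decompress measurement, using the third-party python-zstandard bindings and an arbitrary repetitive input buffer (long mode is omitted here), might look like:

    import os
    import time
    import zstandard as zstd   # third-party python-zstandard bindings

    data = os.urandom(64) * (1 << 20)          # ~64 MB of repetitive sample data
    cctx = zstd.ZstdCompressor(level=19)
    dctx = zstd.ZstdDecompressor()

    start = time.time()
    compressed = cctx.compress(data)
    print(f"Compression:   {len(data) / (time.time() - start) / 1e6:.1f} MB/s")

    start = time.time()
    dctx.decompress(compressed)
    print(f"Decompression: {len(data) / (time.time() - start) / 1e6:.1f} MB/s")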

Zstd Compression - Ubuntu 22.10 (MB/s, More Is Better):
Compression Level: 19, Long Mode - Decompression Speed: 4887.7 (SE +/- 9.98, N = 3)
Compression Level: 19, Long Mode - Compression Speed: 50.9 (SE +/- 0.40, N = 3)
Compression Level: 19 - Decompression Speed: 4758.3 (SE +/- 0.18, N = 3)
Compression Level: 19 - Compression Speed: 80.5 (SE +/- 1.02, N = 3)
1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Ubuntu 22.10 (ms, Fewer Is Better):
Test: Genetic Algorithm Using Jenetics + Futures: 1063.3 (SE +/- 8.08, N = 15; MIN: 960.83 / MAX: 1135.87)
Test: Akka Unbalanced Cobwebbed Tree: 7183.4 (SE +/- 20.79, N = 3; MIN: 5471.82 / MAX: 7209.81)
Test: In-Memory Database Shootout: 1957.9 (SE +/- 26.27, N = 3; MIN: 1750.66 / MAX: 2219.24)
Test: Finagle HTTP Requests: 1992.6 (SE +/- 22.11, N = 3; MIN: 1796.31 / MAX: 2233.14)
Test: Apache Spark PageRank: 1902.7 (SE +/- 15.45, N = 9; MIN: 1741.39 / MAX: 1997.09)
Test: Savina Reactors.IO: 4193.8 (SE +/- 48.71, N = 3; MIN: 4126.1 / MAX: 6173.56)
Test: Apache Spark Bayes: 693.8 (SE +/- 1.16, N = 3; MIN: 500.78 / MAX: 696.1)
Test: Apache Spark ALS: 2026.4 (SE +/- 6.70, N = 3; MIN: 1949.41 / MAX: 2109.84)
Test: ALS Movie Lens: 7650.3 (SE +/- 72.83, N = 3; MIN: 7526.04 / MAX: 8473.18)
Test: Random Forest: 384.5 (SE +/- 0.53, N = 3; MIN: 357.94 / MAX: 465.15)
Test: Scala Dotty: 454.3 (SE +/- 6.59, N = 15; MIN: 344.62 / MAX: 815.12)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Ubuntu 22.10 (msec, Fewer Is Better):
Java Test: Tradebeans: 1689 (SE +/- 4.66, N = 4)
Java Test: Tradesoap: 1638 (SE +/- 14.59, N = 7)
Java Test: Jython: 1710 (SE +/- 8.38, N = 4)

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build - Gradle Build: Reactor - Ubuntu 22.10: 142.80 Seconds (Fewer Is Better; SE +/- 1.44, N = 6)

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance for the Chia Verifiable Delay Function (VDF, Proof of Time) using the Chia VDF benchmark. Learn more via the OpenBenchmarking.org test page.

Chia Blockchain VDF 1.0.7 - Ubuntu 22.10 (IPS, More Is Better):
Test: Square Assembly Optimized: 269067 (SE +/- 218.58, N = 3)
Test: Square Plain C++: 252933 (SE +/- 133.33, N = 3)
1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Ubuntu 22.10 (H/s, More Is Better):
Variant: Wownero - Hash Count: 1M: 16463.2 (SE +/- 35.72, N = 3)
Variant: Monero - Hash Count: 1M: 9652.5 (SE +/- 65.64, N = 3)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Ubuntu 22.10 (Seconds, Fewer Is Better):
Model: Rubber O-Ring Seal Installation: 108.62 (SE +/- 0.31, N = 3)
Model: Bird Strike on Windshield: 183.75 (SE +/- 2.45, N = 3)
Model: Cell Phone Drop Test: 68.10 (SE +/- 0.98, N = 3)
Model: Bumper Beam: 114.66 (SE +/- 0.19, N = 3)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Ubuntu 22.10 (Seconds, Fewer Is Better):
Input: drivaerFastback, Small Mesh Size - Execution Time: 151.34
Input: drivaerFastback, Small Mesh Size - Mesh Time: 27.34
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2 - Input: C240 Buckyball - Ubuntu 22.10: 4268.4 Seconds (Fewer Is Better)
1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz -lcomex -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Ubuntu 22.10 (Seconds, Fewer Is Better):
Benchmark: mp_prop_design: 25.77
Benchmark: test_fpu2: 13.99
Benchmark: gas_dyn2: 25.83
Benchmark: fatigue2: 21.88
Benchmark: channel2: 29.3
Benchmark: capacita: 5.13
Benchmark: protein: 6.93
Benchmark: induct2: 11.07
Benchmark: rnflow: 9.54
Benchmark: aermod: 2.77
Benchmark: tfft2: 12.1
Benchmark: linpk: 1.34
Benchmark: doduc: 3.38
Benchmark: mdbx: 3.02
Benchmark: air: 0.93
Benchmark: ac: 3.77

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
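
NAMD reports days/ns, i.e. how many days of wall-clock time one nanosecond of simulated time requires, so lower is better. Taking the reciprocal gives the more familiar ns/day figure; for the result below that works out to roughly 1.64 ns/day:

    days_per_ns = 0.60982            # result reported below for this run
    ns_per_day = 1.0 / days_per_ns   # ~1.64 ns of simulation per day of wall time
    print(f"{ns_per_day:.2f} ns/day")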

NAMD 2.14 - ATPase Simulation - 327,506 Atoms - Ubuntu 22.10: 0.60982 days/ns (Fewer Is Better; SE +/- 0.00121, N = 3)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Ubuntu 22.10 (Seconds, Fewer Is Better):
Test: OpenMP Streamcluster: 7.297 (SE +/- 0.015, N = 3)
Test: OpenMP CFD Solver: 6.168 (SE +/- 0.053, N = 8)
Test: OpenMP Leukocyte: 49.98 (SE +/- 0.10, N = 3)
Test: OpenMP HotSpot3D: 49.66 (SE +/- 0.32, N = 15)
Test: OpenMP LavaMD: 84.65 (SE +/- 0.14, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. The test profile offers a selection of the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Ubuntu 22.10 (Total Mop/s, More Is Better):
Test / Class: SP.C: 15473.90 (SE +/- 32.81, N = 3)
Test / Class: SP.B: 22786.60 (SE +/- 227.34, N = 3)
Test / Class: MG.C: 24905.50 (SE +/- 295.81, N = 3)
Test / Class: LU.C: 53311.89 (SE +/- 218.90, N = 3)
Test / Class: IS.D: 1263.16 (SE +/- 15.74, N = 4)
Test / Class: EP.D: 3049.53 (SE +/- 10.67, N = 3)
Test / Class: EP.C: 3262.42 (SE +/- 0.66, N = 3)
Test / Class: CG.C: 8583.03 (SE +/- 24.89, N = 3)
Test / Class: BT.C: 49771.29 (SE +/- 33.54, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.4

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 - Ubuntu 22.10: 10.15 GFLOP/s (More Is Better; SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 - Ubuntu 22.10: 5198.7 MFLOPS (More Is Better; SE +/- 69.40, N = 3)
1. (CXX) g++ options: -O3 -march=native -rdynamic

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development began in March of 2010 on the Xonotic game. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.5 - Ubuntu 22.10 (Frames Per Second, More Is Better):
Resolution: 3840 x 2160 - Effects Quality: Ultimate: 527.86 (SE +/- 1.29, N = 3; MIN: 98 / MAX: 1077)
Resolution: 1920 x 1080 - Effects Quality: Ultimate: 540.60 (SE +/- 0.59, N = 3; MIN: 101 / MAX: 1094)
Resolution: 3840 x 2160 - Effects Quality: Ultra: 692.06 (SE +/- 2.13, N = 3; MIN: 411 / MAX: 1142)
Resolution: 1920 x 1080 - Effects Quality: Ultra: 696.93 (SE +/- 1.76, N = 3; MIN: 375 / MAX: 1188)

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta - Ubuntu 22.10 (Frames Per Second, More Is Better):
Resolution: 3840 x 2160: 951.3 (SE +/- 5.47, N = 3)
Resolution: 1920 x 1080: 965.8 (SE +/- 1.51, N = 3)

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Ubuntu 22.10 (Frames Per Second, More Is Better):
Resolution: 3840 x 2160 - Effects Quality: Ultra: 664.7 (SE +/- 0.21, N = 3)
Resolution: 1920 x 1080 - Effects Quality: Ultra: 671.3 (SE +/- 1.58, N = 3)
Resolution: 3840 x 2160 - Effects Quality: High: 665.9 (SE +/- 6.27, N = 3)
Resolution: 1920 x 1080 - Effects Quality: High: 683.9 (SE +/- 2.63, N = 3)

Tesseract

Tesseract is a fork of Cube 2 Sauerbraten with numerous graphics and game-play improvements. Tesseract has been in development since 2012 while its first release happened in May of 2014. Learn more via the OpenBenchmarking.org test page.

Tesseract 2014-05-12 - Ubuntu 22.10 (Frames Per Second, More Is Better):
Resolution: 3840 x 2160: 896.02 (SE +/- 3.57, N = 3)
Resolution: 1920 x 1080: 999.44 (SE +/- 0.56, N = 3)

DDraceNetwork

This is a test of DDraceNetwork, an open-source cooperative platformer. Vulkan or OpenGL 3.3 is used for rendering, with fallbacks for older OpenGL versions. Learn more via the OpenBenchmarking.org test page.

DDraceNetwork 16.3.2 - Ubuntu 22.10 (Frames Per Second, More Is Better):
Resolution: 3840 x 2160 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: Multeasymap: 2994.51 (SE +/- 3.45, N = 3)
Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: Multeasymap: 5913.12 (SE +/- 5.38, N = 3)
Resolution: 3840 x 2160 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: RaiNyMore2: 2159.59 (SE +/- 2.70, N = 3)
Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: RaiNyMore2: 4696.93 (SE +/- 20.98, N = 3)
1. (CXX) g++ options: -O3 -lrt -ldl -lvulkan -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0 -fuse-ld=gold

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

Connections: 20

Ubuntu 22.10: The test quit with a non-zero exit status.

Connections: 1

Ubuntu 22.10: The test quit with a non-zero exit status.

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Backend: TensorFlow - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: TensorFlow - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: PyTorch - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: PyTorch - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: Aesara - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: Aesara - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

PyHPC Benchmarks 3.0 - Ubuntu 22.10 (Seconds, fewer is better):
  Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing: 0.004 (SE +/- 0.000, N = 15)
  Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State: 0.001 (SE +/- 0.000, N = 15)

Device: CPU - Backend: Numba - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: Numba - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: JAX - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: JAX - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
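
The stressors reported below can also be launched by hand; a minimal sketch, assuming stress-ng is installed from the distribution packages (the run length is an arbitrary choice here, not the value used by the test profile):

  # one socket-activity stressor instance per CPU, with summary metrics
  stress-ng --sock 0 --timeout 60s --metrics-brief
  # the same pattern for the atomic-operations stressor
  stress-ng --atomic 0 --timeout 60s --metrics-brief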

Stress-NG 0.14.06 - Ubuntu 22.10 (Bogo Ops/s, more is better):
  Test: Socket Activity: 24287.07 (SE +/- 568.16, N = 12)
  Test: Atomic: 344361.75 (SE +/- 7871.09, N = 15)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
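
A hand-run equivalent of the workload measured below might look like the following sketch (the server address and run length are assumptions; the test profile manages its own Redis instance):

  # 50 clients issuing mixed SET/GET traffic at a 1:1 ratio against a local Redis server
  memtier_benchmark --protocol=redis --server=127.0.0.1 --port=6379 --clients=50 --ratio=1:1 --test-time=60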

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1 - Ubuntu 22.10 (Ops/sec, more is better): 3124943.72 (SE +/- 65972.61, N = 15)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
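
The reference benchmark can also be invoked directly; a minimal sketch, assuming a checkout of the tensorflow/benchmarks repository and a working TensorFlow installation:

  # CPU-only ResNet-50 run with a batch size of 512 using synthetic input data
  python tf_cnn_benchmarks.py --device=cpu --model=resnet50 --batch_size=512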

Device: CPU - Batch Size: 512 - Model: ResNet-50

Ubuntu 22.10: The test quit with a non-zero exit status.

Graph500

This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data intensive loads and commonly tested on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.
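
An exit on signal 9 (Killed) for an MPI rank commonly points to memory exhaustion at larger problem scales. For orientation, a manual run of the reference BFS kernel would resemble the following sketch (the rank count is an assumption, and the binary name is the one produced by the reference implementation's build):

  # SCALE=26, i.e. a graph of 2^26 vertices, across 4 MPI ranks
  mpirun -np 4 ./graph500_reference_bfs 26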

Scale: 26

Ubuntu 22.10: The test quit with a non-zero exit status. E: mpirun noticed that process rank 2 with PID 0 on node phoronix-System-Product-Name exited on signal 9 (Killed).

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
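
For orientation, the benchmark scripts are driven through spark-submit in local mode; a minimal sketch with a hypothetical script path (the actual script names and arguments come from the pyspark-benchmark repository linked above):

  # single-system run using all local CPU cores
  spark-submit --master 'local[*]' path/to/pyspark-benchmark-script.py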

Apache Spark 3.3 - Ubuntu 22.10 (Seconds, fewer is better):
  Row Count: 1000000 - Partitions: 500 - Inner Join Test Time: 0.95 (SE +/- 0.02, N = 15)
  Row Count: 1000000 - Partitions: 100 - Inner Join Test Time: 0.93 (SE +/- 0.03, N = 3)

Node.js Octane Benchmark

A Node.js version of the JavaScript Octane Benchmark. Learn more via the OpenBenchmarking.org test page.

Ubuntu 22.10: The test quit with a non-zero exit status. E: ReferenceError: GLOBAL is not defined

oneDNN

oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU - Ubuntu 22.10 (ms, fewer is better): 1.492836 (SE +/- 0.090916, N = 15; MIN: 0.85)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
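
A direct invocation of the encoder would be along these lines; this is only a sketch, with option names as used by the SVT reference applications and the tune-mode mapping treated as an assumption to be checked against the encoder's help output:

  # encode a 4K YUV source; -tune is assumed to select the VQ / PSNR-SSIM / VMAF optimized modes
  SvtVp9EncApp -i Bosphorus_3840x2160.yuv -w 3840 -h 2160 -tune 2 -b output.ivf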

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K - Ubuntu 22.10 (Frames Per Second, more is better): 142.19 (SE +/- 3.19, N = 12)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

DaCapo Benchmark

This test runs the DaCapo Benchmarks, a suite written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.
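
The suite ships as a single jar and each workload is selected by name on the command line; a minimal sketch for the H2 workload reported below:

  # run the H2 in-memory database workload from the DaCapo 9.12-MR1 release jar
  java -jar dacapo-9.12-MR1-bach.jar h2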

Java Test: Eclipse

Ubuntu 22.10: The test quit with a non-zero exit status.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 - Ubuntu 22.10 (msec, fewer is better): 2091 (SE +/- 34.76, N = 20)

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
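
With the MPI build of NPB, each test/class pair compiles to its own binary and the rank count is chosen at launch time; a minimal sketch for the FT.C result below (the rank count here is an assumption):

  # Fourier Transform kernel, class C, launched across 16 MPI ranks
  mpirun -np 16 ./bin/ft.C.x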

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C - Ubuntu 22.10 (Total Mop/s, more is better): 22382.81 (SE +/- 373.18, N = 15)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

364 Results Shown

DDraceNetwork:
  1920 x 1080 - Fullscreen - Vulkan - Default - RaiNyMore2 - Total Frame Time
  3840 x 2160 - Fullscreen - Vulkan - Default - Multeasymap - Total Frame Time
  1920 x 1080 - Fullscreen - Vulkan - Default - Multeasymap - Total Frame Time
  3840 x 2160 - Fullscreen - Vulkan - Default - RaiNyMore2 - Total Frame Time
nginx:
  1000
  500
  200
  100
CloudSuite In-Memory Analytics
CloudSuite Graph Analytics
Chaos Group V-RAY
EnCodec:
  1.5 kbps
  24 kbps
  6 kbps
  3 kbps
PHPBench
Appleseed:
  Material Tester
  Disney Material
  Emily
ONNX Runtime:
  super-resolution-10 - CPU - Standard
  ArcFace ResNet-100 - CPU - Standard
  fcn-resnet101-11 - CPU - Standard
  bertsquad-12 - CPU - Standard
  yolov4 - CPU - Standard
  GPT-2 - CPU - Standard
Natron
PyPerformance:
  pickle_pure_python
  django_template
  python_startup
  regex_compile
  crypto_pyaes
  json_loads
  raytrace
  pathlib
  nbody
  float
  chaos
  2to3
  go
PyBench
IndigoBench:
  CPU - Supercar
  CPU - Bedroom
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16 - CPU:
    ms
    FPS
  Person Detection FP32 - CPU:
    ms
    FPS
  Person Detection FP16 - CPU:
    ms
    FPS
  Face Detection FP16 - CPU:
    ms
    FPS
ctx_clock
Blender:
  Pabellon Barcelona - CPU-Only
  Barbershop - CPU-Only
  Fishy Cat - CPU-Only
  Classroom - CPU-Only
  BMW27 - CPU-Only
spaCy:
  en_core_web_trf
  en_core_web_lg
Stress-NG:
  System V Message Passing
  Glibc Qsort Data Sorting
  Glibc C String Functions
  Context Switching
  Memory Copying
  x86_64 RdRand
  Vector Math
  Matrix Math
  Semaphores
  CPU Stress
  CPU Cache
  SENDFILE
  IO_uring
  Forking
  Malloc
  Crypto
  Mutex
  MEMFD
  Futex
  NUMA
  MMAP
memtier_benchmark:
  Redis - 50 - 1:10
  Redis - 50 - 10:1
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
RawTherapee
SQLite Speedtest
TensorFlow:
  CPU - 512 - GoogLeNet
  CPU - 256 - ResNet-50
  CPU - 256 - GoogLeNet
  CPU - 64 - ResNet-50
  CPU - 64 - GoogLeNet
  CPU - 32 - ResNet-50
  CPU - 32 - GoogLeNet
  CPU - 16 - ResNet-50
  CPU - 16 - GoogLeNet
  CPU - 512 - AlexNet
  CPU - 256 - AlexNet
  CPU - 64 - AlexNet
  CPU - 32 - AlexNet
  CPU - 16 - AlexNet
HammerDB - MariaDB:
  64 - 250:
    Transactions Per Minute
    New Orders Per Minute
  64 - 100:
    Transactions Per Minute
    New Orders Per Minute
  32 - 250:
    Transactions Per Minute
    New Orders Per Minute
  32 - 100:
    Transactions Per Minute
    New Orders Per Minute
  16 - 250:
    Transactions Per Minute
    New Orders Per Minute
  16 - 100:
    Transactions Per Minute
    New Orders Per Minute
  8 - 250:
    Transactions Per Minute
    New Orders Per Minute
  8 - 100:
    Transactions Per Minute
    New Orders Per Minute
GROMACS
FinanceBench:
  Bonds OpenMP
  Repo OpenMP
Apache Spark:
  1000000 - 500 - Broadcast Inner Join Test Time
  1000000 - 500 - Repartition Test Time
  1000000 - 500 - Group By Test Time
  1000000 - 500 - Calculate Pi Benchmark Using Dataframe
  1000000 - 500 - Calculate Pi Benchmark
  1000000 - 500 - SHA-512 Benchmark Time
  1000000 - 100 - Broadcast Inner Join Test Time
  1000000 - 100 - Repartition Test Time
  1000000 - 100 - Group By Test Time
  1000000 - 100 - Calculate Pi Benchmark Using Dataframe
  1000000 - 100 - Calculate Pi Benchmark
  1000000 - 100 - SHA-512 Benchmark Time
ClickHouse:
  100M Rows Web Analytics Dataset, Third Run
  100M Rows Web Analytics Dataset, Second Run
  100M Rows Web Analytics Dataset, First Run / Cold Cache
Liquid-DSP
Node.js V8 Web Tooling Benchmark
OpenSSL:
  RSA4096:
    verify/s
    sign/s
  SHA256:
    byte/s
Cpuminer-Opt:
  Triple SHA-256, Onecoin
  Quad SHA-256, Pyrite
  LBC, LBRY Credits
  Myriad-Groestl
  Skeincoin
  Garlicoin
  Blake-2 S
  Ringcoin
  Deepcoin
  scrypt
  x25x
  Magi
FFmpeg:
  libx265 - Video On Demand:
    FPS
    Seconds
  libx264 - Video On Demand:
    FPS
    Seconds
  libx265 - Platform:
    FPS
    Seconds
  libx264 - Platform:
    FPS
    Seconds
  libx265 - Upload:
    FPS
    Seconds
  libx264 - Upload:
    FPS
    Seconds
  libx265 - Live:
    FPS
    Seconds
  libx264 - Live:
    FPS
    Seconds
Timed Wasmer Compilation
OSPRay Studio:
  3 - 1080p - 32 - Path Tracer
  1 - 1080p - 32 - Path Tracer
  3 - 1080p - 1 - Path Tracer
  1 - 1080p - 1 - Path Tracer
  3 - 4K - 32 - Path Tracer
  1 - 4K - 32 - Path Tracer
  3 - 4K - 1 - Path Tracer
  1 - 4K - 1 - Path Tracer
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  IP Shapes 3D - f32 - CPU
  IP Shapes 1D - f32 - CPU
Timed CPython Compilation:
  Released Build, PGO + LTO Optimized
  Default
Timed Node.js Compilation
Timed Linux Kernel Compilation:
  allmodconfig
  defconfig
Stargate Digital Audio Workstation:
  96000 - 1024
  44100 - 1024
  96000 - 512
  44100 - 512
7-Zip Compression:
  Decompression Rating
  Compression Rating
OpenVKL
Intel Open Image Denoise
SVT-VP9:
  Visual Quality Optimized - Bosphorus 4K
  PSNR/SSIM Optimized - Bosphorus 4K
SVT-HEVC:
  10 - Bosphorus 4K
  7 - Bosphorus 4K
  1 - Bosphorus 4K
SVT-AV1:
  Preset 12 - Bosphorus 4K
  Preset 10 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 4 - Bosphorus 4K
AOM AV1:
  Speed 10 Realtime - Bosphorus 4K
  Speed 9 Realtime - Bosphorus 4K
  Speed 8 Realtime - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 4K
Node.js Express HTTP Load Test
LibRaw
srsRAN:
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
  4G PHY_DL_Test 100 PRB SISO 256-QAM:
    UE Mb/s
    eNb Mb/s
  4G PHY_DL_Test 100 PRB MIMO 256-QAM:
    UE Mb/s
    eNb Mb/s
  4G PHY_DL_Test 100 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
  4G PHY_DL_Test 100 PRB MIMO 64-QAM:
    UE Mb/s
    eNb Mb/s
  OFDM_Test:
    Samples / Second
WebP Image Encode:
  Quality 100, Lossless, Highest Compression
  Quality 100, Highest Compression
  Quality 100, Lossless
  Quality 100
  Default
JPEG XL libjxl:
  JPEG - 100
  PNG - 100
  JPEG - 90
  JPEG - 80
  PNG - 90
  PNG - 80
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
  19 - Decompression Speed
  19 - Compression Speed
Renaissance:
  Genetic Algorithm Using Jenetics + Futures
  Akka Unbalanced Cobwebbed Tree
  In-Memory Database Shootout
  Finagle HTTP Requests
  Apache Spark PageRank
  Savina Reactors.IO
  Apache Spark Bayes
  Apache Spark ALS
  ALS Movie Lens
  Rand Forest
  Scala Dotty
DaCapo Benchmark:
  Tradebeans
  Tradesoap
  Jython
Java Gradle Build
Chia Blockchain VDF:
  Square Assembly Optimized
  Square Plain C++
Xmrig:
  Wownero - 1M
  Monero - 1M
OpenRadioss:
  Rubber O-Ring Seal Installation
  Bird Strike on Windshield
  Cell Phone Drop Test
  Bumper Beam
OpenFOAM:
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Small Mesh Size - Mesh Time
NWChem
Polyhedron Fortran Benchmarks:
  mp_prop_design
  test_fpu2
  gas_dyn2
  fatigue2
  channel2
  capacita
  protein
  induct2
  rnflow
  aermod
  tfft2
  linpk
  doduc
  mdbx
  air
  ac
NAMD
Rodinia:
  OpenMP Streamcluster
  OpenMP CFD Solver
  OpenMP Leukocyte
  OpenMP HotSpot3D
  OpenMP LavaMD
NAS Parallel Benchmarks:
  SP.C
  SP.B
  MG.C
  LU.C
  IS.D
  EP.D
  EP.C
  CG.C
  BT.C
High Performance Conjugate Gradient
QuantLib
Xonotic:
  3840 x 2160 - Ultimate
  1920 x 1080 - Ultimate
  3840 x 2160 - Ultra
  1920 x 1080 - Ultra
Warsow:
  3840 x 2160
  1920 x 1080
Unvanquished:
  3840 x 2160 - Ultra
  1920 x 1080 - Ultra
  3840 x 2160 - High
  1920 x 1080 - High
Tesseract:
  3840 x 2160
  1920 x 1080
DDraceNetwork:
  3840 x 2160 - Fullscreen - Vulkan - Default - Multeasymap
  1920 x 1080 - Fullscreen - Vulkan - Default - Multeasymap
  3840 x 2160 - Fullscreen - Vulkan - Default - RaiNyMore2
  1920 x 1080 - Fullscreen - Vulkan - Default - RaiNyMore2
PyHPC Benchmarks:
  CPU - Numpy - 16384 - Isoneutral Mixing
  CPU - Numpy - 16384 - Equation of State
Stress-NG:
  Socket Activity
  Atomic
memtier_benchmark
Apache Spark:
  1000000 - 500 - Inner Join Test Time
  1000000 - 100 - Inner Join Test Time
oneDNN
SVT-VP9
DaCapo Benchmark
NAS Parallel Benchmarks