Core i9 13900K Linux Distros

Intel Core i9-13900K testing with an ASUS PRIME Z790-P WIFI (0602 BIOS) motherboard and an AMD Radeon RX 6800 XT 16GB graphics card on Ubuntu 22.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2211049-NE-COREI913914
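As a minimal sketch of what that looks like on Ubuntu (assuming the distribution-packaged phoronix-test-suite; the result identifier is the one quoted above):

  # Install the Phoronix Test Suite from the Ubuntu archive
  # (it can also be run directly from a git checkout of phoronix-test-suite).
  sudo apt install phoronix-test-suite

  # Run the same tests and compare interactively against this result file,
  # using the public OpenBenchmarking.org identifier for this page.
  phoronix-test-suite benchmark 2211049-NE-COREI913914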
Run Management

Result Identifier: Ubuntu 22.10
Date: November 02 2022
Run Test Duration: 1 Day, 1 Hour, 48 Minutes


System Details

Processor: Intel Core i9-13900K @ 4.00GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (0602 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
Network: Realtek RTL8125 2.5GbE + Intel Device 7a70
OS: Ubuntu 22.10
Kernel: 5.19.0-23-generic (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.2.1 (LLVM 15.0.2 DRM 3.47)
Vulkan: 1.3.224
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0x10e
- Thermald 2.5.1
- BAR1 / Visible vRAM Size: 16368 MB; vBIOS Version: 113-D4120500-101
- OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu1)
- Python 3.10.7
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Benchmarks covered in this result file: DDraceNetwork, Tesseract, Unvanquished, Warsow, Xonotic, QuantLib, HPCG, NAS Parallel Benchmarks, Rodinia, NAMD, Polyhedron Fortran Benchmarks, NWChem, OpenFOAM, OpenRadioss, Xmrig, Chia Blockchain VDF, Java Gradle Build, DaCapo, Renaissance, Zstd compression, JPEG XL, WebP, srsRAN, LibRaw, Node.js Express HTTP load test, AOM AV1, SVT-AV1, SVT-HEVC, SVT-VP9, Intel Open Image Denoise, OpenVKL, 7-Zip, Stargate DAW, timed Linux kernel / Node.js / CPython / Wasmer builds, oneDNN, OSPRay Studio, FFmpeg, cpuminer-opt, OpenSSL, Node.js web tooling, LiquidDSP, ClickHouse, Apache Spark, FinanceBench, GROMACS, HammerDB MariaDB, TensorFlow, SQLite Speedtest, RawTherapee, Neural Magic DeepSparse, memtier (Redis), stress-ng, spaCy, Blender, ctx-clock, OpenVINO, IndigoBench, PyBench, PyPerformance, Natron, ONNX Runtime, appleseed, PHPBench, Encodec, PyHPC, Chaos Group V-RAY, CloudSuite, and nginx. Detailed per-test results follow.

DDraceNetwork

This is a test of DDraceNetwork, an open-source cooperative platformer. Vulkan or OpenGL 3.3 is used for rendering, with fallbacks for older OpenGL versions. Learn more via the OpenBenchmarking.org test page.

DDraceNetwork 16.3.2 (Frames Per Second: more is better; Total Frame Time in milliseconds: fewer is better):
Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: RaiNyMore2: 4696.93 FPS (SE +/- 20.98, N = 3); Total Frame Time Min: 0.03 / Avg: 0.21 / Max: 1.39 ms
Resolution: 3840 x 2160 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: RaiNyMore2: 2159.59 FPS (SE +/- 2.70, N = 3); Total Frame Time Min: 0.03 / Avg: 0.75 / Max: 3.04 ms
Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: Multeasymap: 5913.12 FPS (SE +/- 5.38, N = 3); Total Frame Time Min: 0.08 / Avg: 0.17 / Max: 1.23 ms
Resolution: 3840 x 2160 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: Multeasymap: 2994.51 FPS (SE +/- 3.45, N = 3); Total Frame Time Min: 0.09 / Avg: 0.34 / Max: 1.47 ms
1. (CXX) g++ options: -O3 -lrt -ldl -lvulkan -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0 -fuse-ld=gold

Tesseract

Tesseract is a fork of Cube 2 Sauerbraten with numerous graphics and game-play improvements. Tesseract has been in development since 2012 while its first release happened in May of 2014. Learn more via the OpenBenchmarking.org test page.

Tesseract 2014-05-12 (Frames Per Second; more is better):
Resolution: 1920 x 1080: 999.44 (SE +/- 0.56, N = 3)
Resolution: 3840 x 2160: 896.02 (SE +/- 3.57, N = 3)

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 (Frames Per Second; more is better):
Resolution: 1920 x 1080 - Effects Quality: High: 683.9 (SE +/- 2.63, N = 3)
Resolution: 3840 x 2160 - Effects Quality: High: 665.9 (SE +/- 6.27, N = 3)
Resolution: 1920 x 1080 - Effects Quality: Ultra: 671.3 (SE +/- 1.58, N = 3)
Resolution: 3840 x 2160 - Effects Quality: Ultra: 664.7 (SE +/- 0.21, N = 3)

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta (Frames Per Second; more is better):
Resolution: 1920 x 1080: 965.8 (SE +/- 1.51, N = 3)
Resolution: 3840 x 2160: 951.3 (SE +/- 5.47, N = 3)

Xonotic

This is a benchmark of Xonotic, a fork of the DarkPlaces-based Nexuiz game. Development on Xonotic began in March of 2010. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.5 (Frames Per Second; more is better):
Resolution: 1920 x 1080 - Effects Quality: Ultra: 696.93 (SE +/- 1.76, N = 3) MIN: 375 / MAX: 1188
Resolution: 3840 x 2160 - Effects Quality: Ultra: 692.06 (SE +/- 2.13, N = 3) MIN: 411 / MAX: 1142
Resolution: 1920 x 1080 - Effects Quality: Ultimate: 540.60 (SE +/- 0.59, N = 3) MIN: 101 / MAX: 1094
Resolution: 3840 x 2160 - Effects Quality: Ultimate: 527.86 (SE +/- 1.29, N = 3) MIN: 98 / MAX: 1077

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21: 5198.7 MFLOPS (SE +/- 69.40, N = 3; more is better)
1. (CXX) g++ options: -O3 -march=native -rdynamic

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Labs focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1: 10.15 GFLOP/s (SE +/- 0.02, N = 3; more is better)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 (Total Mop/s; more is better):
Test / Class: BT.C: 49771.29 (SE +/- 33.54, N = 3)
Test / Class: CG.C: 8583.03 (SE +/- 24.89, N = 3)
Test / Class: EP.C: 3262.42 (SE +/- 0.66, N = 3)
Test / Class: EP.D: 3049.53 (SE +/- 10.67, N = 3)
Test / Class: FT.C: 22382.81 (SE +/- 373.18, N = 15)
Test / Class: IS.D: 1263.16 (SE +/- 15.74, N = 4)
Test / Class: LU.C: 53311.89 (SE +/- 218.90, N = 3)
Test / Class: MG.C: 24905.50 (SE +/- 295.81, N = 3)
Test / Class: SP.B: 22786.60 (SE +/- 227.34, N = 3)
Test / Class: SP.C: 15473.90 (SE +/- 32.81, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 (Seconds; fewer is better):
Test: OpenMP LavaMD: 84.65 (SE +/- 0.14, N = 3)
Test: OpenMP HotSpot3D: 49.66 (SE +/- 0.32, N = 15)
Test: OpenMP Leukocyte: 49.98 (SE +/- 0.10, N = 3)
Test: OpenMP CFD Solver: 6.168 (SE +/- 0.053, N = 8)
Test: OpenMP Streamcluster: 7.297 (SE +/- 0.015, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms: 0.60982 days/ns (SE +/- 0.00121, N = 3; fewer is better)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are used for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks (Seconds; fewer is better):
Benchmark: ac: 3.77
Benchmark: air: 0.93
Benchmark: mdbx: 3.02
Benchmark: doduc: 3.38
Benchmark: linpk: 1.34
Benchmark: tfft2: 12.1
Benchmark: aermod: 2.77
Benchmark: rnflow: 9.54
Benchmark: induct2: 11.07
Benchmark: protein: 6.93
Benchmark: capacita: 5.13
Benchmark: channel2: 29.3
Benchmark: fatigue2: 21.88
Benchmark: gas_dyn2: 25.83
Benchmark: test_fpu2: 13.99
Benchmark: mp_prop_design: 25.77

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2 - Input: C240 Buckyball: 4268.4 Seconds (fewer is better)
1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz -lcomex -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size (Seconds; fewer is better):
Mesh Time: 27.34
Execution Time: 151.34
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 (Seconds; fewer is better):
Model: Bumper Beam: 114.66 (SE +/- 0.19, N = 3)
Model: Cell Phone Drop Test: 68.10 (SE +/- 0.98, N = 3)
Model: Bird Strike on Windshield: 183.75 (SE +/- 2.45, N = 3)
Model: Rubber O-Ring Seal Installation: 108.62 (SE +/- 0.31, N = 3)

Xmrig

XMRig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the XMRig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
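For orientation only, recent XMRig releases ship a built-in CPU benchmark mode that corresponds roughly to the 1M hash count used below; the flag spelling here is an assumption about current XMRig builds and should be checked against the XMRig documentation for the version in use:

  # Hypothetical invocation of XMRig's built-in CPU benchmark over 1M hashes,
  # mirroring the "Hash Count: 1M" configuration of this test profile.
  xmrig --bench=1M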

Xmrig 6.18.1 (H/s; more is better):
Variant: Monero - Hash Count: 1M: 9652.5 (SE +/- 65.64, N = 3)
Variant: Wownero - Hash Count: 1M: 16463.2 (SE +/- 35.72, N = 3)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks the CPU performance of the Chia VDF, the Chia Verifiable Delay Function (Proof of Time), using the Chia VDF benchmark. Learn more via the OpenBenchmarking.org test page.

Chia Blockchain VDF 1.0.7 (IPS; more is better):
Test: Square Plain C++: 252933 (SE +/- 133.33, N = 3)
Test: Square Assembly Optimized: 269067 (SE +/- 218.58, N = 3)
1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build - Gradle Build: Reactor: 142.80 Seconds (SE +/- 1.44, N = 6; fewer is better)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 (msec; fewer is better):
Java Test: H2: 2091 (SE +/- 34.76, N = 20)
Java Test: Jython: 1710 (SE +/- 8.38, N = 4)

Java Test: Eclipse

Ubuntu 22.10: The test quit with a non-zero exit status.

Java Test: Tradesoap: 1638 msec (SE +/- 14.59, N = 7; fewer is better)
Java Test: Tradebeans: 1689 msec (SE +/- 4.66, N = 4; fewer is better)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 (ms; fewer is better):
Test: Scala Dotty: 454.3 (SE +/- 6.59, N = 15) MIN: 344.62 / MAX: 815.12
Test: Random Forest: 384.5 (SE +/- 0.53, N = 3) MIN: 357.94 / MAX: 465.15
Test: ALS Movie Lens: 7650.3 (SE +/- 72.83, N = 3) MIN: 7526.04 / MAX: 8473.18
Test: Apache Spark ALS: 2026.4 (SE +/- 6.70, N = 3) MIN: 1949.41 / MAX: 2109.84
Test: Apache Spark Bayes: 693.8 (SE +/- 1.16, N = 3) MIN: 500.78 / MAX: 696.1
Test: Savina Reactors.IO: 4193.8 (SE +/- 48.71, N = 3) MIN: 4126.1 / MAX: 6173.56
Test: Apache Spark PageRank: 1902.7 (SE +/- 15.45, N = 9) MIN: 1741.39 / MAX: 1997.09
Test: Finagle HTTP Requests: 1992.6 (SE +/- 22.11, N = 3) MIN: 1796.31 / MAX: 2233.14
Test: In-Memory Database Shootout: 1957.9 (SE +/- 26.27, N = 3) MIN: 1750.66 / MAX: 2219.24
Test: Akka Unbalanced Cobwebbed Tree: 7183.4 (SE +/- 20.79, N = 3) MIN: 5471.82 / MAX: 7209.81
Test: Genetic Algorithm Using Jenetics + Futures: 1063.3 (SE +/- 8.08, N = 15) MIN: 960.83 / MAX: 1135.87

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise externally of the test profile. Learn more via the OpenBenchmarking.org test page.
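For context, the level-19 and long-mode configurations graphed below correspond roughly to the following zstd command-line usage; this is a sketch of standard zstd flags, not the exact invocation used by the test profile, and the sample file name is hypothetical:

  # Compress a sample file at level 19, then with long-distance matching enabled.
  zstd -19 sample.tar -o sample.tar.zst
  zstd -19 --long sample.tar -o sample-long.tar.zst

  # Decompress to exercise the decompression path.
  zstd -d sample.tar.zst -o /dev/null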

Zstd Compression (MB/s; more is better):
Compression Level: 19 - Compression Speed: 80.5 (SE +/- 1.02, N = 3)
Compression Level: 19 - Decompression Speed: 4758.3 (SE +/- 0.18, N = 3)
Compression Level: 19, Long Mode - Compression Speed: 50.9 (SE +/- 0.40, N = 3)
Compression Level: 19, Long Mode - Decompression Speed: 4887.7 (SE +/- 9.98, N = 3)
1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
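The quality settings tested below map onto the reference cjxl encoder's quality option; as a rough sketch of standard cjxl usage (the input file names are placeholders, and this is not the test profile's exact command line):

  # Encode a PNG source and a JPEG source at quality 90 with the reference encoder.
  cjxl input.png output_png.jxl -q 90
  cjxl input.jpg output_jpg.jxl -q 90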

JPEG XL libjxl 0.7 (MP/s; more is better):
Input: PNG - Quality: 80: 13.58 (SE +/- 0.01, N = 3)
Input: PNG - Quality: 90: 13.43 (SE +/- 0.01, N = 3)
Input: JPEG - Quality: 80: 13.25 (SE +/- 0.02, N = 3)
Input: JPEG - Quality: 90: 13.09 (SE +/- 0.01, N = 3)
Input: PNG - Quality: 100: 1.06 (SE +/- 0.00, N = 3)
Input: JPEG - Quality: 100: 1.05 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
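The encode settings below correspond roughly to these cwebp flags; a sketch using standard libwebp options with a placeholder input file, not the test profile's exact arguments:

  # Default settings, quality 100, lossless, and the slowest/highest-compression method.
  cwebp sample.jpg -o default.webp
  cwebp -q 100 sample.jpg -o q100.webp
  cwebp -lossless sample.jpg -o lossless.webp
  cwebp -q 100 -m 6 sample.jpg -o q100_m6.webp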

WebP Image Encode 1.2.4 (MP/s; more is better):
Encode Settings: Default: 24.98 (SE +/- 0.28, N = 3)
Encode Settings: Quality 100: 16.16 (SE +/- 0.19, N = 3)
Encode Settings: Quality 100, Lossless: 2.30 (SE +/- 0.00, N = 3)
Encode Settings: Quality 100, Highest Compression: 5.00 (SE +/- 0.01, N = 3)
Encode Settings: Quality 100, Lossless, Highest Compression: 0.91 (SE +/- 0.00, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 (more is better):
Test: OFDM_Test: 195600000 Samples / Second (SE +/- 360555.13, N = 3)
Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM: eNb 624.9 Mb/s (SE +/- 2.64, N = 3); UE 182.0 Mb/s (SE +/- 0.40, N = 3)
Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM: eNb 633.1 Mb/s (SE +/- 1.34, N = 3); UE 233.3 Mb/s (SE +/- 0.12, N = 3)
Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM: eNb 677.5 Mb/s (SE +/- 5.48, N = 3); UE 199.2 Mb/s (SE +/- 0.71, N = 3)
Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM: eNb 683.7 Mb/s (SE +/- 8.14, N = 3); UE 242.6 Mb/s (SE +/- 2.07, N = 3)
Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM: eNb 224.3 Mb/s (SE +/- 0.24, N = 3); UE 107.6 Mb/s (SE +/- 0.24, N = 3)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark: 70.32 Mpix/sec (SE +/- 0.39, N = 3; more is better)
1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test: 18147 Requests Per Second (SE +/- 43.21, N = 3; more is better)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 (Frames Per Second; more is better):
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K: 44.87 (SE +/- 0.13, N = 3)
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K: 66.05 (SE +/- 0.50, N = 3)
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K: 85.34 (SE +/- 0.15, N = 3)
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K: 86.52 (SE +/- 0.10, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-AV1

SVT-AV1 1.2 (Frames Per Second; more is better):
Encoder Mode: Preset 4 - Input: Bosphorus 4K: 3.004 (SE +/- 0.026, N = 3)
Encoder Mode: Preset 8 - Input: Bosphorus 4K: 77.26 (SE +/- 0.77, N = 3)
Encoder Mode: Preset 10 - Input: Bosphorus 4K: 147.87 (SE +/- 1.70, N = 3)
Encoder Mode: Preset 12 - Input: Bosphorus 4K: 216.52 (SE +/- 2.00, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 (Frames Per Second; more is better):
Tuning: 1 - Input: Bosphorus 4K: 5.79 (SE +/- 0.02, N = 3)
Tuning: 7 - Input: Bosphorus 4K: 105.39 (SE +/- 1.00, N = 3)
Tuning: 10 - Input: Bosphorus 4K: 202.12 (SE +/- 1.60, N = 15)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 (Frames Per Second; more is better):
Tuning: VMAF Optimized - Input: Bosphorus 4K: 142.19 (SE +/- 3.19, N = 12)
Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K: 155.19 (SE +/- 0.68, N = 3)
Tuning: Visual Quality Optimized - Input: Bosphorus 4K: 123.49 (SE +/- 0.42, N = 3)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RT.ldr_alb_nrm.3840x2160: 0.57 Images / Sec (SE +/- 0.00, N = 3; more is better)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.0 - Benchmark: vklBenchmark ISPC: 161 Items / Sec (SE +/- 0.67, N = 3; more is better) MIN: 11 / MAX: 1931

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
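The integrated benchmark referenced above can be run directly from the 7-Zip command-line client; a minimal sketch (the binary may be named 7z, 7zz, or 7zr depending on the package installed):

  # Run 7-Zip's built-in LZMA benchmark, which reports compression and
  # decompression ratings in MIPS comparable to the figures below.
  7z b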

7-Zip Compression 22.01 (MIPS; more is better):
Test: Compression Rating: 182153 (SE +/- 1518.09, N = 3)
Test: Decompression Rating: 139981 (SE +/- 1890.40, N = 3)
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3-licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 (Render Ratio; more is better):
Sample Rate: 44100 - Buffer Size: 512: 6.145353 (SE +/- 0.006537, N = 3)
Sample Rate: 96000 - Buffer Size: 512: 4.818425 (SE +/- 0.002423, N = 3)
Sample Rate: 44100 - Buffer Size: 1024: 6.422835 (SE +/- 0.011125, N = 3)
Sample Rate: 96000 - Buffer Size: 1024: 4.854695 (SE +/- 0.001939, N = 3)
1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
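For orientation, building the kernel in those two configurations looks roughly like the following; this is a generic sketch of the upstream kernel make targets, not the test profile's exact wrapper:

  # Default configuration build for the host architecture.
  make defconfig
  time make -j"$(nproc)"

  # All-modules configuration, which builds every possible kernel module.
  make mrproper
  make allmodconfig
  time make -j"$(nproc)"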

Timed Linux Kernel Compilation 5.18 (Seconds; fewer is better):
Build: defconfig: 41.40 (SE +/- 0.39, N = 3)
Build: allmodconfig: 454.88 (SE +/- 0.43, N = 3)

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8 - Time To Compile: 256.70 Seconds (SE +/- 0.04, N = 3; fewer is better)

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.
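
A minimal sketch of the optimized release configuration being timed, assuming a CPython source checkout in ./cpython; the defconfig-style default build simply omits the configure flags:

    # Time a PGO + LTO CPython build, approximating the "Released Build" configuration.
    import os, subprocess, time

    src = "./cpython"  # assumed path to a CPython source checkout
    subprocess.run(["./configure", "--enable-optimizations", "--with-lto"], cwd=src, check=True)
    start = time.time()
    subprocess.run(["make", f"-j{os.cpu_count()}"], cwd=src, check=True)
    print(f"PGO + LTO build took {time.time() - start:.1f} seconds")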

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed CPython Compilation 3.10.6Build Configuration: DefaultUbuntu 22.10369121512.03

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed CPython Compilation 3.10.6Build Configuration: Released Build, PGO + LTO OptimizedUbuntu 22.104080120160200171.50

oneDNN

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.7Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUUbuntu 22.100.42850.8571.28551.7142.1425SE +/- 0.01885, N = 151.90430MIN: 1.571. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.7Harness: IP Shapes 3D - Data Type: f32 - Engine: CPUUbuntu 22.100.91781.83562.75343.67124.589SE +/- 0.02741, N = 34.07923MIN: 4.011. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.7Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPUUbuntu 22.101.29882.59763.89645.19526.494SE +/- 0.00365, N = 35.77228MIN: 5.561. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.7Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUUbuntu 22.10246810SE +/- 0.10591, N = 157.63132MIN: 2.841. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.7Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUUbuntu 22.100.77481.54962.32443.09923.874SE +/- 0.00262, N = 33.44347MIN: 3.411. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.7Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUUbuntu 22.105001000150020002500SE +/- 21.59, N = 32150.83MIN: 1989.041. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.7Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUUbuntu 22.102004006008001000SE +/- 1.98, N = 31112.82MIN: 1021.31. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.7Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPUUbuntu 22.100.33590.67181.00771.34361.6795SE +/- 0.090916, N = 151.492836MIN: 0.851. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path TracerUbuntu 22.1010002000300040005000SE +/- 6.89, N = 348311. (CXX) g++ options: -O3 -lm -ldl

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path TracerUbuntu 22.1012002400360048006000SE +/- 5.24, N = 357581. (CXX) g++ options: -O3 -lm -ldl

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path TracerUbuntu 22.1030K60K90K120K150KSE +/- 533.75, N = 31571851. (CXX) g++ options: -O3 -lm -ldl

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path TracerUbuntu 22.1040K80K120K160K200KSE +/- 169.22, N = 31875791. (CXX) g++ options: -O3 -lm -ldl

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path TracerUbuntu 22.1030060090012001500SE +/- 2.73, N = 312421. (CXX) g++ options: -O3 -lm -ldl

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path TracerUbuntu 22.1030060090012001500SE +/- 4.33, N = 314731. (CXX) g++ options: -O3 -lm -ldl

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path TracerUbuntu 22.108K16K24K32K40KSE +/- 48.68, N = 3390371. (CXX) g++ options: -O3 -lm -ldl

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path TracerUbuntu 22.1010K20K30K40K50KSE +/- 33.28, N = 3464951. (CXX) g++ options: -O3 -lm -ldl

Node.js Octane Benchmark

A Node.js version of the JavaScript Octane Benchmark. Learn more via the OpenBenchmarking.org test page.

Ubuntu 22.10: The test quit with a non-zero exit status. E: ReferenceError: GLOBAL is not defined

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Wasmer Compilation 2.3Time To CompileUbuntu 22.10714212835SE +/- 0.20, N = 330.251. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
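
A minimal sketch of timing an x264 transcode with FFmpeg in the spirit of these scenarios; the input file and preset are placeholders, not the exact vbench settings:

    # Time a libx264 transcode; "-f null -" discards the output so only encode speed is measured.
    import subprocess, time

    start = time.time()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mp4", "-c:v", "libx264", "-preset", "medium", "-f", "null", "-"],
        check=True,
    )
    print(f"libx264 transcode finished in {time.time() - start:.2f} seconds")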

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 5.1.2Encoder: libx264 - Scenario: LiveUbuntu 22.1048121620SE +/- 0.02, N = 314.301. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 5.1.2Encoder: libx264 - Scenario: LiveUbuntu 22.1080160240320400SE +/- 0.46, N = 3353.241. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 5.1.2Encoder: libx265 - Scenario: LiveUbuntu 22.10714212835SE +/- 0.06, N = 327.741. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 5.1.2Encoder: libx265 - Scenario: LiveUbuntu 22.104080120160200SE +/- 0.41, N = 3182.081. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 5.1.2Encoder: libx264 - Scenario: UploadUbuntu 22.10306090120150SE +/- 0.15, N = 3129.621. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 5.1.2Encoder: libx264 - Scenario: UploadUbuntu 22.10510152025SE +/- 0.02, N = 319.481. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 5.1.2Encoder: libx265 - Scenario: UploadUbuntu 22.1020406080100SE +/- 0.09, N = 379.001. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 5.1.2Encoder: libx265 - Scenario: UploadUbuntu 22.10714212835SE +/- 0.04, N = 331.961. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 5.1.2Encoder: libx264 - Scenario: PlatformUbuntu 22.1020406080100SE +/- 0.04, N = 399.291. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 5.1.2Encoder: libx264 - Scenario: PlatformUbuntu 22.1020406080100SE +/- 0.03, N = 376.291. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 5.1.2Encoder: libx265 - Scenario: PlatformUbuntu 22.10306090120150SE +/- 0.10, N = 3116.301. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 5.1.2Encoder: libx265 - Scenario: PlatformUbuntu 22.101530456075SE +/- 0.06, N = 365.141. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 5.1.2Encoder: libx264 - Scenario: Video On DemandUbuntu 22.1020406080100SE +/- 0.09, N = 399.251. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 5.1.2Encoder: libx264 - Scenario: Video On DemandUbuntu 22.1020406080100SE +/- 0.07, N = 376.321. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgSeconds, Fewer Is BetterFFmpeg 5.1.2Encoder: libx265 - Scenario: Video On DemandUbuntu 22.10306090120150SE +/- 0.14, N = 3116.321. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 5.1.2Encoder: libx265 - Scenario: Video On DemandUbuntu 22.101530456075SE +/- 0.08, N = 365.121. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: MagiUbuntu 22.1030060090012001500SE +/- 4.42, N = 31176.021. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: x25xUbuntu 22.102004006008001000SE +/- 4.19, N = 31128.001. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: scryptUbuntu 22.1070140210280350SE +/- 3.47, N = 3333.581. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: DeepcoinUbuntu 22.104K8K12K16K20KSE +/- 26.46, N = 3185201. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: RingcoinUbuntu 22.1012002400360048006000SE +/- 21.91, N = 35423.941. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Blake-2 SUbuntu 22.10160K320K480K640K800KSE +/- 9180.18, N = 37655871. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: GarlicoinUbuntu 22.107001400210028003500SE +/- 37.34, N = 33496.701. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: SkeincoinUbuntu 22.1030K60K90K120K150KSE +/- 1874.06, N = 41563431. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Myriad-GroestlUbuntu 22.104K8K12K16K20KSE +/- 98.66, N = 3171301. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: LBC, LBRY CreditsUbuntu 22.1011K22K33K44K55KSE +/- 141.89, N = 3525501. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Quad SHA-256, PyriteUbuntu 22.1040K80K120K160K200KSE +/- 120.14, N = 31986801. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Triple SHA-256, OnecoinUbuntu 22.1090K180K270K360K450KSE +/- 3729.11, N = 34344601. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
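
A minimal sketch of the same "openssl speed" measurements used here; -multi spreads the work across all logical CPUs to give a multi-threaded throughput figure:

    # Run the built-in OpenSSL speed benchmark for SHA256 and RSA4096 across all CPUs.
    import os, subprocess

    ncpu = str(os.cpu_count())
    subprocess.run(["openssl", "speed", "-multi", ncpu, "sha256"], check=True)
    subprocess.run(["openssl", "speed", "-multi", ncpu, "rsa4096"], check=True)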

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.0Algorithm: SHA256Ubuntu 22.108000M16000M24000M32000M40000MSE +/- 90907848.90, N = 3359569992331. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgsign/s, More Is BetterOpenSSL 3.0Algorithm: RSA4096Ubuntu 22.1012002400360048006000SE +/- 5.03, N = 35496.91. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenBenchmarking.orgverify/s, More Is BetterOpenSSL 3.0Algorithm: RSA4096Ubuntu 22.1080K160K240K320K400KSE +/- 142.92, N = 3358806.71. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgruns/s, More Is BetterNode.js V8 Web Tooling BenchmarkUbuntu 22.10612182430SE +/- 0.17, N = 326.33

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 8 - Buffer Length: 256 - Filter Length: 57Ubuntu 22.10200M400M600M800M1000MSE +/- 10982623.14, N = 38597466671. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the geometric mean across all queries performed, expressed as queries per minute. Learn more via the OpenBenchmarking.org test page.
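
A minimal sketch of the reported metric, the geometric mean of per-query rates; the sample values below are illustrative only, not measurements:

    # Geometric mean of per-query rates (queries per minute).
    import math

    queries_per_minute = [310.2, 295.7, 305.1]  # hypothetical per-query rates
    geo_mean = math.exp(sum(math.log(q) for q in queries_per_minute) / len(queries_per_minute))
    print(f"Queries Per Minute, Geo Mean: {geo_mean:.2f}")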

OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.5.4.19100M Rows Web Analytics Dataset, First Run / Cold CacheUbuntu 22.1070140210280350SE +/- 2.14, N = 15301.66MIN: 24.67 / MAX: 300001. ClickHouse server version 22.5.4.19 (official build).

OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.5.4.19100M Rows Web Analytics Dataset, Second RunUbuntu 22.1070140210280350SE +/- 1.50, N = 15307.65MIN: 24.43 / MAX: 300001. ClickHouse server version 22.5.4.19 (official build).

OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.5.4.19100M Rows Web Analytics Dataset, Third RunUbuntu 22.1070140210280350SE +/- 1.54, N = 15308.97MIN: 24.68 / MAX: 300001. ClickHouse server version 22.5.4.19 (official build).

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating the test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
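
A minimal PySpark sketch in the spirit of the SHA-512 benchmark step: generate rows locally and hash them. The row count, partition count, and column names here are illustrative rather than the harness's exact configuration:

    # Hash one million locally generated rows with SHA-512 across 100 partitions.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, sha2

    spark = SparkSession.builder.master("local[*]").appName("sha512-sketch").getOrCreate()
    df = spark.range(0, 1_000_000, numPartitions=100)
    hashed = df.withColumn("digest", sha2(col("id").cast("string"), 512))
    print(hashed.count())  # forces the computation
    spark.stop()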

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark TimeUbuntu 22.100.44780.89561.34341.79122.239SE +/- 0.01, N = 31.99

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Calculate Pi BenchmarkUbuntu 22.101224364860SE +/- 0.18, N = 351.76

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using DataframeUbuntu 22.100.77631.55262.32893.10523.8815SE +/- 0.03, N = 33.45

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Group By Test TimeUbuntu 22.100.59181.18361.77542.36722.959SE +/- 0.02, N = 32.63

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Repartition Test TimeUbuntu 22.100.22950.4590.68850.9181.1475SE +/- 0.01, N = 31.02

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Inner Join Test TimeUbuntu 22.100.20930.41860.62790.83721.0465SE +/- 0.03, N = 30.93

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test TimeUbuntu 22.100.17330.34660.51990.69320.8665SE +/- 0.02, N = 30.77

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark TimeUbuntu 22.100.45680.91361.37041.82722.284SE +/- 0.02, N = 152.03

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Calculate Pi BenchmarkUbuntu 22.101224364860SE +/- 0.06, N = 1552.24

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using DataframeUbuntu 22.100.73351.4672.20052.9343.6675SE +/- 0.04, N = 153.26

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Group By Test TimeUbuntu 22.100.5491.0981.6472.1962.745SE +/- 0.01, N = 152.44

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Repartition Test TimeUbuntu 22.100.24530.49060.73590.98121.2265SE +/- 0.01, N = 151.09

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Inner Join Test TimeUbuntu 22.100.21380.42760.64140.85521.069SE +/- 0.02, N = 150.95

OpenBenchmarking.orgSeconds, Fewer Is BetterApache Spark 3.3Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test TimeUbuntu 22.100.17780.35560.53340.71120.889SE +/- 0.01, N = 150.79

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), fixed-rate bonds with a flat forward curve, and repo securities repurchase agreements. FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.
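
For reference, a minimal Python sketch of the analytic Black-Scholes-Merton European call price that the Black-Scholes portion of FinanceBench revolves around; the input values are illustrative:

    # Analytic Black-Scholes-Merton European call price using only the standard library.
    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, T, r, sigma):
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    print(bs_call(S=100.0, K=105.0, T=1.0, r=0.02, sigma=0.25))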

OpenBenchmarking.orgms, Fewer Is BetterFinanceBench 2016-07-25Benchmark: Repo OpenMPUbuntu 22.104K8K12K16K20KSE +/- 17.57, N = 319315.831. (CXX) g++ options: -O3 -march=native -fopenmp

OpenBenchmarking.orgms, Fewer Is BetterFinanceBench 2016-07-25Benchmark: Bonds OpenMPUbuntu 22.107K14K21K28K35KSE +/- 28.94, N = 330942.011. (CXX) g++ options: -O3 -march=native -fopenmp

Graph500

This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data intensive loads and commonly tested on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.

Scale: 26

Ubuntu 22.10: The test quit with a non-zero exit status. E: mpirun noticed that process rank 2 with PID 0 on node phoronix-System-Product-Name exited on signal 9 (Killed).

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2022.1Implementation: MPI CPU - Input: water_GMX50_bareUbuntu 22.100.31820.63640.95461.27281.591SE +/- 0.002, N = 31.4141. (CXX) g++ options: -O3

HammerDB - MariaDB

This is a MariaDB MySQL database server benchmark making use of the HammerDB benchmarking / load testing tool. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 8 - Warehouses: 100Ubuntu 22.108K16K24K32K40K382361. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 8 - Warehouses: 100Ubuntu 22.1020K40K60K80K100K891211. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 8 - Warehouses: 250Ubuntu 22.107K14K21K28K35K310381. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 8 - Warehouses: 250Ubuntu 22.1015K30K45K60K75K721631. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 16 - Warehouses: 100Ubuntu 22.108K16K24K32K40K371821. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 16 - Warehouses: 100Ubuntu 22.1020K40K60K80K100K862771. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 16 - Warehouses: 250Ubuntu 22.108K16K24K32K40K371401. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 16 - Warehouses: 250Ubuntu 22.1020K40K60K80K100K865411. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 32 - Warehouses: 100Ubuntu 22.108K16K24K32K40K380081. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 32 - Warehouses: 100Ubuntu 22.1020K40K60K80K100K883151. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 32 - Warehouses: 250Ubuntu 22.108K16K24K32K40K356821. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 32 - Warehouses: 250Ubuntu 22.1020K40K60K80K100K828611. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 64 - Warehouses: 100Ubuntu 22.108K16K24K32K40K390631. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 64 - Warehouses: 100Ubuntu 22.1020K40K60K80K100K907681. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 64 - Warehouses: 250Ubuntu 22.108K16K24K32K40K384401. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.9.3Virtual Users: 64 - Warehouses: 250Ubuntu 22.1020K40K60K80K100K893321. (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
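
A minimal sketch of a CPU images/sec measurement with a stock Keras model; this is not the tf_cnn_benchmarks harness itself, just an illustration of the metric being reported:

    # Measure CPU inference throughput (images/sec) for ResNet-50 on random data.
    import time
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.ResNet50(weights=None)
    batch = np.random.rand(64, 224, 224, 3).astype("float32")
    model.predict(batch, verbose=0)  # warm-up
    start = time.time()
    for _ in range(10):
        model.predict(batch, verbose=0)
    print(f"{(10 * 64) / (time.time() - start):.1f} images/sec")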

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 16 - Model: AlexNetUbuntu 22.104080120160200SE +/- 0.36, N = 3162.47

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 32 - Model: AlexNetUbuntu 22.1050100150200250SE +/- 0.43, N = 3206.94

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 64 - Model: AlexNetUbuntu 22.1050100150200250SE +/- 0.42, N = 3235.17

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 256 - Model: AlexNetUbuntu 22.1060120180240300SE +/- 0.32, N = 3256.55

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 512 - Model: AlexNetUbuntu 22.1060120180240300SE +/- 0.08, N = 3264.59

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 16 - Model: GoogLeNetUbuntu 22.10306090120150SE +/- 0.12, N = 3117.39

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 16 - Model: ResNet-50Ubuntu 22.10918273645SE +/- 0.32, N = 339.70

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 32 - Model: GoogLeNetUbuntu 22.10306090120150SE +/- 0.44, N = 3113.16

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 32 - Model: ResNet-50Ubuntu 22.10918273645SE +/- 0.08, N = 338.51

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 64 - Model: GoogLeNetUbuntu 22.1020406080100SE +/- 0.22, N = 3111.20

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 64 - Model: ResNet-50Ubuntu 22.10918273645SE +/- 0.05, N = 337.68

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 256 - Model: GoogLeNetUbuntu 22.1020406080100SE +/- 0.20, N = 3109.10

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 256 - Model: ResNet-50Ubuntu 22.10918273645SE +/- 0.02, N = 337.06

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 512 - Model: GoogLeNetUbuntu 22.1020406080100SE +/- 0.36, N = 3109.34

Device: CPU - Batch Size: 512 - Model: ResNet-50

Ubuntu 22.10: The test quit with a non-zero exit status.

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
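
speedtest1 itself is a C program, so the following Python sqlite3 sketch only illustrates the kind of write-heavy workload being timed; table layout and row count are placeholders:

    # Time a bulk-insert workload against an in-memory SQLite database.
    import sqlite3, time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
    start = time.time()
    with conn:
        conn.executemany("INSERT INTO t (payload) VALUES (?)",
                         (("x" * 100,) for _ in range(100_000)))
    print(f"100k inserts in {time.time() - start:.2f} seconds")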

OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite Speedtest 3.30Timed Time - Size 1,000Ubuntu 22.10816243240SE +/- 0.03, N = 332.811. (CC) gcc options: -O2 -lz

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRawTherapeeTotal Benchmark TimeUbuntu 22.10816243240SE +/- 0.11, N = 332.351. RawTherapee, version 5.8, command line.

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamUbuntu 22.103691215SE +/- 0.04, N = 312.05

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamUbuntu 22.102004006008001000SE +/- 3.09, N = 3967.89

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-StreamUbuntu 22.103691215SE +/- 0.03, N = 311.10

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-StreamUbuntu 22.1020406080100SE +/- 0.21, N = 390.05

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamUbuntu 22.101122334455SE +/- 0.20, N = 350.40

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamUbuntu 22.1050100150200250SE +/- 0.71, N = 3236.60

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreamUbuntu 22.10714212835SE +/- 0.24, N = 332.18

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreamUbuntu 22.10714212835SE +/- 0.23, N = 331.07

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-StreamUbuntu 22.1020406080100SE +/- 0.23, N = 374.69

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-StreamUbuntu 22.104080120160200SE +/- 0.49, N = 3159.94

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-StreamUbuntu 22.101326395265SE +/- 0.15, N = 356.83

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-StreamUbuntu 22.1048121620SE +/- 0.05, N = 317.59

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamUbuntu 22.10306090120150SE +/- 0.21, N = 3153.13

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamUbuntu 22.1020406080100SE +/- 0.05, N = 378.24

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-StreamUbuntu 22.1020406080100SE +/- 0.14, N = 396.59

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-StreamUbuntu 22.103691215SE +/- 0.01, N = 310.35

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamUbuntu 22.1020406080100SE +/- 0.23, N = 3103.90

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamUbuntu 22.10306090120150SE +/- 0.26, N = 3115.35

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-StreamUbuntu 22.1020406080100SE +/- 0.05, N = 380.03

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-StreamUbuntu 22.103691215SE +/- 0.01, N = 312.49

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamUbuntu 22.101224364860SE +/- 0.07, N = 352.54

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamUbuntu 22.1050100150200250SE +/- 0.46, N = 3227.14

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-StreamUbuntu 22.10918273645SE +/- 0.14, N = 340.32

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-StreamUbuntu 22.10612182430SE +/- 0.09, N = 324.80

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-StreamUbuntu 22.103691215SE +/- 0.14, N = 312.19

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-StreamUbuntu 22.102004006008001000SE +/- 6.80, N = 3964.44

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-StreamUbuntu 22.103691215SE +/- 0.03, N = 311.05

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-StreamUbuntu 22.1020406080100SE +/- 0.21, N = 390.48

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
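
A minimal sketch of a memtier_benchmark invocation against a local Redis server; the option values echo the 50-client, 1:10 set/get configuration above, while the thread count, port, and test duration are assumptions:

    # Drive a Redis instance with memtier_benchmark (50 clients per thread, 1:10 set/get ratio).
    import subprocess

    subprocess.run([
        "memtier_benchmark",
        "--server=127.0.0.1", "--port=6379", "--protocol=redis",
        "--clients=50", "--threads=4", "--ratio=1:10", "--test-time=60",
    ], check=True)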

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1Ubuntu 22.10700K1400K2100K2800K3500KSE +/- 65972.61, N = 153124943.721. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 50 - Set To Get Ratio: 10:1Ubuntu 22.10700K1400K2100K2800K3500KSE +/- 38170.43, N = 43127334.751. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenBenchmarking.orgOps/sec, More Is Bettermemtier_benchmark 1.4Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10Ubuntu 22.10700K1400K2100K2800K3500KSE +/- 41614.46, N = 33456218.181. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
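
A minimal sketch of driving a single stress-ng stressor by hand; "--cpu 0" starts one worker per online CPU and --metrics-brief prints the bogo-ops/s summary that these graphs report:

    # Run the CPU stressor for 30 seconds and print the bogo-ops/s metrics.
    import subprocess

    subprocess.run(
        ["stress-ng", "--cpu", "0", "--timeout", "30s", "--metrics-brief"],
        check=True,
    )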

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: MMAPUbuntu 22.10160320480640800SE +/- 1.45, N = 3742.411. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: NUMAUbuntu 22.10150300450600750SE +/- 1.84, N = 3681.801. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: FutexUbuntu 22.10800K1600K2400K3200K4000KSE +/- 33712.97, N = 153538590.311. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: MEMFDUbuntu 22.10400800120016002000SE +/- 18.36, N = 32049.371. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: MutexUbuntu 22.104M8M12M16M20MSE +/- 110264.57, N = 1516748823.881. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: AtomicUbuntu 22.1070K140K210K280K350KSE +/- 7871.09, N = 15344361.751. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: CryptoUbuntu 22.109K18K27K36K45KSE +/- 294.46, N = 1542378.791. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: MallocUbuntu 22.108M16M24M32M40MSE +/- 149024.22, N = 336241645.591. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: ForkingUbuntu 22.1020K40K60K80K100KSE +/- 721.90, N = 3113514.431. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: IO_uringUbuntu 22.106K12K18K24K30KSE +/- 56.03, N = 327676.331. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: SENDFILEUbuntu 22.10130K260K390K520K650KSE +/- 3074.02, N = 3588014.941. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: CPU CacheUbuntu 22.1020406080100SE +/- 1.23, N = 1598.771. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: CPU StressUbuntu 22.1011K22K33K44K55KSE +/- 538.64, N = 351634.541. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: SemaphoresUbuntu 22.10800K1600K2400K3200K4000KSE +/- 1451.53, N = 33538392.411. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Matrix MathUbuntu 22.1020K40K60K80K100KSE +/- 588.05, N = 3109789.421. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Vector MathUbuntu 22.1030K60K90K120K150KSE +/- 966.54, N = 9119832.031. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: x86_64 RdRandUbuntu 22.1020K40K60K80K100KSE +/- 9.58, N = 382767.761. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Memory CopyingUbuntu 22.1016003200480064008000SE +/- 10.70, N = 37385.391. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Socket ActivityUbuntu 22.105K10K15K20K25KSE +/- 568.16, N = 1224287.071. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Context SwitchingUbuntu 22.103M6M9M12M15MSE +/- 181309.12, N = 414703175.321. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Glibc C String FunctionsUbuntu 22.10900K1800K2700K3600K4500KSE +/- 40661.15, N = 154307014.941. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Glibc Qsort Data SortingUbuntu 22.1090180270360450SE +/- 0.68, N = 3411.391. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: System V Message PassingUbuntu 22.103M6M9M12M15MSE +/- 181986.71, N = 313432520.511. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

spaCy

The spaCy library is an open-source, Python-based solution for advanced natural language processing (NLP) and a leading library in that space. This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
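
A minimal sketch of a tokens/sec measurement with spaCy, assuming the en_core_web_lg model has already been downloaded; the sample texts are placeholders:

    # Measure spaCy throughput in tokens/sec over a batch of documents.
    import time
    import spacy

    nlp = spacy.load("en_core_web_lg")
    texts = ["The quick brown fox jumps over the lazy dog."] * 2000
    start = time.time()
    tokens = sum(len(doc) for doc in nlp.pipe(texts))
    print(f"{tokens / (time.time() - start):.0f} tokens/sec")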

OpenBenchmarking.orgtokens/sec, More Is BetterspaCy 3.4.1Model: en_core_web_lgUbuntu 22.104K8K12K16K20KSE +/- 31.07, N = 320855

OpenBenchmarking.orgtokens/sec, More Is BetterspaCy 3.4.1Model: en_core_web_trfUbuntu 22.105001000150020002500SE +/- 23.73, N = 32523

Blender

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.3Blend File: BMW27 - Compute: CPU-OnlyUbuntu 22.101224364860SE +/- 0.16, N = 351.61

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.3Blend File: Classroom - Compute: CPU-OnlyUbuntu 22.10306090120150SE +/- 0.34, N = 3147.64

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.3Blend File: Fishy Cat - Compute: CPU-OnlyUbuntu 22.1020406080100SE +/- 0.11, N = 375.32

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.3Blend File: Barbershop - Compute: CPU-OnlyUbuntu 22.10120240360480600SE +/- 0.42, N = 3576.35

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.3Blend File: Pabellon Barcelona - Compute: CPU-OnlyUbuntu 22.104080120160200SE +/- 0.13, N = 3178.95

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgClocks, Fewer Is Betterctx_clockContext Switch TimeUbuntu 22.10306090120150SE +/- 0.00, N = 3132

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
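
A minimal sketch of OpenVINO's bundled benchmark_app, which provides this built-in benchmarking support; the model path is a placeholder for whichever IR (.xml/.bin) model is being measured:

    # Run benchmark_app against a model on the CPU device for 30 seconds.
    import subprocess

    subprocess.run(
        ["benchmark_app", "-m", "model.xml", "-d", "CPU", "-t", "30"],
        check=True,
    )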

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Face Detection FP16 - Device: CPUUbuntu 22.101.1432.2863.4294.5725.715SE +/- 0.01, N = 35.081. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Face Detection FP16 - Device: CPUUbuntu 22.1030060090012001500SE +/- 3.93, N = 31570.46MIN: 1396.06 / MAX: 1856.591. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Person Detection FP16 - Device: CPUUbuntu 22.100.80551.6112.41653.2224.0275SE +/- 0.01, N = 33.581. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Detection FP16 - Device: CPUUbuntu 22.105001000150020002500SE +/- 6.08, N = 32222.92MIN: 1682.82 / MAX: 2975.441. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Person Detection FP32 - Device: CPUUbuntu 22.100.79881.59762.39643.19523.994SE +/- 0.02, N = 33.551. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Detection FP32 - Device: CPUUbuntu 22.105001000150020002500SE +/- 5.80, N = 32238.02MIN: 1692.38 / MAX: 2991.491. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16 - Device: CPUUbuntu 22.1080160240320400SE +/- 1.93, N = 3372.851. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16 - Device: CPUUbuntu 22.10510152025SE +/- 0.11, N = 321.43MIN: 12.24 / MAX: 94.191. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Face Detection FP16-INT8 - Device: CPUUbuntu 22.1048121620SE +/- 0.01, N = 318.231. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Face Detection FP16-INT8 - Device: CPUUbuntu 22.1090180270360450SE +/- 0.21, N = 3438.23MIN: 270.29 / MAX: 1085.791. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUUbuntu 22.102004006008001000SE +/- 0.85, N = 3916.051. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUUbuntu 22.10246810SE +/- 0.01, N = 38.72MIN: 5.93 / MAX: 54.151. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16 - Device: CPUUbuntu 22.10100200300400500SE +/- 0.50, N = 3468.501. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16 - Device: CPUUbuntu 22.101224364860SE +/- 0.05, N = 351.14MIN: 22.47 / MAX: 182.991. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Machine Translation EN To DE FP16 - Device: CPUUbuntu 22.101428425670SE +/- 0.09, N = 363.481. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Machine Translation EN To DE FP16 - Device: CPUUbuntu 22.10306090120150SE +/- 0.17, N = 3125.97MIN: 91.09 / MAX: 325.991. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUUbuntu 22.10400800120016002000SE +/- 1.15, N = 31638.991. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUUbuntu 22.1048121620SE +/- 0.01, N = 314.63MIN: 6.68 / MAX: 120.591. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUUbuntu 22.10160320480640800SE +/- 2.34, N = 3728.911. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUUbuntu 22.103691215SE +/- 0.03, N = 310.96MIN: 7.62 / MAX: 52.61. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUUbuntu 22.103K6K9K12K15KSE +/- 10.91, N = 314593.711. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUUbuntu 22.100.3690.7381.1071.4761.845SE +/- 0.00, N = 31.64MIN: 0.87 / MAX: 9.061. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUUbuntu 22.107K14K21K28K35KSE +/- 40.02, N = 333018.171. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUUbuntu 22.100.1620.3240.4860.6480.81SE +/- 0.00, N = 30.72MIN: 0.42 / MAX: 4.51. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: BedroomUbuntu 22.100.91151.8232.73453.6464.5575SE +/- 0.040, N = 34.051

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: SupercarUbuntu 22.103691215SE +/- 0.02, N = 311.80

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyBench 2018-02-16Total For Average Test TimesUbuntu 22.10100200300400500SE +/- 0.33, N = 3474

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
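
A minimal sketch of running a subset of the suite by hand; the benchmark names follow the suite's own identifiers and match several of the results below:

    # Run a few selected PyPerformance benchmarks.
    import subprocess

    subprocess.run(["pyperformance", "run", "--benchmarks", "nbody,go,chaos"], check=True)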

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: goUbuntu 22.10306090120150SE +/- 0.33, N = 3112

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to3Ubuntu 22.10306090120150SE +/- 0.58, N = 3158

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: chaosUbuntu 22.101020304050SE +/- 0.13, N = 344.9

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: floatUbuntu 22.101122334455SE +/- 0.10, N = 346.7

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbodyUbuntu 22.101428425670SE +/- 0.32, N = 361.4

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlibUbuntu 22.10246810SE +/- 0.02, N = 38.77

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: raytraceUbuntu 22.1050100150200250SE +/- 0.88, N = 3208

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: json_loadsUbuntu 22.103691215SE +/- 0.00, N = 311.5

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaesUbuntu 22.101122334455SE +/- 0.15, N = 350.7

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compileUbuntu 22.1020406080100SE +/- 0.30, N = 382.1

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: python_startupUbuntu 22.10246810SE +/- 0.04, N = 37.39

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_templateUbuntu 22.10510152025SE +/- 0.06, N = 322.2

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pickle_pure_pythonUbuntu 22.104080120160200SE +/- 0.67, N = 3197

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterNatron 2.4.3Input: SpaceshipUbuntu 22.10246810SE +/- 0.03, N = 36.7

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
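
A minimal sketch of CPU inference with ONNX Runtime's Python API; the model file is a placeholder for whichever ONNX Zoo model is being measured, and dynamic input dimensions are simply set to 1:

    # Load an ONNX model on the CPU execution provider and run one inference on random data.
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    out = sess.run(None, {inp.name: np.random.rand(*shape).astype(np.float32)})
    print(out[0].shape)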

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: GPT-2 - Device: CPU - Executor: StandardUbuntu 22.102K4K6K8K10KSE +/- 19.87, N = 3105781. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: yolov4 - Device: CPU - Executor: StandardUbuntu 22.10150300450600750SE +/- 0.29, N = 36871. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: bertsquad-12 - Device: CPU - Executor: StandardUbuntu 22.1030060090012001500SE +/- 0.60, N = 312101. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: fcn-resnet101-11 - Device: CPU - Executor: StandardUbuntu 22.10306090120150SE +/- 0.17, N = 31331. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: ArcFace ResNet-100 - Device: CPU - Executor: StandardUbuntu 22.10130260390520650SE +/- 0.17, N = 35981. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: super-resolution-10 - Device: CPU - Executor: StandardUbuntu 22.1015003000450060007500SE +/- 2.84, N = 368421. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: EmilyUbuntu 22.104080120160200164.94

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Disney MaterialUbuntu 22.102040608010084.86

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Material TesterUbuntu 22.102040608010090.78

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteUbuntu 22.10300K600K900K1200K1500KSE +/- 2367.31, N = 31617596

EnCodec

EnCodec is a Facebook/Meta-developed AI method for compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using its novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and the performance measurement is the time to encode the EnCodec file from WAV. Learn more via the OpenBenchmarking.org test page.
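
A rough sketch of encoding audio with the encodec Python package, assuming its documented EncodecModel API; the input here is synthetic 24 kHz mono audio rather than the JFK speech used by the test profile:

    # Encode ~30 seconds of synthetic audio at a 6 kbps target bandwidth and time it.
    import time
    import torch
    from encodec import EncodecModel

    model = EncodecModel.encodec_model_24khz()
    model.set_target_bandwidth(6.0)              # kbps target, as in the 6 kbps run
    wav = torch.randn(1, 1, 24000 * 30)          # [batch, channels, samples]
    start = time.time()
    with torch.no_grad():
        encoded = model.encode(wav)
    print(f"encoded in {time.time() - start:.2f} seconds")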

EnCodec 0.1.1 - Seconds (fewer is better) - Ubuntu 22.10:

  Target Bandwidth: 3 kbps:    19.18 (SE +/- 0.17, N = 3)
  Target Bandwidth: 6 kbps:    19.30 (SE +/- 0.23, N = 3)
  Target Bandwidth: 24 kbps:   21.86 (SE +/- 0.24, N = 3)
  Target Bandwidth: 1.5 kbps:  18.55 (SE +/- 0.18, N = 3)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high-performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
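
The actual equation-of-state and isoneutral-mixing kernels live in the pyhpc-benchmarks repository; purely as an illustration of how a per-run timing on the NumPy CPU backend is obtained, here is a toy sketch at the same problem size (16384). The kernel arithmetic below is a placeholder and is NOT the real TEOS-10 equation of state.

# Toy sketch of per-iteration timing of a vectorized NumPy kernel.
# The arithmetic is a stand-in, not the pyhpc-benchmarks kernel.
import time
import numpy as np

n = 16384
rng = np.random.default_rng(0)
s, t, p = (rng.random(n) for _ in range(3))   # salinity/temperature/pressure stand-ins

def toy_kernel(s, t, p):
    # Placeholder vectorized arithmetic of similar shape to an EOS evaluation.
    return (999.8 + 0.8 * s - 0.2 * t + 1e-4 * p) * (1.0 - 1e-6 * t * t)

times = []
for _ in range(15):                            # 15 timed runs, as in the results below
    start = time.perf_counter()
    toy_kernel(s, t, p)
    times.append(time.perf_counter() - start)

print(f"best of 15: {min(times):.6f} s")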

Device: CPU - Backend: JAX - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: JAX - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: Numba - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: Numba - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Seconds (fewer is better) - Ubuntu 22.10:

  Benchmark: Equation of State:  0.001 (SE +/- 0.000, N = 15)
  Benchmark: Isoneutral Mixing:  0.004 (SE +/- 0.000, N = 15)

Device: CPU - Backend: Aesara - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: Aesara - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: PyTorch - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: PyTorch - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: TensorFlow - Project Size: 16384 - Benchmark: Equation of State

Ubuntu 22.10: The test run did not produce a result.

Device: CPU - Backend: TensorFlow - Project Size: 16384 - Benchmark: Isoneutral Mixing

Ubuntu 22.10: The test run did not produce a result.

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that integrates with various content creation applications such as SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU as well as NVIDIA CUDA/RTX-based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5.02 - Mode: CPU - vsamples (more is better) - Ubuntu 22.10: 28734 (SE +/- 190.70, N = 3)

CloudSuite Graph Analytics

CloudSuite Graph Analytics - ms (fewer is better) - Ubuntu 22.10: 9985 (SE +/- 63.87, N = 3)

CloudSuite In-Memory Analytics

CloudSuite In-Memory Analytics - ms (fewer is better) - Ubuntu 22.10: 10160 (SE +/- 47.35, N = 3)

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for issuing HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
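
The test profile itself drives the wrk load generator; purely as an illustration of the "fixed duration, N concurrent connections, self-signed HTTPS" idea described above, here is a small stdlib-only Python sketch. The URL, run duration, and connection count are placeholder assumptions.

# Minimal fixed-duration HTTP load sketch with N concurrent workers.
# Illustrative only; the benchmark itself uses wrk, not this script.
import ssl
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://localhost:8089/index.html"   # hypothetical local nginx endpoint
DURATION = 10                               # seconds per run
CONNECTIONS = 100                           # concurrent clients

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE             # accept the self-signed certificate

def worker(deadline: float) -> int:
    done = 0
    while time.time() < deadline:
        with urllib.request.urlopen(URL, context=ctx) as resp:
            resp.read()
        done += 1
    return done

deadline = time.time() + DURATION
with ThreadPoolExecutor(max_workers=CONNECTIONS) as pool:
    totals = list(pool.map(worker, [deadline] * CONNECTIONS))

print(f"{sum(totals) / DURATION:.1f} requests/sec across {CONNECTIONS} connections")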

Connections: 1

Ubuntu 22.10: The test quit with a non-zero exit status.

Connections: 20

Ubuntu 22.10: The test quit with a non-zero exit status.

nginx 1.23.2 - Requests Per Second (more is better) - Ubuntu 22.10:

  Connections: 100:   204910.46 (SE +/- 1164.13, N = 3)
  Connections: 200:   205841.24 (SE +/- 636.13, N = 3)
  Connections: 500:   203069.78 (SE +/- 492.12, N = 3)
  Connections: 1000:  192021.95 (SE +/- 624.40, N = 3)

1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

364 Results Shown

DDraceNetwork
DDraceNetwork
DDraceNetwork
DDraceNetwork
DDraceNetwork
DDraceNetwork
DDraceNetwork
DDraceNetwork
Tesseract:
  1920 x 1080
  3840 x 2160
Unvanquished:
  1920 x 1080 - High
  3840 x 2160 - High
  1920 x 1080 - Ultra
  3840 x 2160 - Ultra
Warsow:
  1920 x 1080
  3840 x 2160
Xonotic:
  1920 x 1080 - Ultra
  3840 x 2160 - Ultra
  1920 x 1080 - Ultimate
  3840 x 2160 - Ultimate
QuantLib
High Performance Conjugate Gradient
NAS Parallel Benchmarks:
  BT.C
  CG.C
  EP.C
  EP.D
  FT.C
  IS.D
  LU.C
  MG.C
  SP.B
  SP.C
Rodinia:
  OpenMP LavaMD
  OpenMP HotSpot3D
  OpenMP Leukocyte
  OpenMP CFD Solver
  OpenMP Streamcluster
NAMD
Polyhedron Fortran Benchmarks:
  ac
  air
  mdbx
  doduc
  linpk
  tfft2
  aermod
  rnflow
  induct2
  protein
  capacita
  channel2
  fatigue2
  gas_dyn2
  test_fpu2
  mp_prop_design
NWChem
OpenFOAM:
  drivaerFastback, Small Mesh Size - Mesh Time
  drivaerFastback, Small Mesh Size - Execution Time
OpenRadioss:
  Bumper Beam
  Cell Phone Drop Test
  Bird Strike on Windshield
  Rubber O-Ring Seal Installation
Xmrig:
  Monero - 1M
  Wownero - 1M
Chia Blockchain VDF:
  Square Plain C++
  Square Assembly Optimized
Java Gradle Build
DaCapo Benchmark:
  H2
  Jython
  Tradesoap
  Tradebeans
Renaissance:
  Scala Dotty
  Rand Forest
  ALS Movie Lens
  Apache Spark ALS
  Apache Spark Bayes
  Savina Reactors.IO
  Apache Spark PageRank
  Finagle HTTP Requests
  In-Memory Database Shootout
  Akka Unbalanced Cobwebbed Tree
  Genetic Algorithm Using Jenetics + Futures
Zstd Compression:
  19 - Compression Speed
  19 - Decompression Speed
  19, Long Mode - Compression Speed
  19, Long Mode - Decompression Speed
JPEG XL libjxl:
  PNG - 80
  PNG - 90
  JPEG - 80
  JPEG - 90
  PNG - 100
  JPEG - 100
WebP Image Encode:
  Default
  Quality 100
  Quality 100, Lossless
  Quality 100, Highest Compression
  Quality 100, Lossless, Highest Compression
srsRAN:
  OFDM_Test
  4G PHY_DL_Test 100 PRB MIMO 64-QAM
  4G PHY_DL_Test 100 PRB MIMO 64-QAM
  4G PHY_DL_Test 100 PRB SISO 64-QAM
  4G PHY_DL_Test 100 PRB SISO 64-QAM
  4G PHY_DL_Test 100 PRB MIMO 256-QAM
  4G PHY_DL_Test 100 PRB MIMO 256-QAM
  4G PHY_DL_Test 100 PRB SISO 256-QAM
  4G PHY_DL_Test 100 PRB SISO 256-QAM
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM
LibRaw
Node.js Express HTTP Load Test
AOM AV1:
  Speed 6 Realtime - Bosphorus 4K
  Speed 8 Realtime - Bosphorus 4K
  Speed 9 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 4K
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 10 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
SVT-HEVC:
  1 - Bosphorus 4K
  7 - Bosphorus 4K
  10 - Bosphorus 4K
SVT-VP9:
  VMAF Optimized - Bosphorus 4K
  PSNR/SSIM Optimized - Bosphorus 4K
  Visual Quality Optimized - Bosphorus 4K
Intel Open Image Denoise
OpenVKL
7-Zip Compression:
  Compression Rating
  Decompression Rating
Stargate Digital Audio Workstation:
  44100 - 512
  96000 - 512
  44100 - 1024
  96000 - 1024
Timed Linux Kernel Compilation:
  defconfig
  allmodconfig
Timed Node.js Compilation
Timed CPython Compilation:
  Default
  Released Build, PGO + LTO Optimized
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 3D - f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
OSPRay Studio:
  1 - 4K - 1 - Path Tracer
  3 - 4K - 1 - Path Tracer
  1 - 4K - 32 - Path Tracer
  3 - 4K - 32 - Path Tracer
  1 - 1080p - 1 - Path Tracer
  3 - 1080p - 1 - Path Tracer
  1 - 1080p - 32 - Path Tracer
  3 - 1080p - 32 - Path Tracer
Timed Wasmer Compilation
FFmpeg:
  libx264 - Live:
    Seconds
    FPS
  libx265 - Live:
    Seconds
    FPS
  libx264 - Upload:
    Seconds
    FPS
  libx265 - Upload:
    Seconds
    FPS
  libx264 - Platform:
    Seconds
    FPS
  libx265 - Platform:
    Seconds
    FPS
  libx264 - Video On Demand:
    Seconds
    FPS
  libx265 - Video On Demand:
    Seconds
    FPS
Cpuminer-Opt:
  Magi
  x25x
  scrypt
  Deepcoin
  Ringcoin
  Blake-2 S
  Garlicoin
  Skeincoin
  Myriad-Groestl
  LBC, LBRY Credits
  Quad SHA-256, Pyrite
  Triple SHA-256, Onecoin
OpenSSL:
  SHA256
  RSA4096
  RSA4096
Node.js V8 Web Tooling Benchmark
Liquid-DSP
ClickHouse:
  100M Rows Web Analytics Dataset, First Run / Cold Cache
  100M Rows Web Analytics Dataset, Second Run
  100M Rows Web Analytics Dataset, Third Run
Apache Spark:
  1000000 - 100 - SHA-512 Benchmark Time
  1000000 - 100 - Calculate Pi Benchmark
  1000000 - 100 - Calculate Pi Benchmark Using Dataframe
  1000000 - 100 - Group By Test Time
  1000000 - 100 - Repartition Test Time
  1000000 - 100 - Inner Join Test Time
  1000000 - 100 - Broadcast Inner Join Test Time
  1000000 - 500 - SHA-512 Benchmark Time
  1000000 - 500 - Calculate Pi Benchmark
  1000000 - 500 - Calculate Pi Benchmark Using Dataframe
  1000000 - 500 - Group By Test Time
  1000000 - 500 - Repartition Test Time
  1000000 - 500 - Inner Join Test Time
  1000000 - 500 - Broadcast Inner Join Test Time
FinanceBench:
  Repo OpenMP
  Bonds OpenMP
GROMACS
HammerDB - MariaDB:
  8 - 100:
    New Orders Per Minute
    Transactions Per Minute
  8 - 250:
    New Orders Per Minute
    Transactions Per Minute
  16 - 100:
    New Orders Per Minute
    Transactions Per Minute
  16 - 250:
    New Orders Per Minute
    Transactions Per Minute
  32 - 100:
    New Orders Per Minute
    Transactions Per Minute
  32 - 250:
    New Orders Per Minute
    Transactions Per Minute
  64 - 100:
    New Orders Per Minute
    Transactions Per Minute
  64 - 250:
    New Orders Per Minute
    Transactions Per Minute
TensorFlow:
  CPU - 16 - AlexNet
  CPU - 32 - AlexNet
  CPU - 64 - AlexNet
  CPU - 256 - AlexNet
  CPU - 512 - AlexNet
  CPU - 16 - GoogLeNet
  CPU - 16 - ResNet-50
  CPU - 32 - GoogLeNet
  CPU - 32 - ResNet-50
  CPU - 64 - GoogLeNet
  CPU - 64 - ResNet-50
  CPU - 256 - GoogLeNet
  CPU - 256 - ResNet-50
  CPU - 512 - GoogLeNet
SQLite Speedtest
RawTherapee
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    items/sec
    ms/batch
memtier_benchmark:
  Redis - 50 - 1:1
  Redis - 50 - 10:1
  Redis - 50 - 1:10
Stress-NG:
  MMAP
  NUMA
  Futex
  MEMFD
  Mutex
  Atomic
  Crypto
  Malloc
  Forking
  IO_uring
  SENDFILE
  CPU Cache
  CPU Stress
  Semaphores
  Matrix Math
  Vector Math
  x86_64 RdRand
  Memory Copying
  Socket Activity
  Context Switching
  Glibc C String Functions
  Glibc Qsort Data Sorting
  System V Message Passing
spaCy:
  en_core_web_lg
  en_core_web_trf
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
  Fishy Cat - CPU-Only
  Barbershop - CPU-Only
  Pabellon Barcelona - CPU-Only
ctx_clock
OpenVINO:
  Face Detection FP16 - CPU:
    FPS
    ms
  Person Detection FP16 - CPU:
    FPS
    ms
  Person Detection FP32 - CPU:
    FPS
    ms
  Vehicle Detection FP16 - CPU:
    FPS
    ms
  Face Detection FP16-INT8 - CPU:
    FPS
    ms
  Vehicle Detection FP16-INT8 - CPU:
    FPS
    ms
  Weld Porosity Detection FP16 - CPU:
    FPS
    ms
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
  Weld Porosity Detection FP16-INT8 - CPU:
    FPS
    ms
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    FPS
    ms
IndigoBench:
  CPU - Bedroom
  CPU - Supercar
PyBench
PyPerformance:
  go
  2to3
  chaos
  float
  nbody
  pathlib
  raytrace
  json_loads
  crypto_pyaes
  regex_compile
  python_startup
  django_template
  pickle_pure_python
Natron
ONNX Runtime:
  GPT-2 - CPU - Standard
  yolov4 - CPU - Standard
  bertsquad-12 - CPU - Standard
  fcn-resnet101-11 - CPU - Standard
  ArcFace ResNet-100 - CPU - Standard
  super-resolution-10 - CPU - Standard
Appleseed:
  Emily
  Disney Material
  Material Tester
PHPBench
EnCodec:
  3 kbps
  6 kbps
  24 kbps
  1.5 kbps
PyHPC Benchmarks:
  CPU - Numpy - 16384 - Equation of State
  CPU - Numpy - 16384 - Isoneutral Mixing
Chaos Group V-RAY
CloudSuite Graph Analytics
CloudSuite In-Memory Analytics
nginx:
  100
  200
  500
  1000