Core i9 13900K Linux Distros

Intel Core i9-13900K testing with an ASUS PRIME Z790-P WIFI (0602 BIOS) motherboard and an AMD Radeon RX 6800 XT 16GB graphics card on Ubuntu 22.10, via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2211045-NE-COREI913911.

Core i9 13900K Linux Distros: Ubuntu 22.10 system configuration

Processor: Intel Core i9-13900K @ 4.00GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (0602 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
Network: Realtek RTL8125 2.5GbE + Intel Device 7a70
OS: Ubuntu 22.10
Kernel: 5.19.0-23-generic (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.2.1 (LLVM 15.0.2 DRM 3.47)
Vulkan: 1.3.224
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x10e - Thermald 2.5.1
Graphics Notes: BAR1 / Visible vRAM Size: 16368 MB - vBIOS Version: 113-D4120500-101
Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu1)
Python Notes: Python 3.10.7
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Result overview (Ubuntu 22.10): headline values for every test in this run, covering gaming (DDraceNetwork, Tesseract, Unvanquished, Warsow, Xonotic), HPC and scientific computing (QuantLib, HPCG, NAS Parallel Benchmarks, Rodinia, NAMD, Polyhedron, NWChem, OpenFOAM, OpenRadioss, GROMACS, FinanceBench, miniBUDE), cryptocurrency mining (Xmrig, Chia Blockchain VDF, cpuminer-opt), JVM workloads (Java Gradle Build, DaCapo, Renaissance), compression and image codecs (Zstd, 7-Zip, JPEG XL, WebP, LibRaw), video and audio encoding (AOM AV1, SVT-AV1, SVT-HEVC, SVT-VP9, FFmpeg, Encodec), code compilation (Linux kernel, Node.js, CPython, Wasmer), machine learning (oneDNN, TensorFlow, ONNX Runtime, OpenVINO, Neural Magic DeepSparse, spaCy), rendering (OSPRay Studio, OpenVKL, Intel Open Image Denoise, Blender, appleseed, IndigoBench, Chaos Group V-RAY, Natron), databases and servers (ClickHouse, HammerDB MariaDB, Redis via memtier-benchmark, nginx, Apache Spark, SQLite Speedtest, CloudSuite, Node.js Express), and assorted system and language tests (srsRAN, stress-ng, ctx-clock, OpenSSL, LiquidDSP, Stargate DAW, RawTherapee, PyPerformance, PyBench, PyHPC, PHPBench, Node.js web tooling). Individual results follow below.

DDraceNetwork

Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: RaiNyMore2

Frames Per Second, More Is Better. DDraceNetwork 16.3.2. Ubuntu 22.10: 4696.93 (SE +/- 20.98, N = 3). 1. (CXX) g++ options: -O3 -lrt -ldl -lvulkan -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0 -fuse-ld=gold
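Each graph in this report pairs the averaged result with an SE figure and the number of runs N. As a minimal sketch, assuming the SE shown is the standard error of the mean across the N runs, the reported figures can be reproduced from per-run results as follows; the three per-run FPS values here are hypothetical, chosen only to land near the reported 4696.93 FPS average:

    # Hypothetical per-run FPS values for N = 3 runs of the benchmark above.
    import statistics

    runs = [4678.2, 4690.1, 4722.5]

    mean = statistics.mean(runs)
    # Standard error of the mean: sample standard deviation divided by sqrt(N).
    se = statistics.stdev(runs) / len(runs) ** 0.5

    print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(runs)})")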

DDraceNetwork

Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: RaiNyMore2 - Total Frame Time

Milliseconds, Fewer Is Better. DDraceNetwork 16.3.2. Ubuntu 22.10: Min: 0.03 / Avg: 0.21 / Max: 1.39. 1. (CXX) g++ options: -O3 -lrt -ldl -lvulkan -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0 -fuse-ld=gold

DDraceNetwork

Resolution: 3840 x 2160 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: RaiNyMore2

Frames Per Second, More Is Better. DDraceNetwork 16.3.2. Ubuntu 22.10: 2159.59 (SE +/- 2.70, N = 3). 1. (CXX) g++ options: -O3 -lrt -ldl -lvulkan -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0 -fuse-ld=gold

DDraceNetwork

Resolution: 3840 x 2160 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: RaiNyMore2 - Total Frame Time

Milliseconds, Fewer Is Better. DDraceNetwork 16.3.2. Ubuntu 22.10: Min: 0.03 / Avg: 0.75 / Max: 3.04. 1. (CXX) g++ options: -O3 -lrt -ldl -lvulkan -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0 -fuse-ld=gold

DDraceNetwork

Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: Multeasymap

Frames Per Second, More Is Better. DDraceNetwork 16.3.2. Ubuntu 22.10: 5913.12 (SE +/- 5.38, N = 3). 1. (CXX) g++ options: -O3 -lrt -ldl -lvulkan -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0 -fuse-ld=gold

DDraceNetwork

Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: Multeasymap - Total Frame Time

Milliseconds, Fewer Is Better. DDraceNetwork 16.3.2. Ubuntu 22.10: Min: 0.08 / Avg: 0.17 / Max: 1.23. 1. (CXX) g++ options: -O3 -lrt -ldl -lvulkan -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0 -fuse-ld=gold

DDraceNetwork

Resolution: 3840 x 2160 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: Multeasymap

Frames Per Second, More Is Better. DDraceNetwork 16.3.2. Ubuntu 22.10: 2994.51 (SE +/- 3.45, N = 3). 1. (CXX) g++ options: -O3 -lrt -ldl -lvulkan -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0 -fuse-ld=gold

DDraceNetwork

Resolution: 3840 x 2160 - Mode: Fullscreen - Renderer: Vulkan - Zoom: Default - Demo: Multeasymap - Total Frame Time

Milliseconds, Fewer Is Better. DDraceNetwork 16.3.2. Ubuntu 22.10: Min: 0.09 / Avg: 0.34 / Max: 1.47. 1. (CXX) g++ options: -O3 -lrt -ldl -lvulkan -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0 -fuse-ld=gold

Tesseract

Resolution: 1920 x 1080

Frames Per Second, More Is Better. Tesseract 2014-05-12. Ubuntu 22.10: 999.44 (SE +/- 0.56, N = 3)

Tesseract

Resolution: 3840 x 2160

Frames Per Second, More Is Better. Tesseract 2014-05-12. Ubuntu 22.10: 896.02 (SE +/- 3.57, N = 3)

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: High

Frames Per Second, More Is Better. Unvanquished 0.53. Ubuntu 22.10: 683.9 (SE +/- 2.63, N = 3)

Unvanquished

Resolution: 3840 x 2160 - Effects Quality: High

Frames Per Second, More Is Better. Unvanquished 0.53. Ubuntu 22.10: 665.9 (SE +/- 6.27, N = 3)

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: Ultra

Frames Per Second, More Is Better. Unvanquished 0.53. Ubuntu 22.10: 671.3 (SE +/- 1.58, N = 3)

Unvanquished

Resolution: 3840 x 2160 - Effects Quality: Ultra

Frames Per Second, More Is Better. Unvanquished 0.53. Ubuntu 22.10: 664.7 (SE +/- 0.21, N = 3)

Warsow

Resolution: 1920 x 1080

Frames Per Second, More Is Better. Warsow 2.5 Beta. Ubuntu 22.10: 965.8 (SE +/- 1.51, N = 3)

Warsow

Resolution: 3840 x 2160

Frames Per Second, More Is Better. Warsow 2.5 Beta. Ubuntu 22.10: 951.3 (SE +/- 5.47, N = 3)

Xonotic

Resolution: 1920 x 1080 - Effects Quality: Ultra

Frames Per Second, More Is Better. Xonotic 0.8.5. Ubuntu 22.10: 696.93 (SE +/- 1.76, N = 3). MIN: 375 / MAX: 1188

Xonotic

Resolution: 3840 x 2160 - Effects Quality: Ultra

Frames Per Second, More Is Better. Xonotic 0.8.5. Ubuntu 22.10: 692.06 (SE +/- 2.13, N = 3). MIN: 411 / MAX: 1142

Xonotic

Resolution: 1920 x 1080 - Effects Quality: Ultimate

Frames Per Second, More Is Better. Xonotic 0.8.5. Ubuntu 22.10: 540.60 (SE +/- 0.59, N = 3). MIN: 101 / MAX: 1094

Xonotic

Resolution: 3840 x 2160 - Effects Quality: Ultimate

Frames Per Second, More Is Better. Xonotic 0.8.5. Ubuntu 22.10: 527.86 (SE +/- 1.29, N = 3). MIN: 98 / MAX: 1077

QuantLib

MFLOPS, More Is Better. QuantLib 1.21. Ubuntu 22.10: 5198.7 (SE +/- 69.40, N = 3). 1. (CXX) g++ options: -O3 -march=native -rdynamic

High Performance Conjugate Gradient

GFLOP/s, More Is Better. High Performance Conjugate Gradient 3.1. Ubuntu 22.10: 10.15 (SE +/- 0.02, N = 3). 1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

NAS Parallel Benchmarks

Test / Class: BT.C

Total Mop/s, More Is Better. NAS Parallel Benchmarks 3.4. Ubuntu 22.10: 49771.29 (SE +/- 33.54, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks

Test / Class: CG.C

Total Mop/s, More Is Better. NAS Parallel Benchmarks 3.4. Ubuntu 22.10: 8583.03 (SE +/- 24.89, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks

Test / Class: EP.C

Total Mop/s, More Is Better. NAS Parallel Benchmarks 3.4. Ubuntu 22.10: 3262.42 (SE +/- 0.66, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks

Test / Class: EP.D

Total Mop/s, More Is Better. NAS Parallel Benchmarks 3.4. Ubuntu 22.10: 3049.53 (SE +/- 10.67, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks

Test / Class: FT.C

Total Mop/s, More Is Better. NAS Parallel Benchmarks 3.4. Ubuntu 22.10: 22382.81 (SE +/- 373.18, N = 15). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks

Test / Class: IS.D

Total Mop/s, More Is Better. NAS Parallel Benchmarks 3.4. Ubuntu 22.10: 1263.16 (SE +/- 15.74, N = 4). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks

Test / Class: LU.C

Total Mop/s, More Is Better. NAS Parallel Benchmarks 3.4. Ubuntu 22.10: 53311.89 (SE +/- 218.90, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks

Test / Class: MG.C

Total Mop/s, More Is Better. NAS Parallel Benchmarks 3.4. Ubuntu 22.10: 24905.50 (SE +/- 295.81, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks

Test / Class: SP.B

Total Mop/s, More Is Better. NAS Parallel Benchmarks 3.4. Ubuntu 22.10: 22786.60 (SE +/- 227.34, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks

Test / Class: SP.C

Total Mop/s, More Is Better. NAS Parallel Benchmarks 3.4. Ubuntu 22.10: 15473.90 (SE +/- 32.81, N = 3). 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

Rodinia

Test: OpenMP LavaMD

Seconds, Fewer Is Better. Rodinia 3.1. Ubuntu 22.10: 84.65 (SE +/- 0.14, N = 3). 1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP HotSpot3D

Seconds, Fewer Is Better. Rodinia 3.1. Ubuntu 22.10: 49.66 (SE +/- 0.32, N = 15). 1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP Leukocyte

Seconds, Fewer Is Better. Rodinia 3.1. Ubuntu 22.10: 49.98 (SE +/- 0.10, N = 3). 1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP CFD Solver

Seconds, Fewer Is Better. Rodinia 3.1. Ubuntu 22.10: 6.168 (SE +/- 0.053, N = 8). 1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP Streamcluster

Seconds, Fewer Is Better. Rodinia 3.1. Ubuntu 22.10: 7.297 (SE +/- 0.015, N = 3). 1. (CXX) g++ options: -O2 -lOpenCL

NAMD

ATPase Simulation - 327,506 Atoms

days/ns, Fewer Is Better. NAMD 2.14. Ubuntu 22.10: 0.60982 (SE +/- 0.00121, N = 3)
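For reference, the days/ns figure NAMD reports here is the reciprocal of the more commonly quoted ns/day throughput; a quick, hypothetical conversion of the result above:

    # Convert the reported days/ns figure into ns of simulation per day of wall time.
    days_per_ns = 0.60982
    ns_per_day = 1.0 / days_per_ns
    print(f"{ns_per_day:.2f} ns/day")  # roughly 1.64 ns/day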

Polyhedron Fortran Benchmarks

Benchmark: ac

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 3.77

Polyhedron Fortran Benchmarks

Benchmark: air

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 0.93

Polyhedron Fortran Benchmarks

Benchmark: mdbx

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 3.02

Polyhedron Fortran Benchmarks

Benchmark: doduc

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 3.38

Polyhedron Fortran Benchmarks

Benchmark: linpk

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 1.34

Polyhedron Fortran Benchmarks

Benchmark: tfft2

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 12.1

Polyhedron Fortran Benchmarks

Benchmark: aermod

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 2.77

Polyhedron Fortran Benchmarks

Benchmark: rnflow

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 9.54

Polyhedron Fortran Benchmarks

Benchmark: induct2

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 11.07

Polyhedron Fortran Benchmarks

Benchmark: protein

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 6.93

Polyhedron Fortran Benchmarks

Benchmark: capacita

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 5.13

Polyhedron Fortran Benchmarks

Benchmark: channel2

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 29.3

Polyhedron Fortran Benchmarks

Benchmark: fatigue2

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 21.88

Polyhedron Fortran Benchmarks

Benchmark: gas_dyn2

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 25.83

Polyhedron Fortran Benchmarks

Benchmark: test_fpu2

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 13.99

Polyhedron Fortran Benchmarks

Benchmark: mp_prop_design

Seconds, Fewer Is Better. Polyhedron Fortran Benchmarks. Ubuntu 22.10: 25.77

NWChem

Input: C240 Buckyball

Seconds, Fewer Is Better. NWChem 7.0.2. Ubuntu 22.10: 4268.4. 1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz -lcomex -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Mesh Time

Seconds, Fewer Is Better. OpenFOAM 10. Ubuntu 22.10: 27.34. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Execution Time

Seconds, Fewer Is Better. OpenFOAM 10. Ubuntu 22.10: 151.34. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenRadioss

Model: Bumper Beam

Seconds, Fewer Is Better. OpenRadioss 2022.10.13. Ubuntu 22.10: 114.66 (SE +/- 0.19, N = 3)

OpenRadioss

Model: Cell Phone Drop Test

Seconds, Fewer Is Better. OpenRadioss 2022.10.13. Ubuntu 22.10: 68.10 (SE +/- 0.98, N = 3)

OpenRadioss

Model: Bird Strike on Windshield

Seconds, Fewer Is Better. OpenRadioss 2022.10.13. Ubuntu 22.10: 183.75 (SE +/- 2.45, N = 3)

OpenRadioss

Model: Rubber O-Ring Seal Installation

Seconds, Fewer Is Better. OpenRadioss 2022.10.13. Ubuntu 22.10: 108.62 (SE +/- 0.31, N = 3)

Xmrig

Variant: Monero - Hash Count: 1M

H/s, More Is Better. Xmrig 6.18.1. Ubuntu 22.10: 9652.5 (SE +/- 65.64, N = 3). 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: Wownero - Hash Count: 1M

H/s, More Is Better. Xmrig 6.18.1. Ubuntu 22.10: 16463.2 (SE +/- 35.72, N = 3). 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Chia Blockchain VDF

Test: Square Plain C++

IPS, More Is Better. Chia Blockchain VDF 1.0.7. Ubuntu 22.10: 252933 (SE +/- 133.33, N = 3). 1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

Chia Blockchain VDF

Test: Square Assembly Optimized

IPS, More Is Better. Chia Blockchain VDF 1.0.7. Ubuntu 22.10: 269067 (SE +/- 218.58, N = 3). 1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

Java Gradle Build

Gradle Build: Reactor

Seconds, Fewer Is Better. Java Gradle Build. Ubuntu 22.10: 142.80 (SE +/- 1.44, N = 6)

DaCapo Benchmark

Java Test: H2

msec, Fewer Is Better. DaCapo Benchmark 9.12-MR1. Ubuntu 22.10: 2091 (SE +/- 34.76, N = 20)

DaCapo Benchmark

Java Test: Jython

msec, Fewer Is Better. DaCapo Benchmark 9.12-MR1. Ubuntu 22.10: 1710 (SE +/- 8.38, N = 4)

DaCapo Benchmark

Java Test: Tradesoap

msec, Fewer Is Better. DaCapo Benchmark 9.12-MR1. Ubuntu 22.10: 1638 (SE +/- 14.59, N = 7)

DaCapo Benchmark

Java Test: Tradebeans

msec, Fewer Is Better. DaCapo Benchmark 9.12-MR1. Ubuntu 22.10: 1689 (SE +/- 4.66, N = 4)

Renaissance

Test: Scala Dotty

ms, Fewer Is Better. Renaissance 0.14. Ubuntu 22.10: 454.3 (SE +/- 6.59, N = 15). MIN: 344.62 / MAX: 815.12

Renaissance

Test: Random Forest

ms, Fewer Is Better. Renaissance 0.14. Ubuntu 22.10: 384.5 (SE +/- 0.53, N = 3). MIN: 357.94 / MAX: 465.15

Renaissance

Test: ALS Movie Lens

ms, Fewer Is Better. Renaissance 0.14. Ubuntu 22.10: 7650.3 (SE +/- 72.83, N = 3). MIN: 7526.04 / MAX: 8473.18

Renaissance

Test: Apache Spark ALS

ms, Fewer Is Better. Renaissance 0.14. Ubuntu 22.10: 2026.4 (SE +/- 6.70, N = 3). MIN: 1949.41 / MAX: 2109.84

Renaissance

Test: Apache Spark Bayes

ms, Fewer Is Better. Renaissance 0.14. Ubuntu 22.10: 693.8 (SE +/- 1.16, N = 3). MIN: 500.78 / MAX: 696.1

Renaissance

Test: Savina Reactors.IO

ms, Fewer Is Better. Renaissance 0.14. Ubuntu 22.10: 4193.8 (SE +/- 48.71, N = 3). MIN: 4126.1 / MAX: 6173.56

Renaissance

Test: Apache Spark PageRank

ms, Fewer Is Better. Renaissance 0.14. Ubuntu 22.10: 1902.7 (SE +/- 15.45, N = 9). MIN: 1741.39 / MAX: 1997.09

Renaissance

Test: Finagle HTTP Requests

ms, Fewer Is Better. Renaissance 0.14. Ubuntu 22.10: 1992.6 (SE +/- 22.11, N = 3). MIN: 1796.31 / MAX: 2233.14

Renaissance

Test: In-Memory Database Shootout

ms, Fewer Is Better. Renaissance 0.14. Ubuntu 22.10: 1957.9 (SE +/- 26.27, N = 3). MIN: 1750.66 / MAX: 2219.24

Renaissance

Test: Akka Unbalanced Cobwebbed Tree

ms, Fewer Is Better. Renaissance 0.14. Ubuntu 22.10: 7183.4 (SE +/- 20.79, N = 3). MIN: 5471.82 / MAX: 7209.81

Renaissance

Test: Genetic Algorithm Using Jenetics + Futures

ms, Fewer Is Better. Renaissance 0.14. Ubuntu 22.10: 1063.3 (SE +/- 8.08, N = 15). MIN: 960.83 / MAX: 1135.87

Zstd Compression

Compression Level: 19 - Compression Speed

MB/s, More Is Better. Zstd Compression. Ubuntu 22.10: 80.5 (SE +/- 1.02, N = 3). 1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

Zstd Compression

Compression Level: 19 - Decompression Speed

MB/s, More Is Better. Zstd Compression. Ubuntu 22.10: 4758.3 (SE +/- 0.18, N = 3). 1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

Zstd Compression

Compression Level: 19, Long Mode - Compression Speed

MB/s, More Is Better. Zstd Compression. Ubuntu 22.10: 50.9 (SE +/- 0.40, N = 3). 1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

Zstd Compression

Compression Level: 19, Long Mode - Decompression Speed

MB/s, More Is Better. Zstd Compression. Ubuntu 22.10: 4887.7 (SE +/- 9.98, N = 3). 1. *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***

JPEG XL libjxl

Input: PNG - Quality: 80

MP/s, More Is Better. JPEG XL libjxl 0.7. Ubuntu 22.10: 13.58 (SE +/- 0.01, N = 3). 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl

Input: PNG - Quality: 90

MP/s, More Is Better. JPEG XL libjxl 0.7. Ubuntu 22.10: 13.43 (SE +/- 0.01, N = 3). 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl

Input: JPEG - Quality: 80

MP/s, More Is Better. JPEG XL libjxl 0.7. Ubuntu 22.10: 13.25 (SE +/- 0.02, N = 3). 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl

Input: JPEG - Quality: 90

MP/s, More Is Better. JPEG XL libjxl 0.7. Ubuntu 22.10: 13.09 (SE +/- 0.01, N = 3). 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl

Input: PNG - Quality: 100

MP/s, More Is Better. JPEG XL libjxl 0.7. Ubuntu 22.10: 1.06 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl

Input: JPEG - Quality: 100

MP/s, More Is Better. JPEG XL libjxl 0.7. Ubuntu 22.10: 1.05 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

WebP Image Encode

Encode Settings: Default

MP/s, More Is Better. WebP Image Encode 1.2.4. Ubuntu 22.10: 24.98 (SE +/- 0.28, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode

Encode Settings: Quality 100

MP/s, More Is Better. WebP Image Encode 1.2.4. Ubuntu 22.10: 16.16 (SE +/- 0.19, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode

Encode Settings: Quality 100, Lossless

MP/s, More Is Better. WebP Image Encode 1.2.4. Ubuntu 22.10: 2.30 (SE +/- 0.00, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode

Encode Settings: Quality 100, Highest Compression

MP/s, More Is Better. WebP Image Encode 1.2.4. Ubuntu 22.10: 5.00 (SE +/- 0.01, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode

Encode Settings: Quality 100, Lossless, Highest Compression

MP/s, More Is Better. WebP Image Encode 1.2.4. Ubuntu 22.10: 0.91 (SE +/- 0.00, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O2 -lm

srsRAN

Test: OFDM_Test

Samples / Second, More Is Better. srsRAN 22.04.1. Ubuntu 22.10: 195600000 (SE +/- 360555.13, N = 3). 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

srsRAN

Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM

eNb Mb/s, More Is Better. srsRAN 22.04.1. Ubuntu 22.10: 624.9 (SE +/- 2.64, N = 3). 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

srsRAN

Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM

UE Mb/s, More Is Better. srsRAN 22.04.1. Ubuntu 22.10: 182.0 (SE +/- 0.40, N = 3). 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

srsRAN

Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM

eNb Mb/s, More Is Better. srsRAN 22.04.1. Ubuntu 22.10: 633.1 (SE +/- 1.34, N = 3). 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

srsRAN

Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM

UE Mb/s, More Is Better. srsRAN 22.04.1. Ubuntu 22.10: 233.3 (SE +/- 0.12, N = 3). 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

srsRAN

Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM

eNb Mb/s, More Is Better. srsRAN 22.04.1. Ubuntu 22.10: 677.5 (SE +/- 5.48, N = 3). 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

srsRAN

Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM

UE Mb/s, More Is Better. srsRAN 22.04.1. Ubuntu 22.10: 199.2 (SE +/- 0.71, N = 3). 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

srsRAN

Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM

eNb Mb/s, More Is Better. srsRAN 22.04.1. Ubuntu 22.10: 683.7 (SE +/- 8.14, N = 3). 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

srsRAN

Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM

UE Mb/s, More Is Better. srsRAN 22.04.1. Ubuntu 22.10: 242.6 (SE +/- 2.07, N = 3). 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

srsRAN

Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM

eNb Mb/s, More Is Better. srsRAN 22.04.1. Ubuntu 22.10: 224.3 (SE +/- 0.24, N = 3). 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

srsRAN

Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM

UE Mb/s, More Is Better. srsRAN 22.04.1. Ubuntu 22.10: 107.6 (SE +/- 0.24, N = 3). 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

LibRaw

Post-Processing Benchmark

Mpix/sec, More Is Better. LibRaw 0.20. Ubuntu 22.10: 70.32 (SE +/- 0.39, N = 3). 1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Node.js Express HTTP Load Test

Requests Per Second, More Is Better. Node.js Express HTTP Load Test. Ubuntu 22.10: 18147 (SE +/- 43.21, N = 3)

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K

Frames Per Second, More Is Better. AOM AV1 3.5. Ubuntu 22.10: 44.87 (SE +/- 0.13, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K

Frames Per Second, More Is Better. AOM AV1 3.5. Ubuntu 22.10: 66.05 (SE +/- 0.50, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K

Frames Per Second, More Is Better. AOM AV1 3.5. Ubuntu 22.10: 85.34 (SE +/- 0.15, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K

Frames Per Second, More Is Better. AOM AV1 3.5. Ubuntu 22.10: 86.52 (SE +/- 0.10, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

Frames Per Second, More Is Better. SVT-AV1 1.2. Ubuntu 22.10: 3.004 (SE +/- 0.026, N = 3). 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

Frames Per Second, More Is Better. SVT-AV1 1.2. Ubuntu 22.10: 77.26 (SE +/- 0.77, N = 3). 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 10 - Input: Bosphorus 4K

Frames Per Second, More Is Better. SVT-AV1 1.2. Ubuntu 22.10: 147.87 (SE +/- 1.70, N = 3). 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

Frames Per Second, More Is Better. SVT-AV1 1.2. Ubuntu 22.10: 216.52 (SE +/- 2.00, N = 3). 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-HEVC

Tuning: 1 - Input: Bosphorus 4K

Frames Per Second, More Is Better. SVT-HEVC 1.5.0. Ubuntu 22.10: 5.79 (SE +/- 0.02, N = 3). 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-HEVC

Tuning: 7 - Input: Bosphorus 4K

Frames Per Second, More Is Better. SVT-HEVC 1.5.0. Ubuntu 22.10: 105.39 (SE +/- 1.00, N = 3). 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-HEVC

Tuning: 10 - Input: Bosphorus 4K

Frames Per Second, More Is Better. SVT-HEVC 1.5.0. Ubuntu 22.10: 202.12 (SE +/- 1.60, N = 15). 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-VP9

Tuning: VMAF Optimized - Input: Bosphorus 4K

Frames Per Second, More Is Better. SVT-VP9 0.3. Ubuntu 22.10: 142.19 (SE +/- 3.19, N = 12). 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9

Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K

Frames Per Second, More Is Better. SVT-VP9 0.3. Ubuntu 22.10: 155.19 (SE +/- 0.68, N = 3). 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9

Tuning: Visual Quality Optimized - Input: Bosphorus 4K

Frames Per Second, More Is Better. SVT-VP9 0.3. Ubuntu 22.10: 123.49 (SE +/- 0.42, N = 3). 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Intel Open Image Denoise

Run: RT.ldr_alb_nrm.3840x2160

Images / Sec, More Is Better. Intel Open Image Denoise 1.4.0. Ubuntu 22.10: 0.57 (SE +/- 0.00, N = 3)

OpenVKL

Benchmark: vklBenchmark ISPC

Items / Sec, More Is Better. OpenVKL 1.0. Ubuntu 22.10: 161 (SE +/- 0.67, N = 3). MIN: 11 / MAX: 1931

7-Zip Compression

Test: Compression Rating

MIPS, More Is Better. 7-Zip Compression 22.01. Ubuntu 22.10: 182153 (SE +/- 1518.09, N = 3). 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression

Test: Decompression Rating

MIPS, More Is Better. 7-Zip Compression 22.01. Ubuntu 22.10: 139981 (SE +/- 1890.40, N = 3). 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Stargate Digital Audio Workstation

Sample Rate: 44100 - Buffer Size: 512

Render Ratio, More Is Better. Stargate Digital Audio Workstation 21.10.9. Ubuntu 22.10: 6.145353 (SE +/- 0.006537, N = 3). 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Stargate Digital Audio Workstation

Sample Rate: 96000 - Buffer Size: 512

Render Ratio, More Is Better. Stargate Digital Audio Workstation 21.10.9. Ubuntu 22.10: 4.818425 (SE +/- 0.002423, N = 3). 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Stargate Digital Audio Workstation

Sample Rate: 44100 - Buffer Size: 1024

Render Ratio, More Is Better. Stargate Digital Audio Workstation 21.10.9. Ubuntu 22.10: 6.422835 (SE +/- 0.011125, N = 3). 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Stargate Digital Audio Workstation

Sample Rate: 96000 - Buffer Size: 1024

Render Ratio, More Is Better. Stargate Digital Audio Workstation 21.10.9. Ubuntu 22.10: 4.854695 (SE +/- 0.001939, N = 3). 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Timed Linux Kernel Compilation

Build: defconfig

Seconds, Fewer Is Better. Timed Linux Kernel Compilation 5.18. Ubuntu 22.10: 41.40 (SE +/- 0.39, N = 3)

Timed Linux Kernel Compilation

Build: allmodconfig

Seconds, Fewer Is Better. Timed Linux Kernel Compilation 5.18. Ubuntu 22.10: 454.88 (SE +/- 0.43, N = 3)

Timed Node.js Compilation

Time To Compile

Seconds, Fewer Is Better. Timed Node.js Compilation 18.8. Ubuntu 22.10: 256.70 (SE +/- 0.04, N = 3)

Timed CPython Compilation

Build Configuration: Default

Seconds, Fewer Is Better. Timed CPython Compilation 3.10.6. Ubuntu 22.10: 12.03

Timed CPython Compilation

Build Configuration: Released Build, PGO + LTO Optimized

Seconds, Fewer Is Better. Timed CPython Compilation 3.10.6. Ubuntu 22.10: 171.50

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. Ubuntu 22.10: 1.90430 (SE +/- 0.01885, N = 15). MIN: 1.57. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. Ubuntu 22.10: 4.07923 (SE +/- 0.02741, N = 3). MIN: 4.01. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. Ubuntu 22.10: 5.77228 (SE +/- 0.00365, N = 3). MIN: 5.56. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. Ubuntu 22.10: 7.63132 (SE +/- 0.10591, N = 15). MIN: 2.84. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. Ubuntu 22.10: 3.44347 (SE +/- 0.00262, N = 3). MIN: 3.41. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. Ubuntu 22.10: 2150.83 (SE +/- 21.59, N = 3). MIN: 1989.04. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

ms, Fewer Is Better. oneDNN 2.7. Ubuntu 22.10: 1112.82 (SE +/- 1.98, N = 3). MIN: 1021.3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.7Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPUUbuntu 22.100.33590.67181.00771.34361.6795SE +/- 0.090916, N = 151.492836MIN: 0.851. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OSPRay Studio 0.11 - Renderer: Path Tracer - ms, Fewer Is Better - (CXX) g++ options: -O3 -lm -ldl

Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Ubuntu 22.10: 4831 (SE +/- 6.89, N = 3)
Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Ubuntu 22.10: 5758 (SE +/- 5.24, N = 3)
Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Ubuntu 22.10: 157185 (SE +/- 533.75, N = 3)
Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Ubuntu 22.10: 187579 (SE +/- 169.22, N = 3)
Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Ubuntu 22.10: 1242 (SE +/- 2.73, N = 3)
Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Ubuntu 22.10: 1473 (SE +/- 4.33, N = 3)
Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Ubuntu 22.10: 39037 (SE +/- 48.68, N = 3)
Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Ubuntu 22.10: 46495 (SE +/- 33.28, N = 3)

Timed Wasmer Compilation

Time To Compile

Timed Wasmer Compilation 2.3 - Time To Compile - Seconds, Fewer Is Better - Ubuntu 22.10: 30.25 (SE +/- 0.20, N = 3) - (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

FFmpeg 5.1.2 - Seconds (Fewer Is Better) and FPS (More Is Better) per scenario - (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Encoder: libx264 - Scenario: Live - Ubuntu 22.10: 14.30 Seconds (SE +/- 0.02, N = 3); 353.24 FPS (SE +/- 0.46, N = 3)
Encoder: libx265 - Scenario: Live - Ubuntu 22.10: 27.74 Seconds (SE +/- 0.06, N = 3); 182.08 FPS (SE +/- 0.41, N = 3)
Encoder: libx264 - Scenario: Upload - Ubuntu 22.10: 129.62 Seconds (SE +/- 0.15, N = 3); 19.48 FPS (SE +/- 0.02, N = 3)
Encoder: libx265 - Scenario: Upload - Ubuntu 22.10: 79.00 Seconds (SE +/- 0.09, N = 3); 31.96 FPS (SE +/- 0.04, N = 3)
Encoder: libx264 - Scenario: Platform - Ubuntu 22.10: 99.29 Seconds (SE +/- 0.04, N = 3); 76.29 FPS (SE +/- 0.03, N = 3)
Encoder: libx265 - Scenario: Platform - Ubuntu 22.10: 116.30 Seconds (SE +/- 0.10, N = 3); 65.14 FPS (SE +/- 0.06, N = 3)
Encoder: libx264 - Scenario: Video On Demand - Ubuntu 22.10: 99.25 Seconds (SE +/- 0.09, N = 3); 76.32 FPS (SE +/- 0.07, N = 3)
Encoder: libx265 - Scenario: Video On Demand - Ubuntu 22.10: 116.32 Seconds (SE +/- 0.14, N = 3); 65.12 FPS (SE +/- 0.08, N = 3)
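
The FFmpeg scenarios above are driven through the ffmpeg CLI. A rough Python sketch of measuring encode FPS for the two codecs, assuming a local source clip named input.y4m (the file name and preset are placeholders, not the exact arguments the test profile uses):

```python
import re
import subprocess

SOURCE = "input.y4m"  # placeholder clip; the benchmark ships its own source video

def encode_fps(codec: str) -> float:
    """Encode SOURCE with the given codec, discard the output, parse the fps ffmpeg reports."""
    result = subprocess.run(
        ["ffmpeg", "-i", SOURCE, "-c:v", codec, "-preset", "medium", "-f", "null", "-"],
        capture_output=True, text=True, check=True,
    )
    # ffmpeg prints progress like "... fps=123 ..." on stderr; keep the last match.
    matches = re.findall(r"fps=\s*([\d.]+)", result.stderr)
    return float(matches[-1]) if matches else 0.0

for codec in ("libx264", "libx265"):
    print(codec, encode_fps(codec), "fps")
```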

Cpuminer-Opt 3.20.3 - kH/s, More Is Better - (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Algorithm: Magi - Ubuntu 22.10: 1176.02 (SE +/- 4.42, N = 3)
Algorithm: x25x - Ubuntu 22.10: 1128.00 (SE +/- 4.19, N = 3)
Algorithm: scrypt - Ubuntu 22.10: 333.58 (SE +/- 3.47, N = 3)
Algorithm: Deepcoin - Ubuntu 22.10: 18520 (SE +/- 26.46, N = 3)
Algorithm: Ringcoin - Ubuntu 22.10: 5423.94 (SE +/- 21.91, N = 3)
Algorithm: Blake-2 S - Ubuntu 22.10: 765587 (SE +/- 9180.18, N = 3)
Algorithm: Garlicoin - Ubuntu 22.10: 3496.70 (SE +/- 37.34, N = 3)
Algorithm: Skeincoin - Ubuntu 22.10: 156343 (SE +/- 1874.06, N = 4)
Algorithm: Myriad-Groestl - Ubuntu 22.10: 17130 (SE +/- 98.66, N = 3)
Algorithm: LBC, LBRY Credits - Ubuntu 22.10: 52550 (SE +/- 141.89, N = 3)
Algorithm: Quad SHA-256, Pyrite - Ubuntu 22.10: 198680 (SE +/- 120.14, N = 3)
Algorithm: Triple SHA-256, Onecoin - Ubuntu 22.10: 434460 (SE +/- 3729.11, N = 3)

OpenSSL

Algorithm: SHA256

OpenSSL 3.0 - Algorithm: SHA256 - byte/s, More Is Better - Ubuntu 22.10: 35956999233 (SE +/- 90907848.90, N = 3) - (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: RSA4096

OpenSSL 3.0 - Algorithm: RSA4096 - sign/s, More Is Better - Ubuntu 22.10: 5496.9 (SE +/- 5.03, N = 3) - (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL

Algorithm: RSA4096

OpenSSL 3.0 - Algorithm: RSA4096 - verify/s, More Is Better - Ubuntu 22.10: 358806.7 (SE +/- 142.92, N = 3) - (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
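
As a point of reference for the SHA256 figure, hashing throughput can be approximated from Python with hashlib. This single-threaded sketch is not equivalent to the multi-threaded OpenSSL speed run the test performs, but it shows what the byte/s metric is measuring:

```python
import hashlib
import time

# Hash 1 GiB of zeros in 16 MiB chunks and report single-thread throughput.
chunk = bytes(16 * 1024 * 1024)
total = 64 * len(chunk)

digest = hashlib.sha256()
start = time.perf_counter()
for _ in range(64):
    digest.update(chunk)
elapsed = time.perf_counter() - start

print(f"{total / elapsed / 1e9:.2f} GB/s single-thread SHA-256")
```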

Node.js V8 Web Tooling Benchmark

Node.js V8 Web Tooling Benchmark - runs/s, More Is Better - Ubuntu 22.10: 26.33 (SE +/- 0.17, N = 3)

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 2021.01.31 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 - samples/s, More Is Better - Ubuntu 22.10: 859746667 (SE +/- 10982623.14, N = 3) - (CC) gcc options: -O3 -pthread -lm -lc -lliquid

ClickHouse

100M Rows Web Analytics Dataset, First Run / Cold Cache

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache - Queries Per Minute, Geo Mean, More Is Better - Ubuntu 22.10: 301.66 (SE +/- 2.14, N = 15, MIN: 24.67 / MAX: 30000) - ClickHouse server version 22.5.4.19 (official build)

ClickHouse

100M Rows Web Analytics Dataset, Second Run

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run - Queries Per Minute, Geo Mean, More Is Better - Ubuntu 22.10: 307.65 (SE +/- 1.50, N = 15, MIN: 24.43 / MAX: 30000) - ClickHouse server version 22.5.4.19 (official build)

ClickHouse

100M Rows Web Analytics Dataset, Third Run

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run - Queries Per Minute, Geo Mean, More Is Better - Ubuntu 22.10: 308.97 (SE +/- 1.54, N = 15, MIN: 24.68 / MAX: 30000) - ClickHouse server version 22.5.4.19 (official build)

Apache Spark 3.3 - Seconds, Fewer Is Better

Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time - Ubuntu 22.10: 1.99 (SE +/- 0.01, N = 3)
Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark - Ubuntu 22.10: 51.76 (SE +/- 0.18, N = 3)
Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe - Ubuntu 22.10: 3.45 (SE +/- 0.03, N = 3)
Row Count: 1000000 - Partitions: 100 - Group By Test Time - Ubuntu 22.10: 2.63 (SE +/- 0.02, N = 3)
Row Count: 1000000 - Partitions: 100 - Repartition Test Time - Ubuntu 22.10: 1.02 (SE +/- 0.01, N = 3)
Row Count: 1000000 - Partitions: 100 - Inner Join Test Time - Ubuntu 22.10: 0.93 (SE +/- 0.03, N = 3)
Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time - Ubuntu 22.10: 0.77 (SE +/- 0.02, N = 3)
Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time - Ubuntu 22.10: 2.03 (SE +/- 0.02, N = 15)
Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark - Ubuntu 22.10: 52.24 (SE +/- 0.06, N = 15)
Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe - Ubuntu 22.10: 3.26 (SE +/- 0.04, N = 15)
Row Count: 1000000 - Partitions: 500 - Group By Test Time - Ubuntu 22.10: 2.44 (SE +/- 0.01, N = 15)
Row Count: 1000000 - Partitions: 500 - Repartition Test Time - Ubuntu 22.10: 1.09 (SE +/- 0.01, N = 15)
Row Count: 1000000 - Partitions: 500 - Inner Join Test Time - Ubuntu 22.10: 0.95 (SE +/- 0.02, N = 15)
Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time - Ubuntu 22.10: 0.79 (SE +/- 0.01, N = 15)
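
The "Calculate Pi" cases above are a classic embarrassingly parallel Spark job. A minimal PySpark sketch of that style of workload, assuming pyspark is installed locally (the sample count is illustrative and this is not the exact job the test profile runs):

```python
import random
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pi-sketch").getOrCreate()
sc = spark.sparkContext

SAMPLES = 10_000_000
PARTITIONS = 100  # mirrors the "Partitions: 100" configuration above

def inside(_):
    # One Monte Carlo dart throw into the unit square.
    x, y = random.random(), random.random()
    return 1 if x * x + y * y <= 1.0 else 0

hits = sc.parallelize(range(SAMPLES), PARTITIONS).map(inside).sum()
print("pi ~=", 4.0 * hits / SAMPLES)
spark.stop()
```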

FinanceBench

Benchmark: Repo OpenMP

FinanceBench 2016-07-25 - Benchmark: Repo OpenMP - ms, Fewer Is Better - Ubuntu 22.10: 19315.83 (SE +/- 17.57, N = 3) - (CXX) g++ options: -O3 -march=native -fopenmp

FinanceBench

Benchmark: Bonds OpenMP

FinanceBench 2016-07-25 - Benchmark: Bonds OpenMP - ms, Fewer Is Better - Ubuntu 22.10: 30942.01 (SE +/- 28.94, N = 3) - (CXX) g++ options: -O3 -march=native -fopenmp

GROMACS

Implementation: MPI CPU - Input: water_GMX50_bare

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare - Ns Per Day, More Is Better - Ubuntu 22.10: 1.414 (SE +/- 0.002, N = 3) - (CXX) g++ options: -O3

HammerDB - MariaDB 10.9.3 - New Orders Per Minute (NOPM) and Transactions Per Minute (TPM), More Is Better - (CXX) g++ options: -pie -fPIC -fstack-protector -O2 -lnuma -lpcre2-8 -lcrypt -lsystemd -lz -lm -lssl -lcrypto -lpthread -ldl

Virtual Users: 8 - Warehouses: 100 - Ubuntu 22.10: 38236 NOPM; 89121 TPM
Virtual Users: 8 - Warehouses: 250 - Ubuntu 22.10: 31038 NOPM; 72163 TPM
Virtual Users: 16 - Warehouses: 100 - Ubuntu 22.10: 37182 NOPM; 86277 TPM
Virtual Users: 16 - Warehouses: 250 - Ubuntu 22.10: 37140 NOPM; 86541 TPM
Virtual Users: 32 - Warehouses: 100 - Ubuntu 22.10: 38008 NOPM; 88315 TPM
Virtual Users: 32 - Warehouses: 250 - Ubuntu 22.10: 35682 NOPM; 82861 TPM
Virtual Users: 64 - Warehouses: 100 - Ubuntu 22.10: 39063 NOPM; 90768 TPM
Virtual Users: 64 - Warehouses: 250 - Ubuntu 22.10: 38440 NOPM; 89332 TPM

TensorFlow 2.10 - images/sec, More Is Better

Device: CPU - Batch Size: 16 - Model: AlexNet - Ubuntu 22.10: 162.47 (SE +/- 0.36, N = 3)
Device: CPU - Batch Size: 32 - Model: AlexNet - Ubuntu 22.10: 206.94 (SE +/- 0.43, N = 3)
Device: CPU - Batch Size: 64 - Model: AlexNet - Ubuntu 22.10: 235.17 (SE +/- 0.42, N = 3)
Device: CPU - Batch Size: 256 - Model: AlexNet - Ubuntu 22.10: 256.55 (SE +/- 0.32, N = 3)
Device: CPU - Batch Size: 512 - Model: AlexNet - Ubuntu 22.10: 264.59 (SE +/- 0.08, N = 3)
Device: CPU - Batch Size: 16 - Model: GoogLeNet - Ubuntu 22.10: 117.39 (SE +/- 0.12, N = 3)
Device: CPU - Batch Size: 16 - Model: ResNet-50 - Ubuntu 22.10: 39.70 (SE +/- 0.32, N = 3)
Device: CPU - Batch Size: 32 - Model: GoogLeNet - Ubuntu 22.10: 113.16 (SE +/- 0.44, N = 3)
Device: CPU - Batch Size: 32 - Model: ResNet-50 - Ubuntu 22.10: 38.51 (SE +/- 0.08, N = 3)
Device: CPU - Batch Size: 64 - Model: GoogLeNet - Ubuntu 22.10: 111.20 (SE +/- 0.22, N = 3)
Device: CPU - Batch Size: 64 - Model: ResNet-50 - Ubuntu 22.10: 37.68 (SE +/- 0.05, N = 3)
Device: CPU - Batch Size: 256 - Model: GoogLeNet - Ubuntu 22.10: 109.10 (SE +/- 0.20, N = 3)
Device: CPU - Batch Size: 256 - Model: ResNet-50 - Ubuntu 22.10: 37.06 (SE +/- 0.02, N = 3)
Device: CPU - Batch Size: 512 - Model: GoogLeNet - Ubuntu 22.10: 109.34 (SE +/- 0.36, N = 3)
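
The TensorFlow figures are inference throughput in images/sec at the listed batch sizes. A hedged sketch of measuring the same metric with a stock Keras model on random data; AlexNet and GoogLeNet are not bundled with tf.keras, so ResNet-50 stands in here and the run counts are illustrative:

```python
import time
import numpy as np
import tensorflow as tf

BATCH = 32
model = tf.keras.applications.ResNet50(weights=None)  # untrained weights are fine for throughput
images = np.random.rand(BATCH, 224, 224, 3).astype("float32")

model.predict(images, verbose=0)          # warm-up pass
runs = 10
start = time.perf_counter()
for _ in range(runs):
    model.predict(images, verbose=0)
elapsed = time.perf_counter() - start
print(f"{runs * BATCH / elapsed:.1f} images/sec at batch {BATCH}")
```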

SQLite Speedtest

Timed Time - Size 1,000

SQLite Speedtest 3.30 - Timed Time - Size 1,000 - Seconds, Fewer Is Better - Ubuntu 22.10: 32.81 (SE +/- 0.03, N = 3) - (CC) gcc options: -O2 -lz
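
SQLite Speedtest runs a long mix of insert, select, and update queries. A much smaller Python sqlite3 sketch in the same spirit, timing bulk inserts and a scan against an in-memory database (the row count is arbitrary):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

rows = [(i, f"value-{i}") for i in range(1_000_000)]
start = time.perf_counter()
with conn:  # single transaction, as bulk benchmarks do
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
count = conn.execute("SELECT COUNT(*) FROM t WHERE val LIKE 'value-9%'").fetchone()[0]
print(f"{count} matching rows, {time.perf_counter() - start:.2f} seconds total")
```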

RawTherapee

Total Benchmark Time

RawTherapee - Total Benchmark Time - Seconds, Fewer Is Better - Ubuntu 22.10: 32.35 (SE +/- 0.11, N = 3) - RawTherapee version 5.8, command line

Neural Magic DeepSparse 1.1 - items/sec (More Is Better) and ms/batch (Fewer Is Better)

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - Ubuntu 22.10: 12.05 items/sec (SE +/- 0.04, N = 3); 967.89 ms/batch (SE +/- 3.09, N = 3)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream - Ubuntu 22.10: 11.10 items/sec (SE +/- 0.03, N = 3); 90.05 ms/batch (SE +/- 0.21, N = 3)
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream - Ubuntu 22.10: 50.40 items/sec (SE +/- 0.20, N = 3); 236.60 ms/batch (SE +/- 0.71, N = 3)
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream - Ubuntu 22.10: 32.18 items/sec (SE +/- 0.24, N = 3); 31.07 ms/batch (SE +/- 0.23, N = 3)
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream - Ubuntu 22.10: 74.69 items/sec (SE +/- 0.23, N = 3); 159.94 ms/batch (SE +/- 0.49, N = 3)
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream - Ubuntu 22.10: 56.83 items/sec (SE +/- 0.15, N = 3); 17.59 ms/batch (SE +/- 0.05, N = 3)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream - Ubuntu 22.10: 153.13 items/sec (SE +/- 0.21, N = 3); 78.24 ms/batch (SE +/- 0.05, N = 3)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream - Ubuntu 22.10: 96.59 items/sec (SE +/- 0.14, N = 3); 10.35 ms/batch (SE +/- 0.01, N = 3)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - Ubuntu 22.10: 103.90 items/sec (SE +/- 0.23, N = 3); 115.35 ms/batch (SE +/- 0.26, N = 3)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - Ubuntu 22.10: 80.03 items/sec (SE +/- 0.05, N = 3); 12.49 ms/batch (SE +/- 0.01, N = 3)
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream - Ubuntu 22.10: 52.54 items/sec (SE +/- 0.07, N = 3); 227.14 ms/batch (SE +/- 0.46, N = 3)
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream - Ubuntu 22.10: 40.32 items/sec (SE +/- 0.14, N = 3); 24.80 ms/batch (SE +/- 0.09, N = 3)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - Ubuntu 22.10: 12.19 items/sec (SE +/- 0.14, N = 3); 964.44 ms/batch (SE +/- 6.80, N = 3)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream - Ubuntu 22.10: 11.05 items/sec (SE +/- 0.03, N = 3); 90.48 ms/batch (SE +/- 0.21, N = 3)

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1 - Ops/sec, More Is Better - Ubuntu 22.10: 3124943.72 (SE +/- 65972.61, N = 15) - (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 10:1

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 10:1 - Ops/sec, More Is Better - Ubuntu 22.10: 3127334.75 (SE +/- 38170.43, N = 4) - (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 - Ops/sec, More Is Better - Ubuntu 22.10: 3456218.18 (SE +/- 41614.46, N = 3) - (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
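
memtier_benchmark drives a Redis server with many pipelined clients at the listed set:get ratios. A small redis-py sketch of the 1:10 pattern from a single synchronous client, assuming a Redis instance is listening on localhost:6379 (the key space and op count are illustrative, and throughput will be far below the multi-client figures above):

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379)
OPS = 100_000

start = time.perf_counter()
for i in range(OPS):
    key = f"key:{i % 1000}"
    if i % 11 == 0:          # roughly a 1:10 set-to-get ratio
        r.set(key, "payload")
    else:
        r.get(key)
elapsed = time.perf_counter() - start
print(f"{OPS / elapsed:.0f} ops/sec from a single synchronous client")
```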

Stress-NG 0.14.06 - Bogo Ops/s, More Is Better - (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Test: MMAP - Ubuntu 22.10: 742.41 (SE +/- 1.45, N = 3)
Test: NUMA - Ubuntu 22.10: 681.80 (SE +/- 1.84, N = 3)
Test: Futex - Ubuntu 22.10: 3538590.31 (SE +/- 33712.97, N = 15)
Test: MEMFD - Ubuntu 22.10: 2049.37 (SE +/- 18.36, N = 3)
Test: Mutex - Ubuntu 22.10: 16748823.88 (SE +/- 110264.57, N = 15)
Test: Atomic - Ubuntu 22.10: 344361.75 (SE +/- 7871.09, N = 15)
Test: Crypto - Ubuntu 22.10: 42378.79 (SE +/- 294.46, N = 15)
Test: Malloc - Ubuntu 22.10: 36241645.59 (SE +/- 149024.22, N = 3)
Test: Forking - Ubuntu 22.10: 113514.43 (SE +/- 721.90, N = 3)
Test: IO_uring - Ubuntu 22.10: 27676.33 (SE +/- 56.03, N = 3)
Test: SENDFILE - Ubuntu 22.10: 588014.94 (SE +/- 3074.02, N = 3)
Test: CPU Cache - Ubuntu 22.10: 98.77 (SE +/- 1.23, N = 15)
Test: CPU Stress - Ubuntu 22.10: 51634.54 (SE +/- 538.64, N = 3)
Test: Semaphores - Ubuntu 22.10: 3538392.41 (SE +/- 1451.53, N = 3)
Test: Matrix Math - Ubuntu 22.10: 109789.42 (SE +/- 588.05, N = 3)
Test: Vector Math - Ubuntu 22.10: 119832.03 (SE +/- 966.54, N = 9)
Test: x86_64 RdRand - Ubuntu 22.10: 82767.76 (SE +/- 9.58, N = 3)
Test: Memory Copying - Ubuntu 22.10: 7385.39 (SE +/- 10.70, N = 3)
Test: Socket Activity - Ubuntu 22.10: 24287.07 (SE +/- 568.16, N = 12)
Test: Context Switching - Ubuntu 22.10: 14703175.32 (SE +/- 181309.12, N = 4)
Test: Glibc C String Functions - Ubuntu 22.10: 4307014.94 (SE +/- 40661.15, N = 15)
Test: Glibc Qsort Data Sorting - Ubuntu 22.10: 411.39 (SE +/- 0.68, N = 3)
Test: System V Message Passing - Ubuntu 22.10: 13432520.51 (SE +/- 181986.71, N = 3)

spaCy

Model: en_core_web_lg

spaCy 3.4.1 - Model: en_core_web_lg - tokens/sec, More Is Better - Ubuntu 22.10: 20855 (SE +/- 31.07, N = 3)

spaCy

Model: en_core_web_trf

spaCy 3.4.1 - Model: en_core_web_trf - tokens/sec, More Is Better - Ubuntu 22.10: 2523 (SE +/- 23.73, N = 3)
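
The spaCy results are tokens processed per second for the two pipelines. A short sketch of measuring that for en_core_web_lg, assuming the model has been downloaded; the sample corpus here is a placeholder, not the benchmark's input text:

```python
import time
import spacy

nlp = spacy.load("en_core_web_lg")
texts = ["The quick brown fox jumps over the lazy dog."] * 2_000  # placeholder corpus

start = time.perf_counter()
tokens = sum(len(doc) for doc in nlp.pipe(texts, batch_size=64))
elapsed = time.perf_counter() - start
print(f"{tokens / elapsed:.0f} tokens/sec")
```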

Blender 3.3 - Compute: CPU-Only - Seconds, Fewer Is Better

Blend File: BMW27 - Ubuntu 22.10: 51.61 (SE +/- 0.16, N = 3)
Blend File: Classroom - Ubuntu 22.10: 147.64 (SE +/- 0.34, N = 3)
Blend File: Fishy Cat - Ubuntu 22.10: 75.32 (SE +/- 0.11, N = 3)
Blend File: Barbershop - Ubuntu 22.10: 576.35 (SE +/- 0.42, N = 3)
Blend File: Pabellon Barcelona - Ubuntu 22.10: 178.95 (SE +/- 0.13, N = 3)

ctx_clock

Context Switch Time

ctx_clock - Context Switch Time - Clocks, Fewer Is Better - Ubuntu 22.10: 132 (SE +/- 0.00, N = 3)

OpenVINO 2022.2.dev - Device: CPU - FPS (More Is Better) and ms latency (Fewer Is Better) - (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Model: Face Detection FP16 - Ubuntu 22.10: 5.08 FPS (SE +/- 0.01, N = 3); 1570.46 ms (SE +/- 3.93, N = 3, MIN: 1396.06 / MAX: 1856.59)
Model: Person Detection FP16 - Ubuntu 22.10: 3.58 FPS (SE +/- 0.01, N = 3); 2222.92 ms (SE +/- 6.08, N = 3, MIN: 1682.82 / MAX: 2975.44)
Model: Person Detection FP32 - Ubuntu 22.10: 3.55 FPS (SE +/- 0.02, N = 3); 2238.02 ms (SE +/- 5.80, N = 3, MIN: 1692.38 / MAX: 2991.49)
Model: Vehicle Detection FP16 - Ubuntu 22.10: 372.85 FPS (SE +/- 1.93, N = 3); 21.43 ms (SE +/- 0.11, N = 3, MIN: 12.24 / MAX: 94.19)
Model: Face Detection FP16-INT8 - Ubuntu 22.10: 18.23 FPS (SE +/- 0.01, N = 3); 438.23 ms (SE +/- 0.21, N = 3, MIN: 270.29 / MAX: 1085.79)
Model: Vehicle Detection FP16-INT8 - Ubuntu 22.10: 916.05 FPS (SE +/- 0.85, N = 3); 8.72 ms (SE +/- 0.01, N = 3, MIN: 5.93 / MAX: 54.15)
Model: Weld Porosity Detection FP16 - Ubuntu 22.10: 468.50 FPS (SE +/- 0.50, N = 3); 51.14 ms (SE +/- 0.05, N = 3, MIN: 22.47 / MAX: 182.99)
Model: Machine Translation EN To DE FP16 - Ubuntu 22.10: 63.48 FPS (SE +/- 0.09, N = 3); 125.97 ms (SE +/- 0.17, N = 3, MIN: 91.09 / MAX: 325.99)
Model: Weld Porosity Detection FP16-INT8 - Ubuntu 22.10: 1638.99 FPS (SE +/- 1.15, N = 3); 14.63 ms (SE +/- 0.01, N = 3, MIN: 6.68 / MAX: 120.59)
Model: Person Vehicle Bike Detection FP16 - Ubuntu 22.10: 728.91 FPS (SE +/- 2.34, N = 3); 10.96 ms (SE +/- 0.03, N = 3, MIN: 7.62 / MAX: 52.6)
Model: Age Gender Recognition Retail 0013 FP16 - Ubuntu 22.10: 14593.71 FPS (SE +/- 10.91, N = 3); 1.64 ms (SE +/- 0.00, N = 3, MIN: 0.87 / MAX: 9.06)
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Ubuntu 22.10: 33018.17 FPS (SE +/- 40.02, N = 3); 0.72 ms (SE +/- 0.00, N = 3, MIN: 0.42 / MAX: 4.5)
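
The OpenVINO numbers pair throughput (FPS) with per-request latency (ms) for each model on the CPU plugin. A hedged sketch of a synchronous latency measurement with the 2022-era Python runtime API; the model file name and input shape below are placeholders, not taken from the test profile:

```python
import time
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model("face-detection.xml", "CPU")  # placeholder IR model
request = compiled.create_infer_request()
frame = np.random.rand(1, 3, 384, 672).astype(np.float32)   # placeholder input shape

latencies = []
for _ in range(100):
    start = time.perf_counter()
    request.infer({0: frame})                                # one synchronous inference
    latencies.append((time.perf_counter() - start) * 1000)
print(f"median latency: {sorted(latencies)[len(latencies) // 2]:.2f} ms")
```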

IndigoBench

Acceleration: CPU - Scene: Bedroom

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom - M samples/s, More Is Better - Ubuntu 22.10: 4.051 (SE +/- 0.040, N = 3)

IndigoBench

Acceleration: CPU - Scene: Supercar

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar - M samples/s, More Is Better - Ubuntu 22.10: 11.80 (SE +/- 0.02, N = 3)

PyBench

Total For Average Test Times

PyBench 2018-02-16 - Total For Average Test Times - Milliseconds, Fewer Is Better - Ubuntu 22.10: 474 (SE +/- 0.33, N = 3)

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better

Benchmark: go - Ubuntu 22.10: 112 (SE +/- 0.33, N = 3)
Benchmark: 2to3 - Ubuntu 22.10: 158 (SE +/- 0.58, N = 3)
Benchmark: chaos - Ubuntu 22.10: 44.9 (SE +/- 0.13, N = 3)
Benchmark: float - Ubuntu 22.10: 46.7 (SE +/- 0.10, N = 3)
Benchmark: nbody - Ubuntu 22.10: 61.4 (SE +/- 0.32, N = 3)
Benchmark: pathlib - Ubuntu 22.10: 8.77 (SE +/- 0.02, N = 3)
Benchmark: raytrace - Ubuntu 22.10: 208 (SE +/- 0.88, N = 3)
Benchmark: json_loads - Ubuntu 22.10: 11.5 (SE +/- 0.00, N = 3)
Benchmark: crypto_pyaes - Ubuntu 22.10: 50.7 (SE +/- 0.15, N = 3)
Benchmark: regex_compile - Ubuntu 22.10: 82.1 (SE +/- 0.30, N = 3)
Benchmark: python_startup - Ubuntu 22.10: 7.39 (SE +/- 0.04, N = 3)
Benchmark: django_template - Ubuntu 22.10: 22.2 (SE +/- 0.06, N = 3)
Benchmark: pickle_pure_python - Ubuntu 22.10: 197 (SE +/- 0.67, N = 3)
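
PyPerformance reports mean runtimes of small pure-Python workloads. The same kind of measurement can be taken directly with timeit, shown here for a json_loads-style case; the document being parsed is a placeholder, not the suite's fixture data:

```python
import json
import timeit

DOC = json.dumps({"values": list(range(1000)), "name": "placeholder"})

# Best of five repeats of 1000 calls, reported per call in milliseconds.
per_call = min(timeit.repeat(lambda: json.loads(DOC), number=1000, repeat=5)) / 1000
print(f"{per_call * 1000:.3f} ms per json.loads call")
```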

Natron

Input: Spaceship

Natron 2.4.3 - Input: Spaceship - FPS, More Is Better - Ubuntu 22.10: 6.7 (SE +/- 0.03, N = 3)

ONNX Runtime

Model: GPT-2 - Device: CPU - Executor: Standard

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: GPT-2 - Device: CPU - Executor: StandardUbuntu 22.102K4K6K8K10KSE +/- 19.87, N = 3105781. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: yolov4 - Device: CPU - Executor: Standard

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: yolov4 - Device: CPU - Executor: StandardUbuntu 22.10150300450600750SE +/- 0.29, N = 36871. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: bertsquad-12 - Device: CPU - Executor: Standard

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: bertsquad-12 - Device: CPU - Executor: StandardUbuntu 22.1030060090012001500SE +/- 0.60, N = 312101. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: fcn-resnet101-11 - Device: CPU - Executor: Standard

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: fcn-resnet101-11 - Device: CPU - Executor: StandardUbuntu 22.10306090120150SE +/- 0.17, N = 31331. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard - Ubuntu 22.10: 598 Inferences Per Minute (More Is Better; SE +/- 0.17, N = 3). 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime

Model: super-resolution-10 - Device: CPU - Executor: Standard

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard - Ubuntu 22.10: 6842 Inferences Per Minute (More Is Better; SE +/- 2.84, N = 3). 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt
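
The ONNX Runtime results above report CPU inference throughput with the Standard executor. The sketch below shows how such a throughput number can be reproduced in spirit with the onnxruntime Python API; it is not the test profile's harness, and the model path and input shape are hypothetical placeholders:

# Illustrative sketch only (not the test profile's harness): measuring CPU
# inference throughput with the onnxruntime Python API. The model path and
# input shape are hypothetical placeholders.
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("super_resolution.onnx",    # hypothetical model file
                            providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
x = np.random.rand(1, 1, 224, 224).astype(np.float32)   # assumed input shape

runs = 200
start = time.perf_counter()
for _ in range(runs):
    sess.run(None, {inp.name: x})
elapsed = time.perf_counter() - start
print(f"{runs / elapsed * 60:.0f} inferences per minute")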

Appleseed

Scene: Emily

Appleseed 2.0 Beta - Scene: Emily - Ubuntu 22.10: 164.94 Seconds (Fewer Is Better)

Appleseed

Scene: Disney Material

Appleseed 2.0 Beta - Scene: Disney Material - Ubuntu 22.10: 84.86 Seconds (Fewer Is Better)

Appleseed

Scene: Material Tester

Appleseed 2.0 Beta - Scene: Material Tester - Ubuntu 22.10: 90.78 Seconds (Fewer Is Better)

PHPBench

PHP Benchmark Suite

PHPBench 0.8.1 - PHP Benchmark Suite - Ubuntu 22.10: 1617596 Score (More Is Better; SE +/- 2367.31, N = 3)

EnCodec

Target Bandwidth: 3 kbps

EnCodec 0.1.1 - Target Bandwidth: 3 kbps - Ubuntu 22.10: 19.18 Seconds (Fewer Is Better; SE +/- 0.17, N = 3)

EnCodec

Target Bandwidth: 6 kbps

EnCodec 0.1.1 - Target Bandwidth: 6 kbps - Ubuntu 22.10: 19.30 Seconds (Fewer Is Better; SE +/- 0.23, N = 3)

EnCodec

Target Bandwidth: 24 kbps

EnCodec 0.1.1 - Target Bandwidth: 24 kbps - Ubuntu 22.10: 21.86 Seconds (Fewer Is Better; SE +/- 0.24, N = 3)

EnCodec

Target Bandwidth: 1.5 kbps

EnCodec 0.1.1 - Target Bandwidth: 1.5 kbps - Ubuntu 22.10: 18.55 Seconds (Fewer Is Better; SE +/- 0.18, N = 3)
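
The EnCodec runs above measure how long it takes to compress the same audio input at different target bandwidths. Below is a hedged sketch of what a single run looks like, assuming the facebookresearch/encodec Python API (EncodecModel.encodec_model_24khz, set_target_bandwidth, encode) with a placeholder input file; this is not the Phoronix test profile's own script:

# Hedged sketch assuming the encodec package API; "sample.wav" and the 6 kbps
# target are placeholders, and this is not the Phoronix test profile's script.
import time
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)                  # kbps, as in the 6 kbps run above

wav, sr = torchaudio.load("sample.wav")          # hypothetical input file
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

start = time.perf_counter()
with torch.no_grad():
    encoded_frames = model.encode(wav)
print(f"encode time: {time.perf_counter() - start:.2f} s")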

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State - Ubuntu 22.10: 0.001 Seconds (Fewer Is Better; SE +/- 0.000, N = 15)

PyHPC Benchmarks

Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing - Ubuntu 22.10: 0.004 Seconds (Fewer Is Better; SE +/- 0.000, N = 15)
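
The PyHPC numbers above time vectorized NumPy kernels over a fixed problem size of 16384 elements across repeated runs. The sketch below is not the actual equation-of-state kernel; under that stated assumption, it only illustrates how a vectorized kernel of this size is timed:

# Not the PyHPC equation-of-state kernel; a toy vectorized NumPy kernel over a
# 16384-element problem, timed over 15 runs to match the N = 15 above.
import time
import numpy as np

n = 16_384
rng = np.random.default_rng(0)
s, t, p = (rng.random(n) for _ in range(3))   # salinity/temperature/pressure stand-ins

def toy_state(s, t, p):
    # Arbitrary polynomial mix of the inputs, purely for illustration.
    return 1000.0 + 0.8 * s - 0.2 * t + 1e-4 * p * t - 5e-3 * t**2

runs = []
for _ in range(15):
    start = time.perf_counter()
    toy_state(s, t, p)
    runs.append(time.perf_counter() - start)
print(f"best: {min(runs) * 1e3:.3f} ms")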

Chaos Group V-RAY

Mode: CPU

Chaos Group V-RAY 5.02 - Mode: CPU - Ubuntu 22.10: 28734 vsamples (More Is Better; SE +/- 190.70, N = 3)

CloudSuite Graph Analytics

CloudSuite Graph Analytics - Ubuntu 22.10: 9985 ms (Fewer Is Better; SE +/- 63.87, N = 3)

CloudSuite In-Memory Analytics

CloudSuite In-Memory Analytics - Ubuntu 22.10: 10160 ms (Fewer Is Better; SE +/- 47.35, N = 3)

nginx

Connections: 100

nginx 1.23.2 - Connections: 100 - Ubuntu 22.10: 204910.46 Requests Per Second (More Is Better; SE +/- 1164.13, N = 3). 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 200

nginx 1.23.2 - Connections: 200 - Ubuntu 22.10: 205841.24 Requests Per Second (More Is Better; SE +/- 636.13, N = 3). 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 500

nginx 1.23.2 - Connections: 500 - Ubuntu 22.10: 203069.78 Requests Per Second (More Is Better; SE +/- 492.12, N = 3). 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 1000

nginx 1.23.2 - Connections: 1000 - Ubuntu 22.10: 192021.95 Requests Per Second (More Is Better; SE +/- 624.40, N = 3). 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
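
The nginx results above report sustained requests per second at four client connection counts; the build flags in the footnotes indicate a wrk-style LuaJIT load generator produced the traffic. As a crude, purely illustrative stand-in (not wrk, and with a hypothetical URL and request count), the shape of the measurement is:

# Crude illustration only, not the wrk-based load generator used above: issue
# GETs from a pool of worker threads and report requests per second. The URL,
# connection count, and request total are hypothetical.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8089/"   # hypothetical nginx test endpoint
CONNECTIONS = 100                # analogous to the "Connections" parameter
REQUESTS = 10_000

def fetch(_):
    with urllib.request.urlopen(URL) as resp:
        resp.read()

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONNECTIONS) as pool:
    list(pool.map(fetch, range(REQUESTS)))
elapsed = time.perf_counter() - start
print(f"{REQUESTS / elapsed:.0f} requests/sec")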

miniBUDE

Implementation: OpenMP - Input Deck: BM1

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 - Ubuntu 22.10: 414.59 GFInst/s (More Is Better; SE +/- 0.12, N = 3). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE

Implementation: OpenMP - Input Deck: BM1

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 - Ubuntu 22.10: 16.58 Billion Interactions/s (More Is Better; SE +/- 0.00, N = 3). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE

Implementation: OpenMP - Input Deck: BM2

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 - Ubuntu 22.10: 570.62 GFInst/s (More Is Better; SE +/- 1.24, N = 3). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE

Implementation: OpenMP - Input Deck: BM2

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 - Ubuntu 22.10: 22.83 Billion Interactions/s (More Is Better; SE +/- 0.05, N = 3). 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
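
miniBUDE reports each run in two units, GFInst/s and Billion Interactions/s. Dividing one by the other gives the implied floating-point instruction count per atom-pair interaction, and both input decks agree at roughly 25, so the figures above are internally consistent:

# Sanity check on the two miniBUDE metrics reported above: GFInst/s divided by
# Billion Interactions/s gives FP instructions per atom-pair interaction.
bm1 = 414.59 / 16.58   # ~25.0 for input deck BM1
bm2 = 570.62 / 22.83   # ~25.0 for input deck BM2
print(f"BM1: {bm1:.2f} FP inst/interaction, BM2: {bm2:.2f} FP inst/interaction")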


Phoronix Test Suite v10.8.4