Xeon Gold Cascade Lake Refresh LTS Linux Benchmarks

Intel Xeon Gold 6226R testing with a Supermicro X11SPL-F v1.02 (3.1 BIOS) and ASPEED on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2212171-PTS-XEONGOLD08
Test runs:
  Linux 5.10.130: run December 14 2022, test duration 1 Day, 44 Minutes
  Linux 5.15.83: run December 15 2022, test duration 1 Day, 1 Hour, 38 Minutes
  Linux 6.1: run December 16 2022, test duration 1 Day, 2 Hours, 3 Minutes


System Details
  Processor: Intel Xeon Gold 6226R @ 3.90GHz (16 Cores / 32 Threads)
  Motherboard: Supermicro X11SPL-F v1.02 (3.1 BIOS)
  Chipset: Intel Sky Lake-E DMI3 Registers
  Memory: 192GB
  Disk: 280GB INTEL SSDPED1D280GA
  Graphics: ASPEED
  Monitor: VE228
  Network: 2 x Intel I210
  OS: Ubuntu 20.10
  Kernels: 5.10.130-0510130-generic (x86_64), 5.15.83-051583-generic (x86_64), 6.1.0-phx (x86_64)
  Desktop: GNOME Shell 3.38.1
  Display Server: X Server 1.20.9
  Compiler: GCC 10.3.0
  File-System: ext4
  Screen Resolution: 1920x1080

System Logs
  - Transparent Huge Pages: madvise
  - GCC configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-poYruo/gcc-10-10.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-poYruo/gcc-10-10.3.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: intel_cpufreq performance
  - CPU Microcode: 0x5003102
  - Python 3.8.10

Security mitigations:
  - Linux 5.10.130: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Vulnerable: Clear buffers attempted no microcode; SMT vulnerable + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Vulnerable: eIBRS with unprivileged eBPF + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
  - Linux 5.15.83: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Vulnerable: Clear buffers attempted no microcode; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
  - Linux 6.1: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Vulnerable: Clear buffers attempted no microcode; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled

Result Overview (OpenBenchmarking.org / Phoronix Test Suite): relative performance of Linux 5.10.130, Linux 5.15.83, and Linux 6.1, chart scale 100% to 130%. Benchmarks covered: ctx_clock, Dragonflydb, JPEG XL Decoding libjxl, EnCodec, PostgreSQL, GraphicsMagick, C-Blosc, Stress-NG, AOM AV1, JPEG XL libjxl, Facebook RocksDB, Timed Linux Kernel Compilation, Stargate Digital Audio Workstation, spaCy, ClickHouse, Timed Erlang/OTP Compilation, Timed Godot Game Engine Compilation, libavif avifenc, Mobile Neural Network, Timed Node.js Compilation, FLAC Audio Encoding, Timed PHP Compilation, miniBUDE, Timed CPython Compilation, 7-Zip Compression, NCNN, BRL-CAD, Neural Magic DeepSparse, srsRAN, SVT-AV1, nekRS, OpenVINO, Y-Cruncher, Xmrig, Cpuminer-Opt, WebP Image Encode, OpenRadioss, LAMMPS Molecular Dynamics Simulator, TensorFlow, Numenta Anomaly Benchmark, Blender, oneDNN, ASTC Encoder, Primesieve, Aircrack-ng, OSPRay Studio, OpenFOAM, Natron.
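Overall scores of the kind shown in the result overview are typically geometric means of per-test ratios, so that no single benchmark dominates. A minimal sketch of that calculation, using hypothetical normalized ratios rather than figures from this result file:

```python
import math

def geometric_mean(ratios):
    # n-th root of the product, computed in log space for numerical stability
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical per-test ratios relative to a baseline kernel (baseline = 1.00);
# illustrative values only, not taken from this result file.
print(round(geometric_mean([1.07, 0.95, 1.22]), 3))
```

Because the mean is taken in log space, a test that doubles and a test that halves cancel out exactly, which is the property that makes the geometric mean suitable for ratio data.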

[Full side-by-side result table: Xeon Gold Cascade Lake Refresh LTS Linux Benchmarks, Linux 5.10.130 / Linux 5.15.83 / Linux 6.1, covering all tests listed in the result overview. Individual per-test results follow below.]

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06, Test: Context Switching. Bogo Ops/s, more is better. N = 3 runs per kernel.
  Linux 5.10.130: 5500706.76 (SE +/- 72191.04; Min: 5373588.8 / Max: 5623556.62)
  Linux 5.15.83: 2284154.16 (SE +/- 7040.94; Min: 2274662.3 / Max: 2297908.59)
  Linux 6.1: 2054658.64 (SE +/- 3492.53; Min: 2050421.46 / Max: 2061586.38)
Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
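Using the average Bogo Ops/s figures above, the context-switching regression on the newer kernels can be quantified with a short Python snippet (values copied from this result file):

```python
# Average Stress-NG context-switching scores (Bogo Ops/s) from this result file
results = {
    "Linux 5.10.130": 5500706.76,
    "Linux 5.15.83": 2284154.16,
    "Linux 6.1": 2054658.64,
}

baseline = results["Linux 5.10.130"]
for kernel, score in results.items():
    # Percentage change relative to the 5.10.130 baseline
    delta = (score / baseline - 1) * 100
    print(f"{kernel}: {score:,.2f} Bogo Ops/s ({delta:+.1f}% vs 5.10.130)")
```

This puts Linux 5.15.83 roughly 58% and Linux 6.1 roughly 63% below the 5.10.130 baseline for this particular stressor.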

Stress-NG 0.14.06, Test: System V Message Passing. Bogo Ops/s, more is better. N = 3 runs per kernel.
  Linux 5.10.130: 6490812.35 (SE +/- 23387.52; Min: 6447860.45 / Max: 6528328.21)
  Linux 5.15.83: 8257907.06 (SE +/- 4399.60; Min: 8249334.09 / Max: 8263910.41)
  Linux 6.1: 14896082.79 (SE +/- 26770.64; Min: 14845730.63 / Max: 14937022.63)
Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06, Test: MEMFD. Bogo Ops/s, more is better. N = 3 runs per kernel.
  Linux 5.10.130: 1008.62 (SE +/- 0.88; Min: 1007.17 / Max: 1010.2)
  Linux 5.15.83: 879.67 (SE +/- 0.39; Min: 878.89 / Max: 880.13)
  Linux 6.1: 667.82 (SE +/- 0.28; Min: 667.28 / Max: 668.19)
Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06, Test: Malloc. Bogo Ops/s, more is better. N = 3 runs per kernel.
  Linux 5.10.130: 22392194.28 (SE +/- 90978.10; Min: 22218615.87 / Max: 22526248.15)
  Linux 5.15.83: 20930090.96 (SE +/- 130302.14; Min: 20670365.41 / Max: 21078471.95)
  Linux 6.1: 15989079.77 (SE +/- 91731.80; Min: 15885717.96 / Max: 16172029.18)
Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06, Test: NUMA. Bogo Ops/s, more is better. N = 3 runs per kernel.
  Linux 5.10.130: 358.05 (SE +/- 2.82; Min: 354.56 / Max: 363.62)
  Linux 5.15.83: 335.58 (SE +/- 0.90; Min: 334.26 / Max: 337.3)
  Linux 6.1: 256.61 (SE +/- 1.78; Min: 254.57 / Max: 260.15)
Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
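ctx_clock itself reports raw clock cycles per switch. As a rough illustration of the underlying idea only (not how ctx_clock works internally, which is an assumption worth stating), a pipe ping-pong between two processes forces context switches on every round trip; this sketch measures wall time and includes pipe syscall overhead, so its numbers are not comparable to ctx_clock's cycle counts:

```python
import os
import time

def estimate_ctx_switch_ns(iters=5000):
    # Ping-pong one byte between parent and child over a pair of pipes;
    # every round trip forces at least two context switches.
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    pid = os.fork()
    if pid == 0:  # child: echo each byte straight back
        for _ in range(iters):
            os.read(r1, 1)
            os.write(w2, b"x")
        os._exit(0)
    start = time.perf_counter_ns()
    for _ in range(iters):
        os.write(w1, b"x")
        os.read(r2, 1)
    elapsed = time.perf_counter_ns() - start
    os.waitpid(pid, 0)
    # Each iteration involves at least two switches plus syscall overhead,
    # so this is an upper-bound estimate of the per-switch cost.
    return elapsed / (iters * 2)

if __name__ == "__main__":
    print(f"~{estimate_ctx_switch_ns():.0f} ns per context switch (rough)")
```

Requires a Unix-like system (os.fork); results vary heavily with CPU frequency scaling and scheduler behavior, which is exactly the sensitivity these kernel comparisons probe.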

ctx_clock, Context Switch Time. Clocks, fewer is better.
  Linux 5.10.130: 154
  Linux 5.15.83: 179 (SE +/- 0.67, N = 3; Min: 178 / Avg: 178.67 / Max: 180)
  Linux 6.1: 200

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool for database benchmarking. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15, Scaling Factor: 100, Clients: 500, Mode: Read Only. TPS, more is better. N = 3 runs per kernel.
  Linux 5.10.130: 370238 (SE +/- 956.06; Min: 369146.07 / Avg: 370238.36 / Max: 372143.66)
  Linux 5.15.83: 426428 (SE +/- 666.01; Min: 425223.9 / Avg: 426428.1 / Max: 427523.3)
  Linux 6.1: 467167 (SE +/- 757.53; Min: 465829.56 / Avg: 467166.81 / Max: 468452.2)
Compiled with: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL 15, Scaling Factor: 100, Clients: 500, Mode: Read Only, Average Latency. ms, fewer is better. N = 3 runs per kernel.
  Linux 5.10.130: 1.350 (SE +/- 0.003; Min: 1.34 / Max: 1.35)
  Linux 5.15.83: 1.173 (SE +/- 0.002; Min: 1.17 / Max: 1.18)
  Linux 6.1: 1.070 (SE +/- 0.002; Min: 1.07 / Max: 1.07)
Compiled with: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
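The reported average latencies are consistent with the TPS figures: in a closed-loop pgbench run with a fixed client count, average latency is approximately clients / TPS (Little's law). A quick sanity check using the 500-client read-only numbers above:

```python
# Average latency (ms) ~= clients / TPS * 1000 for a closed-loop pgbench run.
# TPS values from the 500-client read-only results in this file.
clients = 500
tps = {
    "Linux 5.10.130": 370238,
    "Linux 5.15.83": 426428,
    "Linux 6.1": 467167,
}
for kernel, rate in tps.items():
    latency_ms = clients / rate * 1000
    print(f"{kernel}: {latency_ms:.3f} ms")
```

The computed values match the reported 1.350 / 1.173 / 1.070 ms to the millisecond precision shown.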

Stress-NG

Stress-NG 0.14.06, Test: MMAP. Bogo Ops/s, more is better. N = 3 runs per kernel.
  Linux 5.10.130: 390.20 (SE +/- 0.74; Min: 388.73 / Max: 391.11)
  Linux 5.15.83: 381.23 (SE +/- 0.56; Min: 380.23 / Max: 382.15)
  Linux 6.1: 310.22 (SE +/- 2.76; Min: 304.73 / Max: 313.39)
Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06, Test: Mutex. Bogo Ops/s, more is better.
  Linux 5.10.130: 8606774.99 (SE +/- 10182.54, N = 3; Min: 8595011.27 / Max: 8627053.5)
  Linux 5.15.83: 9013626.15 (SE +/- 112044.15, N = 15; Min: 8407175.79 / Max: 9613737.32)
  Linux 6.1: 7375258.06 (SE +/- 24065.62, N = 3; Min: 7337092.63 / Max: 7419737.86)
Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06, Test: SENDFILE. Bogo Ops/s, more is better. N = 3 runs per kernel.
  Linux 5.10.130: 241610.68 (SE +/- 2449.06; Min: 236713.28 / Max: 244131.63)
  Linux 5.15.83: 285766.99 (SE +/- 2143.98; Min: 281479.56 / Max: 287968.72)
  Linux 6.1: 284966.04 (SE +/- 1947.11; Min: 281071.94 / Max: 286938.1)
Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

PostgreSQL

PostgreSQL 15, Scaling Factor: 100, Clients: 250, Mode: Read Only. TPS, more is better. N = 3 runs per kernel.
  Linux 5.10.130: 413949 (SE +/- 2810.03; Min: 410754.06 / Avg: 413948.98 / Max: 419550.58)
  Linux 5.15.83: 475175 (SE +/- 286.86; Min: 474846.05 / Avg: 475174.96 / Max: 475746.51)
  Linux 6.1: 487468 (SE +/- 5532.44; Min: 480940.25 / Avg: 487467.94 / Max: 498469.08)
Compiled with: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  Linux 5.10.130: 0.604 (SE +/- 0.004, N = 3; Min: 0.60 / Max: 0.61)
  Linux 5.15.83: 0.526 (SE +/- 0.000, N = 3; Min: 0.53 / Max: 0.53)
  Linux 6.1: 0.513 (SE +/- 0.006, N = 3; Min: 0.50 / Max: 0.52)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
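The read-only TPS and average-latency figures are consistent with each other via Little's law: pgbench is a closed-loop benchmark, so clients ≈ TPS × latency, which gives latency_ms ≈ clients / TPS × 1000. A quick cross-check using the reported numbers:

```python
def expected_latency_ms(clients, tps):
    """Little's law for a closed-loop benchmark: latency = clients / throughput."""
    return clients / tps * 1000

# Reported pgbench results for 250 read-only clients.
for kernel, tps, reported_ms in [
    ("Linux 5.10.130", 413949, 0.604),
    ("Linux 5.15.83", 475175, 0.526),
    ("Linux 6.1", 487468, 0.513),
]:
    est = expected_latency_ms(250, tps)
    print(f"{kernel}: estimated {est:.3f} ms vs reported {reported_ms} ms")
```

Each estimate lands on the reported average latency to three decimal places, which is a good sign the two metrics were captured from the same runs.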

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test profile is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 - CPU Threads: All (MP/s, More Is Better)
  Linux 5.10.130: 208.79 (SE +/- 1.38, N = 3; Min: 206.98 / Max: 211.51)
  Linux 5.15.83: 209.18 (SE +/- 0.37, N = 3; Min: 208.63 / Max: 209.88)
  Linux 6.1: 181.54 (SE +/- 2.09, N = 3; Min: 177.45 / Max: 184.33)
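MP/s here is megapixels of image data decoded per second, so the wall time for a single image is its pixel count divided by the rate. A sketch using a hypothetical 24 MP (6000 x 4000) image; the actual sample image used by this test profile may differ in size:

```python
def decode_seconds(width, height, mp_per_s):
    """Time to decode one image at a given megapixels-per-second rate."""
    megapixels = width * height / 1e6
    return megapixels / mp_per_s

# Hypothetical 24 MP image at the Linux 5.15.83 all-thread rate.
print(decode_seconds(6000, 4000, 209.18))  # roughly 0.11 s per image
```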

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Linux 5.10.130: 12.99 (SE +/- 0.02, N = 3; Min: 12.95 / Max: 13.02)
  Linux 5.15.83: 14.78 (SE +/- 0.04, N = 3; Min: 14.70 / Max: 14.85)
  Linux 6.1: 13.85 (SE +/- 0.03, N = 3; Min: 13.78 / Max: 13.89)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write (TPS, More Is Better)
  Linux 5.10.130: 38503 (SE +/- 59.40, N = 3; Min: 38415.65 / Max: 38616.5)
  Linux 5.15.83: 33824 (SE +/- 99.01, N = 3; Min: 33666.11 / Max: 34006.55)
  Linux 6.1: 36109 (SE +/- 91.02, N = 3; Min: 35997.71 / Max: 36289.45)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, More Is Better)
  Linux 5.10.130: 757 (SE +/- 9.97, N = 5; Min: 722 / Max: 782)
  Linux 5.15.83: 731 (SE +/- 1.76, N = 3; Min: 728 / Max: 734)
  Linux 6.1: 668 (SE +/- 5.43, N = 15; Min: 623 / Max: 703)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used, a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
  Linux 5.10.130: 1047699.23 (SE +/- 872.80, N = 3; Min: 1046057.61 / Max: 1049034.01)
  Linux 5.15.83: 1186278.25 (SE +/- 1515.54, N = 3; Min: 1183339.92 / Max: 1188391.8)
  Linux 6.1: 1180450.09 (SE +/- 1580.99, N = 3; Min: 1177772.37 / Max: 1183245.3)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
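The 1:5 set-to-get ratio means memtier issues one SET for every five GETs, so of the combined ops/sec roughly 1/6 are writes and 5/6 are reads. A sketch splitting the reported totals under that assumption (the tool reports per-command rates directly; this is just the arithmetic):

```python
def split_ops(total_ops, set_part, get_part):
    """Split a combined ops/sec figure by the memtier set:get ratio."""
    whole = set_part + get_part
    return total_ops * set_part / whole, total_ops * get_part / whole

# Linux 5.15.83, 50 clients, 1:5 ratio.
sets, gets = split_ops(1186278.25, 1, 5)
print(round(sets), round(gets))  # ~197713 SETs/sec, ~988565 GETs/sec
```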

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  Linux 5.10.130: 0.215 (SE +/- 0.001, N = 3; Min: 0.21 / Max: 0.22)
  Linux 5.15.83: 0.194 (SE +/- 0.000, N = 3; Min: 0.19 / Max: 0.19)
  Linux 6.1: 0.191 (SE +/- 0.001, N = 3; Min: 0.19 / Max: 0.19)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, More Is Better)
  Linux 5.10.130: 465422 (SE +/- 1779.96, N = 3; Min: 463548.55 / Max: 468980.41)
  Linux 5.15.83: 515273 (SE +/- 92.73, N = 3; Min: 515093.95 / Max: 515404.05)
  Linux 6.1: 523788 (SE +/- 1990.45, N = 3; Min: 521522.01 / Max: 527755.33)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Forking (Bogo Ops/s, More Is Better)
  Linux 5.10.130: 56719.51 (SE +/- 175.99, N = 3; Min: 56392.77 / Max: 56996.24)
  Linux 5.15.83: 60087.36 (SE +/- 184.20, N = 3; Min: 59728.25 / Max: 60338.12)
  Linux 6.1: 53632.67 (SE +/- 386.64, N = 3; Min: 53016.12 / Max: 54345.15)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, More Is Better)
  Linux 5.10.130: 976 (no run spread reported)
  Linux 5.15.83: 957 (SE +/- 1.53, N = 3; Min: 955 / Max: 960)
  Linux 6.1: 872 (SE +/- 1.67, N = 3; Min: 870 / Max: 875)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: IO_uring (Bogo Ops/s, More Is Better)
  Linux 5.10.130: 36489.15 (SE +/- 401.23, N = 3; Min: 35760.18 / Max: 37144.18)
  Linux 5.15.83: 32758.89 (SE +/- 501.84, N = 3; Min: 31868.84 / Max: 33605.66)
  Linux 6.1: 32918.80 (SE +/- 240.39, N = 3; Min: 32439.24 / Max: 33188.2)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used, a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
  Linux 5.10.130: 1021149.38 (SE +/- 515.57, N = 3; Min: 1020487.55 / Max: 1022165.08)
  Linux 5.15.83: 1117321.90 (SE +/- 3993.93, N = 3; Min: 1112326.86 / Max: 1125217.72)
  Linux 6.1: 1099078.40 (SE +/- 797.47, N = 3; Min: 1097510.37 / Max: 1100115.05)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better)
  Linux 5.10.130: 971725.25 (SE +/- 1890.11, N = 3; Min: 968821.98 / Max: 975273.5)
  Linux 5.15.83: 1060251.05 (SE +/- 1111.75, N = 3; Min: 1058779.63 / Max: 1062430.43)
  Linux 6.1: 1046604.15 (SE +/- 6338.63, N = 3; Min: 1038661.74 / Max: 1059132.42)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better)
  Linux 5.10.130: 989148.97 (SE +/- 8046.26, N = 3; Min: 973181.44 / Max: 998866.39)
  Linux 5.15.83: 1068662.91 (SE +/- 3987.52, N = 3; Min: 1063015.78 / Max: 1076363.3)
  Linux 6.1: 1074996.43 (SE +/- 741.75, N = 3; Min: 1073526.71 / Max: 1075905.96)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

EnCodec

EnCodec is an AI-based method of compressing audio files developed by Facebook/Meta, using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using its novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and performance is measured as the time to encode the EnCodec file from WAV. Learn more via the OpenBenchmarking.org test page.

EnCodec 0.1.1 - Target Bandwidth: 6 kbps (Seconds, Fewer Is Better)
  Linux 5.10.130: 29.50 (SE +/- 0.11, N = 3; Min: 29.31 / Max: 29.69)
  Linux 5.15.83: 29.53 (SE +/- 0.11, N = 3; Min: 29.37 / Max: 29.74)
  Linux 6.1: 32.02 (SE +/- 0.09, N = 3; Min: 31.89 / Max: 32.2)
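At a constant bitrate target like 6 kbps, the compressed output size depends only on the clip length, not on how fast the encode runs; these timings measure compute cost, not file size. A sketch of the size estimate, assuming a hypothetical 60-second clip (the actual length of the JFK speech input is not stated here):

```python
def compressed_bytes(kbps, seconds):
    """Estimated compressed size for a constant-bitrate target."""
    return kbps * 1000 / 8 * seconds

# Hypothetical 60 s clip at the 6 kbps target bandwidth.
print(int(compressed_bytes(6, 60)))  # 45000 bytes, i.e. about 44 KiB
```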

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Fill Sync (Op/s, More Is Better)
  Linux 5.10.130: 216464 (SE +/- 254.19, N = 3; Min: 215963 / Max: 216788)
  Linux 5.15.83: 234969 (SE +/- 684.45, N = 3; Min: 233626 / Max: 235871)
  Linux 6.1: 218454 (SE +/- 160.74, N = 3; Min: 218206 / Max: 218755)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used, a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better)
  Linux 5.10.130: 918617.37 (SE +/- 4444.49, N = 3; Min: 913740.82 / Max: 927491.86)
  Linux 5.15.83: 993337.90 (SE +/- 845.72, N = 3; Min: 991845.66 / Max: 994773.69)
  Linux 6.1: 979549.43 (SE +/- 1905.52, N = 3; Min: 975742.06 / Max: 981597.85)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

EnCodec

EnCodec is an AI-based method of compressing audio files developed by Facebook/Meta, using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using its novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and performance is measured as the time to encode the EnCodec file from WAV. Learn more via the OpenBenchmarking.org test page.

EnCodec 0.1.1 - Target Bandwidth: 1.5 kbps (Seconds, Fewer Is Better)
  Linux 5.10.130: 28.21 (SE +/- 0.29, N = 8; Min: 26.65 / Max: 28.87)
  Linux 5.15.83: 28.35 (SE +/- 0.33, N = 3; Min: 27.70 / Max: 28.75)
  Linux 6.1: 30.49 (SE +/- 0.30, N = 9; Min: 28.50 / Max: 31.01)

EnCodec 0.1.1 - Target Bandwidth: 24 kbps (Seconds, Fewer Is Better)
  Linux 5.10.130: 34.16 (SE +/- 0.02, N = 3; Min: 34.15 / Max: 34.19)
  Linux 5.15.83: 33.92 (SE +/- 0.20, N = 3; Min: 33.56 / Max: 34.27)
  Linux 6.1: 36.32 (SE +/- 0.13, N = 3; Min: 36.08 / Max: 36.51)

EnCodec 0.1.1 - Target Bandwidth: 3 kbps (Seconds, Fewer Is Better)
  Linux 5.10.130: 29.14 (SE +/- 0.21, N = 3; Min: 28.73 / Max: 29.42)
  Linux 5.15.83: 28.66 (SE +/- 0.34, N = 6; Min: 26.97 / Max: 29.09)
  Linux 6.1: 30.43 (SE +/- 0.30, N = 8; Min: 28.92 / Max: 31.27)

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Sequential Fill (Op/s, More Is Better)
  Linux 5.10.130: 1204255 (SE +/- 10501.31, N = 3; Min: 1183344 / Max: 1216405)
  Linux 5.15.83: 1193837 (SE +/- 3547.16, N = 3; Min: 1186959 / Max: 1198781)
  Linux 6.1: 1134991 (SE +/- 11595.14, N = 3; Min: 1118443 / Max: 1157335)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

oneDNN

oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Linux 5.10.130: 4.70699 (SE +/- 0.01724, N = 3; Min: 4.69 / Max: 4.74; MIN: 4.43)
  Linux 5.15.83: 4.70834 (SE +/- 0.01367, N = 3; Min: 4.68 / Max: 4.73; MIN: 4.4)
  Linux 6.1: 4.98067 (SE +/- 0.02580, N = 3; Min: 4.95 / Max: 5.03; MIN: 4.62)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
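With lower-is-better timings like these, the regression from one kernel to another is the relative increase in the reported time. A quick sketch using the oneDNN averages above:

```python
def pct_change(old, new):
    """Signed percent change from old to new (positive means slower here)."""
    return (new - old) / old * 100

# Linux 6.1 vs Linux 5.15.83 for Deconvolution Batch shapes_1d f32.
print(round(pct_change(4.70834, 4.98067), 2))  # about a 5.8% regression
```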

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Linux 5.10.130: 48.05 (SE +/- 0.06, N = 3; Min: 47.96 / Max: 48.16)
  Linux 5.15.83: 47.66 (SE +/- 0.14, N = 3; Min: 47.45 / Max: 47.92)
  Linux 6.1: 45.41 (SE +/- 0.11, N = 3; Min: 45.20 / Max: 45.59)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, More Is Better)
  Linux 5.10.130: 47676 (SE +/- 132.15, N = 3; Min: 47442.63 / Max: 47900.09)
  Linux 5.15.83: 45922 (SE +/- 31.67, N = 3; Min: 45865.74 / Max: 45975.32)
  Linux 6.1: 45223 (SE +/- 71.67, N = 3; Min: 45095.46 / Max: 45343.34)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Linux 5.10.130: 5.244 (SE +/- 0.015, N = 3; Min: 5.22 / Max: 5.27)
  Linux 5.15.83: 5.444 (SE +/- 0.004, N = 3; Min: 5.44 / Max: 5.45)
  Linux 6.1: 5.528 (SE +/- 0.009, N = 3; Min: 5.51 / Max: 5.54)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience," with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 192000 - Buffer Size: 1024 (Render Ratio, More Is Better)
  Linux 5.10.130: 1.889268 (SE +/- 0.016352, N = 3; Min: 1.86 / Max: 1.91)
  Linux 5.15.83: 1.989680 (SE +/- 0.011134, N = 3; Min: 1.97 / Max: 2.01)
  Linux 6.1: 1.928814 (SE +/- 0.012895, N = 3; Min: 1.91 / Max: 1.95)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
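Render Ratio is rendered-audio duration divided by wall-clock render time, so any ratio above 1.0 is faster than realtime. A sketch converting a ratio into the render time for a hypothetical one-minute project:

```python
def render_seconds(audio_seconds, render_ratio):
    """Wall-clock time to render a clip at a given render ratio."""
    return audio_seconds / render_ratio

# Hypothetical 60 s project at the Linux 5.15.83 ratio (192 kHz / 1024 buffer).
print(round(render_seconds(60, 1.98968), 2))  # just over 30 s
```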

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 90 (MP/s, More Is Better)
  Linux 5.10.130: 7.58 (SE +/- 0.01, N = 3; Min: 7.56 / Max: 7.60)
  Linux 5.15.83: 7.54 (SE +/- 0.02, N = 3; Min: 7.51 / Max: 7.56)
  Linux 6.1: 7.20 (SE +/- 0.02, N = 3; Min: 7.16 / Max: 7.22)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Update Random (Op/s, More Is Better)
  Linux 5.10.130: 625695 (SE +/- 3662.51, N = 3; Min: 620664 / Max: 632821)
  Linux 5.15.83: 616626 (SE +/- 1069.98, N = 3; Min: 615089 / Max: 618684)
  Linux 6.1: 594485 (SE +/- 2022.46, N = 3; Min: 592347 / Max: 598528)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s, More Is Better)
  Linux 5.10.130: 7.45 (SE +/- 0.01, N = 3; Min: 7.44 / Max: 7.46)
  Linux 5.15.83: 7.40 (SE +/- 0.00, N = 3; Min: 7.39 / Max: 7.40)
  Linux 6.1: 7.08 (SE +/- 0.01, N = 3; Min: 7.06 / Max: 7.10)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 90 (MP/s, More Is Better)
  Linux 5.10.130: 7.30 (SE +/- 0.01, N = 3; Min: 7.29 / Max: 7.31)
  Linux 5.15.83: 7.25 (SE +/- 0.01, N = 3; Min: 7.24 / Max: 7.26)
  Linux 6.1: 6.94 (SE +/- 0.01, N = 3; Min: 6.93 / Max: 6.95)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Linux 5.10.130: 48.34 (SE +/- 0.10, N = 3; Min: 48.15 / Max: 48.47)
  Linux 5.15.83: 48.33 (SE +/- 0.20, N = 3; Min: 47.93 / Max: 48.56)
  Linux 6.1: 45.97 (SE +/- 0.14, N = 3; Min: 45.71 / Max: 46.21)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Fill (Op/s, More Is Better)
  Linux 5.10.130: 983199 (SE +/- 10947.16, N = 3; Min: 962372 / Max: 999460)
  Linux 5.15.83: 982848 (SE +/- 11892.65, N = 3; Min: 970769 / Max: 1006632)
  Linux 6.1: 935285 (SE +/- 2558.50, N = 3; Min: 930199 / Max: 938313)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3 - Test: blosclz shuffle (MB/s, More Is Better)
  Linux 5.10.130: 15238.1 (SE +/- 62.87, N = 3; Min: 15113.9 / Max: 15317.2)
  Linux 5.15.83: 15998.6 (SE +/- 51.78, N = 3; Min: 15916.8 / Max: 16094.5)
  Linux 6.1: 15256.1 (SE +/- 27.75, N = 3; Min: 15200.9 / Max: 15288.9)
  1. (CC) gcc options: -std=gnu99 -O3 -lrt -pthread -lm

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test profile is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 - CPU Threads: 1 (MP/s, More Is Better)
  Linux 5.10.130: 38.17 (SE +/- 0.06, N = 3; Min: 38.08 / Max: 38.29)
  Linux 5.15.83: 37.90 (SE +/- 0.04, N = 3; Min: 37.84 / Max: 37.98)
  Linux 6.1: 36.37 (SE +/- 0.02, N = 3; Min: 36.33 / Max: 36.39)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Linux 5.10.130: 37.88 (SE +/- 0.08, N = 3; Min: 37.77 / Max: 38.03)
  Linux 5.15.83: 37.86 (SE +/- 0.19, N = 3; Min: 37.64 / Max: 38.25)
  Linux 6.1: 36.10 (SE +/- 0.08, N = 3; Min: 35.99 / Max: 36.25)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS, More Is Better)
  Linux 5.10.130: 53656 (SE +/- 146.53, N = 3; Min: 53450.21 / Max: 53939.82)
  Linux 5.15.83: 56230 (SE +/- 125.21, N = 3; Min: 56054.99 / Max: 56472.64)
  Linux 6.1: 54365 (SE +/- 90.59, N = 3; Min: 54259.68 / Max: 54545.69)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Linux 5.10.130: 1.864 (SE +/- 0.005, N = 3; Min: 1.85 / Max: 1.87)
  Linux 5.15.83: 1.779 (SE +/- 0.004, N = 3; Min: 1.77 / Max: 1.78)
  Linux 6.1: 1.839 (SE +/- 0.003, N = 3; Min: 1.83 / Max: 1.84)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s, More Is Better)
  Linux 5.10.130: 7.70 (SE +/- 0.01, N = 3; Min: 7.69 / Max: 7.71)
  Linux 5.15.83: 7.71 (SE +/- 0.01, N = 3; Min: 7.70 / Max: 7.72)
  Linux 6.1: 7.36 (SE +/- 0.01, N = 3; Min: 7.34 / Max: 7.37)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

spaCy

spaCy is a leading open-source Python library for advanced natural language processing (NLP). This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1 - Model: en_core_web_trf (tokens/sec; more is better)
  Linux 5.10.130: 2446 (SE +/- 9.87, N = 3; runs min 2430 / avg 2446 / max 2464)
  Linux 5.15.83:  2416 (SE +/- 6.33, N = 3; runs min 2410 / avg 2416.33 / max 2429)
  Linux 6.1:      2341 (SE +/- 7.51, N = 3; runs min 2328 / avg 2340.67 / max 2354)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 10, Lossless (Seconds; fewer is better)
  Linux 5.10.130: 6.637 (SE +/- 0.021, N = 3; runs min 6.60 / avg 6.64 / max 6.66)
  Linux 5.15.83:  6.666 (SE +/- 0.017, N = 3; runs min 6.64 / avg 6.67 / max 6.70)
  Linux 6.1:      6.909 (SE +/- 0.029, N = 3; runs min 6.86 / avg 6.91 / max 6.96)
  Compiled with: (CXX) g++ options: -O3 -fPIC -lm

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience," with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 44100 - Buffer Size: 512 (Render Ratio; more is better)
  Linux 5.10.130: 3.909552 (SE +/- 0.028488, N = 3; runs min 3.87 / avg 3.91 / max 3.97)
  Linux 5.15.83:  3.972591 (SE +/- 0.017996, N = 3; runs min 3.94 / avg 3.97 / max 4.00)
  Linux 6.1:      3.817268 (SE +/- 0.063340, N = 3; runs min 3.69 / avg 3.82 / max 3.91)
  Compiled with: (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
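The render ratio appears to express how many seconds of audio Stargate renders per second of wall-clock time (a ratio above 1.0 meaning faster than realtime); that interpretation, and the project length and render time below, are assumptions for illustration only:

```python
# Hypothetical: a 60-second project rendered offline in 15.35 s of wall time
project_seconds = 60.0
render_wall_seconds = 15.35

# Seconds of audio produced per second of wall time
render_ratio = project_seconds / render_wall_seconds

print(f"render ratio: {render_ratio:.2f}")
```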

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute; more is better)
  Linux 5.10.130: 335
  Linux 5.15.83:  331 (SE +/- 1.76, N = 3; runs min 328 / avg 331.33 / max 334)
  Linux 6.1:      322
  Compiled with: (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the geometric mean of the query processing rate across all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean; more is better)
  Linux 5.10.130: 212.60 (SE +/- 0.88, N = 15; runs min 207.09 / avg 212.60 / max 217.48; MIN: 22.96 / MAX: 20000)
  Linux 5.15.83:  209.58 (SE +/- 1.25, N = 14; runs min 200.08 / avg 209.58 / max 215.12; MIN: 22.65 / MAX: 20000)
  Linux 6.1:      204.43 (SE +/- 4.28, N = 3; runs min 196.03 / avg 204.43 / max 210.07; MIN: 22.33 / MAX: 20000)
  ClickHouse server version 22.5.4.19 (official build).
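The reported value is a geometric mean over per-query rates, which damps the influence of a few outlier queries compared with an arithmetic mean. A small sketch with made-up per-query rates (not from this run):

```python
from statistics import geometric_mean

# Hypothetical per-query processing rates (queries per minute)
rates = [180.0, 220.0, 240.0]

geo = geometric_mean(rates)       # nth root of the product of the rates
arith = sum(rates) / len(rates)   # arithmetic mean, for comparison

print(f"geometric={geo:.2f} arithmetic={arith:.2f}")
```

Note that the geometric mean sits below the arithmetic mean whenever the rates are unequal.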

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: blazeface (ms; fewer is better)
  Linux 5.10.130: 2.38 (SE +/- 0.01, N = 3; runs min 2.37 / avg 2.38 / max 2.39; MIN: 2.2 / MAX: 2.75)
  Linux 5.15.83:  2.29 (SE +/- 0.00, N = 6; runs min 2.29 / avg 2.29 / max 2.30; MIN: 2.23 / MAX: 3.1)
  Linux 6.1:      2.36 (SE +/- 0.01, N = 3; runs min 2.35 / avg 2.36 / max 2.37; MIN: 2.3 / MAX: 2.62)
  Compiled with: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3 - Test: blosclz bitshuffle (MB/s; more is better)
  Linux 5.10.130: 10158.9 (SE +/- 20.30, N = 3; runs min 10119.4 / avg 10158.90 / max 10186.8)
  Linux 5.15.83:  10555.7 (SE +/- 12.20, N = 3; runs min 10534 / avg 10555.73 / max 10576.2)
  Linux 6.1:      10161.3 (SE +/- 25.18, N = 3; runs min 10126.2 / avg 10161.27 / max 10210.1)
  Compiled with: (CC) gcc options: -std=gnu99 -O3 -lrt -pthread -lm

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Read While Writing (Op/s; more is better)
  Linux 5.10.130: 3100899 (SE +/- 49813.03, N = 3; runs min 3034605 / avg 3100899.33 / max 3198450)
  Linux 5.15.83:  3043279 (SE +/- 29968.88, N = 3; runs min 3000054 / avg 3043278.67 / max 3100851)
  Linux 6.1:      2985392 (SE +/- 14227.22, N = 3; runs min 2957201 / avg 2985392 / max 3002833)
  Compiled with: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the geometric mean of the query processing rate across all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean; more is better)
  Linux 5.10.130: 205.76 (SE +/- 1.55, N = 15; runs min 192.71 / avg 205.76 / max 213.52; MIN: 22.01 / MAX: 20000)
  Linux 5.15.83:  204.26 (SE +/- 1.58, N = 14; runs min 190.28 / avg 204.26 / max 212.27; MIN: 22.52 / MAX: 20000)
  Linux 6.1:      198.51 (SE +/- 0.88, N = 3; runs min 196.88 / avg 198.51 / max 199.88; MIN: 22.14 / MAX: 20000)
  ClickHouse server version 22.5.4.19 (official build).

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel, either in a default configuration (defconfig) for the architecture being tested or, alternatively, as an allmodconfig build that enables all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds; fewer is better)
  Linux 5.10.130: 70.17 (SE +/- 0.25, N = 3; runs min 69.88 / avg 70.17 / max 70.66)
  Linux 5.15.83:  70.49 (SE +/- 0.24, N = 3; runs min 70.25 / avg 70.49 / max 70.96)
  Linux 6.1:      72.56 (SE +/- 0.28, N = 3; runs min 72.28 / avg 72.56 / max 73.11)
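From the defconfig times above, the relative regression between kernels can be computed directly; for example, Linux 6.1 versus Linux 5.10.130:

```python
# defconfig build times (seconds) taken from the results above
t_5_10 = 70.17
t_6_1 = 72.56

# Percentage slowdown of Linux 6.1 relative to Linux 5.10.130
slowdown_pct = (t_6_1 - t_5_10) / t_5_10 * 100

print(f"Linux 6.1 is {slowdown_pct:.1f}% slower than 5.10.130")
```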

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
  Linux 5.10.130: 26.00 (SE +/- 0.10, N = 3; runs min 25.81 / avg 26.00 / max 26.15)
  Linux 5.15.83:  26.48 (SE +/- 0.29, N = 3; runs min 25.98 / avg 26.48 / max 26.98)
  Linux 6.1:      26.87 (SE +/- 0.23, N = 3; runs min 26.53 / avg 26.87 / max 27.29)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec; more is better)
  Linux 5.10.130: 38.44 (SE +/- 0.15, N = 3; runs min 38.22 / avg 38.44 / max 38.72)
  Linux 5.15.83:  37.76 (SE +/- 0.41, N = 3; runs min 37.05 / avg 37.76 / max 38.48)
  Linux 6.1:      37.21 (SE +/- 0.31, N = 3; runs min 36.62 / avg 37.21 / max 37.68)
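In the synchronous single-stream scenario the two DeepSparse metrics are near-reciprocals: items/sec is roughly 1000 divided by the per-batch latency in milliseconds (assuming one item per batch, which this scenario appears to use):

```python
# Linux 5.10.130 single-stream latency (ms/batch) from the result above
ms_per_batch = 26.00

# Assuming one item per batch, throughput is the reciprocal of latency
items_per_sec = 1000.0 / ms_per_batch

print(f"{items_per_sec:.2f} items/sec")
```

This lands close to the reported 38.44 items/sec; the small gap is measurement overhead between the two metrics.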

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience," with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 512 (Render Ratio; more is better)
  Linux 5.10.130: 2.537891 (SE +/- 0.035214, N = 3; runs min 2.49 / avg 2.54 / max 2.61)
  Linux 5.15.83:  2.579045 (SE +/- 0.013088, N = 3; runs min 2.55 / avg 2.58 / max 2.60)
  Linux 6.1:      2.497164 (SE +/- 0.031971, N = 3; runs min 2.44 / avg 2.50 / max 2.55)
  Compiled with: (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Semaphores (Bogo Ops/s; more is better)
  Linux 5.10.130: 3214994.83 (SE +/- 159.78, N = 3; runs min 3214727.06 / avg 3214994.83 / max 3215279.75)
  Linux 5.15.83:  3314461.38 (SE +/- 1860.75, N = 3; runs min 3311372.46 / avg 3314461.38 / max 3317803.36)
  Linux 6.1:      3315660.88 (SE +/- 1806.60, N = 3; runs min 3312970.55 / avg 3315660.88 / max 3319094.83)
  Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience," with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 192000 - Buffer Size: 512 (Render Ratio; more is better)
  Linux 5.10.130: 1.536255 (SE +/- 0.008212, N = 3; runs min 1.53 / avg 1.54 / max 1.55)
  Linux 5.15.83:  1.567049 (SE +/- 0.017482, N = 3; runs min 1.54 / avg 1.57 / max 1.60)
  Linux 6.1:      1.520040 (SE +/- 0.006196, N = 3; runs min 1.51 / avg 1.52 / max 1.53)
  Compiled with: (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms; fewer is better)
  Linux 5.10.130: 10.148 (SE +/- 0.099, N = 3; runs min 9.97 / avg 10.15 / max 10.31; MIN: 9.88 / MAX: 23.7)
  Linux 5.15.83:  9.850 (SE +/- 0.025, N = 3; runs min 9.80 / avg 9.85 / max 9.89; MIN: 9.74 / MAX: 31.53)
  Linux 6.1:      9.932 (SE +/- 0.037, N = 9; runs min 9.82 / avg 9.93 / max 10.17; MIN: 9.74 / MAX: 23.92)
  Compiled with: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel, either in a default configuration (defconfig) for the architecture being tested or, alternatively, as an allmodconfig build that enables all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds; fewer is better)
  Linux 5.10.130: 907.75 (SE +/- 0.08, N = 3; runs min 907.59 / avg 907.75 / max 907.83)
  Linux 5.15.83:  911.08 (SE +/- 1.08, N = 3; runs min 909.59 / avg 911.08 / max 913.18)
  Linux 6.1:      934.27 (SE +/- 0.78, N = 3; runs min 932.70 / avg 934.26 / max 935.05)

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience," with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 44100 - Buffer Size: 1024 (Render Ratio; more is better)
  Linux 5.10.130: 4.385893 (SE +/- 0.002245, N = 3; runs min 4.38 / avg 4.39 / max 4.39)
  Linux 5.15.83:  4.431098 (SE +/- 0.011431, N = 3; runs min 4.41 / avg 4.43 / max 4.45)
  Linux 6.1:      4.311382 (SE +/- 0.047576, N = 3; runs min 4.26 / avg 4.31 / max 4.41)
  Compiled with: (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: resnet50 (ms; fewer is better)
  Linux 5.10.130: 14.20 (SE +/- 0.05, N = 3; runs min 14.14 / avg 14.20 / max 14.30; MIN: 13.99 / MAX: 15.03)
  Linux 5.15.83:  14.21 (SE +/- 0.34, N = 6; runs min 13.59 / avg 14.21 / max 15.82; MIN: 13.45 / MAX: 157.72)
  Linux 6.1:      13.83 (SE +/- 0.13, N = 3; runs min 13.68 / avg 13.83 / max 14.09; MIN: 13.51 / MAX: 26.19)
  Compiled with: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms; fewer is better)
  Linux 5.10.130: 3.855 (SE +/- 0.007, N = 3; runs min 3.85 / avg 3.86 / max 3.87; MIN: 3.65 / MAX: 16.53)
  Linux 5.15.83:  3.752 (SE +/- 0.020, N = 3; runs min 3.73 / avg 3.75 / max 3.79; MIN: 3.56 / MAX: 17.6)
  Linux 6.1:      3.831 (SE +/- 0.020, N = 9; runs min 3.77 / avg 3.83 / max 3.91; MIN: 3.59 / MAX: 5.87)
  Compiled with: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms; fewer is better)
  Linux 5.10.130: 2.592 (SE +/- 0.020, N = 3; runs min 2.56 / avg 2.59 / max 2.63; MIN: 2.48 / MAX: 5.22)
  Linux 5.15.83:  2.537 (SE +/- 0.015, N = 3; runs min 2.52 / avg 2.54 / max 2.57; MIN: 2.47 / MAX: 4.14)
  Linux 6.1:      2.606 (SE +/- 0.012, N = 9; runs min 2.56 / avg 2.61 / max 2.66; MIN: 2.49 / MAX: 16.46)
  Compiled with: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: KNN CAD (Seconds; fewer is better)
  Linux 5.10.130: 185.33 (SE +/- 0.84, N = 3; runs min 183.85 / avg 185.33 / max 186.74)
  Linux 5.15.83:  182.66 (SE +/- 0.59, N = 3; runs min 181.47 / avg 182.66 / max 183.36)
  Linux 6.1:      180.52 (SE +/- 0.44, N = 3; runs min 179.67 / avg 180.52 / max 181.14)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute; more is better)
  Linux 5.10.130: 1333 (SE +/- 7.42, N = 3; runs min 1324 / avg 1333.33 / max 1348)
  Linux 5.15.83:  1321 (SE +/- 2.73, N = 3; runs min 1317 / avg 1320.67 / max 1326)
  Linux 6.1:      1299 (SE +/- 2.91, N = 3; runs min 1294 / avg 1299.33 / max 1304)
  Compiled with: (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec; more is better)
  Linux 5.10.130: 73.60 (SE +/- 0.65, N = 3; runs min 72.54 / avg 73.60 / max 74.79)
  Linux 5.15.83:  72.06 (SE +/- 0.33, N = 3; runs min 71.40 / avg 72.06 / max 72.49)
  Linux 6.1:      71.89 (SE +/- 0.51, N = 3; runs min 70.87 / avg 71.89 / max 72.47)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
  Linux 5.10.130: 13.58 (SE +/- 0.12, N = 3; runs min 13.36 / avg 13.58 / max 13.77)
  Linux 5.15.83:  13.87 (SE +/- 0.06, N = 3; runs min 13.78 / avg 13.87 / max 13.99)
  Linux 6.1:      13.90 (SE +/- 0.10, N = 3; runs min 13.79 / avg 13.90 / max 14.10)

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 25.0 - Time To Compile (Seconds; fewer is better)
  Linux 5.10.130: 91.56 (SE +/- 0.08, N = 3; runs min 91.48 / avg 91.56 / max 91.71)
  Linux 5.15.83:  91.75 (SE +/- 0.17, N = 3; runs min 91.42 / avg 91.75 / max 91.99)
  Linux 6.1:      93.74 (SE +/- 0.08, N = 3; runs min 93.58 / avg 93.74 / max 93.85)

oneDNN

oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
  Linux 5.10.130: 0.851455 (SE +/- 0.006451, N = 3; runs min 0.84 / avg 0.85 / max 0.86; MIN: 0.83)
  Linux 5.15.83:  0.846543 (SE +/- 0.010105, N = 3; runs min 0.83 / avg 0.85 / max 0.86; MIN: 0.82)
  Linux 6.1:      0.831866 (SE +/- 0.003867, N = 3; runs min 0.82 / avg 0.83 / max 0.84; MIN: 0.81)
  Compiled with: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better)
  Linux 5.10.130: 144.01 (SE +/- 0.30, N = 3; runs min 143.41 / avg 144.01 / max 144.35)
  Linux 5.15.83:  146.95 (SE +/- 0.74, N = 3; runs min 146.03 / avg 146.95 / max 148.41)
  Linux 6.1:      147.38 (SE +/- 0.43, N = 3; runs min 146.84 / avg 147.38 / max 148.23)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec; more is better)
  Linux 5.10.130: 55.54 (SE +/- 0.12, N = 3; runs min 55.41 / avg 55.54 / max 55.77)
  Linux 5.15.83:  54.43 (SE +/- 0.27, N = 3; runs min 53.89 / avg 54.43 / max 54.77)
  Linux 6.1:      54.27 (SE +/- 0.16, N = 3; runs min 53.96 / avg 54.27 / max 54.47)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms; fewer is better)
  Linux 5.10.130: 78016 (SE +/- 96.91, N = 3; runs min 77867 / avg 78016.33 / max 78198)
  Linux 5.15.83:  77997 (SE +/- 75.54, N = 3; runs min 77885 / avg 77997.33 / max 78141)
  Linux 6.1:      79804 (SE +/- 964.72, N = 6; runs min 77879 / avg 79803.83 / max 82977)
  Compiled with: (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms; fewer is better)
  Linux 5.10.130: 23.60 (SE +/- 0.07, N = 3; runs min 23.46 / avg 23.60 / max 23.68; MIN: 23.16 / MAX: 27.3)
  Linux 5.15.83:  23.89 (SE +/- 0.39, N = 6; runs min 23.13 / avg 23.89 / max 25.73; MIN: 22.91 / MAX: 31.78)
  Linux 6.1:      23.35 (SE +/- 0.09, N = 3; runs min 23.18 / avg 23.35 / max 23.47; MIN: 23.01 / MAX: 26.25)
  Compiled with: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Swirl (Iterations Per Minute; more is better)
  Linux 5.10.130: 502 (SE +/- 2.40, N = 3; runs min 499 / avg 502.33 / max 507)
  Linux 5.15.83:  499 (SE +/- 1.53, N = 3; runs min 497 / avg 499 / max 502)
  Linux 6.1:      491 (SE +/- 1.53, N = 3; runs min 489 / avg 491 / max 494)
  Compiled with: (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds; fewer is better)
  Linux 5.10.130: 97.25 (SE +/- 0.21, N = 3; runs min 96.94 / avg 97.25 / max 97.64)
  Linux 5.15.83:  97.57 (SE +/- 0.35, N = 3; runs min 96.87 / avg 97.57 / max 97.98)
  Linux 6.1:      99.42 (SE +/- 0.17, N = 3; runs min 99.12 / avg 99.42 / max 99.70)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms; fewer is better)
  Linux 5.10.130: 17.13 (SE +/- 0.12, N = 3; runs min 16.92 / avg 17.13 / max 17.34; MIN: 16.75 / MAX: 17.86)
  Linux 5.15.83:  16.85 (SE +/- 0.12, N = 6; runs min 16.55 / avg 16.85 / max 17.27; MIN: 16.41 / MAX: 19.23)
  Linux 6.1:      16.76 (SE +/- 0.14, N = 3; runs min 16.57 / avg 16.76 / max 17.02; MIN: 16.41 / MAX: 17.26)
  Compiled with: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s; more is better)
  Linux 5.10.130: 303.4 (SE +/- 1.79, N = 3; runs min 300.2 / avg 303.43 / max 306.4)
  Linux 5.15.83:  310.0 (SE +/- 0.22, N = 3; runs min 309.6 / avg 310.03 / max 310.3)
  Linux 6.1:      307.4 (SE +/- 0.35, N = 3; runs min 306.8 / avg 307.4 / max 308)
  Compiled with: (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT); development has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second; more is better)
  Linux 5.10.130: 415.14 (SE +/- 0.88, N = 3; runs min 413.40 / avg 415.14 / max 416.24)
  Linux 5.15.83:  418.02 (SE +/- 0.85, N = 3; runs min 416.73 / avg 418.02 / max 419.62)
  Linux 6.1:      409.13 (SE +/- 2.62, N = 3; runs min 403.96 / avg 409.13 / max 412.51)
  Compiled with: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience," with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 1024
Render Ratio, More Is Better
  Linux 5.10.130: 2.999611 (SE +/- 0.016716, N = 3, Min: 2.97, Max: 3.03)
  Linux 5.15.83: 3.063511 (SE +/- 0.004654, N = 3, Min: 3.05, Max: 3.07)
  Linux 6.1: 3.012585 (SE +/- 0.021684, N = 3, Min: 2.97, Max: 3.04)
(CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
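A render ratio is a real-time factor: a reasonable reading is rendered audio duration divided by wall-clock render time, so the ~3.0 ratios above would mean the project renders about three times faster than real time. The figures in this sketch are assumed for illustration, not taken from the Stargate test itself.

```python
# Hypothetical illustration of a real-time render ratio; both figures are assumed.
audio_seconds = 90.0    # length of the rendered project
render_seconds = 30.0   # wall-clock time taken to render it
ratio = audio_seconds / render_seconds
print(ratio)  # 3.0 -> renders three times faster than real time
```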

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream
ms/batch, Fewer Is Better
  Linux 5.10.130: 22.84 (SE +/- 0.14, N = 3, Min: 22.68, Max: 23.11)
  Linux 5.15.83: 23.31 (SE +/- 0.27, N = 7, Min: 22.7, Max: 24.77)
  Linux 6.1: 23.07 (SE +/- 0.11, N = 3, Min: 22.94, Max: 23.29)

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream
items/sec, More Is Better
  Linux 5.10.130: 43.76 (SE +/- 0.26, N = 3, Min: 43.24, Max: 44.07)
  Linux 5.15.83: 42.90 (SE +/- 0.47, N = 7, Min: 40.35, Max: 44.03)
  Linux 6.1: 43.32 (SE +/- 0.21, N = 3, Min: 42.9, Max: 43.56)
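For a synchronous single-stream scenario the two DeepSparse metrics above are near-reciprocals of each other: with only one batch in flight at a time, throughput is approximately 1000 divided by the per-batch latency in milliseconds. The small gap versus the reported figure comes from rounding and per-run variance.

```python
# Latency vs. throughput for a synchronous single-stream run: one batch in
# flight at a time, so items/sec ~= 1000 / (ms per batch).
ms_per_batch = 22.84               # Linux 5.10.130 latency above
items_per_sec = 1000.0 / ms_per_batch
print(round(items_per_sec, 2))     # close to the reported 43.76 items/sec
```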

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
ms, Fewer Is Better
  Linux 5.10.130: 0.50 (SE +/- 0.00, N = 3, Min: 0.5, Max: 0.51; MIN: 0.31 / MAX: 16.57)
  Linux 5.15.83: 0.50 (SE +/- 0.00, N = 3, Min: 0.5, Max: 0.5; MIN: 0.31 / MAX: 28.92)
  Linux 6.1: 0.51 (SE +/- 0.00, N = 3, Min: 0.51, Max: 0.51; MIN: 0.32 / MAX: 15.06)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU
ms, Fewer Is Better
  Linux 5.10.130: 29.37 (SE +/- 0.12, N = 3, Min: 29.14, Max: 29.52; MIN: 11.56 / MAX: 50.23)
  Linux 5.15.83: 29.29 (SE +/- 0.01, N = 3, Min: 29.28, Max: 29.31; MIN: 13.39 / MAX: 49.34)
  Linux 6.1: 29.86 (SE +/- 0.14, N = 3, Min: 29.68, Max: 30.13; MIN: 11.2 / MAX: 55.83)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU
FPS, More Is Better
  Linux 5.10.130: 272.09 (SE +/- 1.08, N = 3, Min: 270.66, Max: 274.22)
  Linux 5.15.83: 272.80 (SE +/- 0.06, N = 3, Min: 272.69, Max: 272.9)
  Linux 6.1: 267.62 (SE +/- 1.20, N = 3, Min: 265.26, Max: 269.21)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, offering better image quality and compression than legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100
MP/s, More Is Better
  Linux 5.10.130: 0.53 (SE +/- 0.00, N = 3, Min: 0.53, Max: 0.53)
  Linux 5.15.83: 0.53 (SE +/- 0.00, N = 3, Min: 0.52, Max: 0.53)
  Linux 6.1: 0.52 (SE +/- 0.00, N = 3, Min: 0.52, Max: 0.52)
(CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

Stargate Digital Audio Workstation


Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 512
Render Ratio, More Is Better
  Linux 5.10.130: 3.726933 (SE +/- 0.019070, N = 3, Min: 3.69, Max: 3.76)
  Linux 5.15.83: 3.790321 (SE +/- 0.052597, N = 3, Min: 3.69, Max: 3.85)
  Linux 6.1: 3.718812 (SE +/- 0.044488, N = 3, Min: 3.64, Max: 3.8)
(CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

JPEG XL libjxl


JPEG XL libjxl 0.7 - Input: PNG - Quality: 100
MP/s, More Is Better
  Linux 5.10.130: 0.54 (SE +/- 0.00, N = 3, Min: 0.54, Max: 0.54)
  Linux 5.15.83: 0.54 (SE +/- 0.00, N = 3, Min: 0.54, Max: 0.54)
  Linux 6.1: 0.53 (SE +/- 0.00, N = 3, Min: 0.53, Max: 0.53)
(CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy
Seconds, Fewer Is Better
  Linux 5.10.130: 15.80 (SE +/- 0.27, N = 3, Min: 15.45, Max: 16.33)
  Linux 5.15.83: 15.81 (SE +/- 0.09, N = 3, Min: 15.67, Max: 15.97)
  Linux 6.1: 15.53 (SE +/- 0.19, N = 3, Min: 15.27, Max: 15.91)

SVT-AV1


SVT-AV1 1.4 - Encoder Mode: Preset 13 - Input: Bosphorus 4K
Frames Per Second, More Is Better
  Linux 5.10.130: 120.79 (SE +/- 0.69, N = 3, Min: 119.54, Max: 121.9)
  Linux 5.15.83: 122.98 (SE +/- 0.32, N = 3, Min: 122.61, Max: 123.61)
  Linux 6.1: 121.60 (SE +/- 0.62, N = 3, Min: 120.69, Max: 122.79)
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mobilenet
ms, Fewer Is Better
  Linux 5.10.130: 14.54 (SE +/- 0.03, N = 3, Min: 14.51, Max: 14.59; MIN: 14.37 / MAX: 17.06)
  Linux 5.15.83: 14.71 (SE +/- 0.18, N = 6, Min: 14.38, Max: 15.52; MIN: 14.26 / MAX: 15.75)
  Linux 6.1: 14.45 (SE +/- 0.01, N = 3, Min: 14.43, Max: 14.47; MIN: 14.31 / MAX: 15.47)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

libavif avifenc

This test of the AOMedia libavif library measures the time to encode a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6, Lossless
Seconds, Fewer Is Better
  Linux 5.10.130: 11.38 (SE +/- 0.04, N = 3, Min: 11.3, Max: 11.45)
  Linux 5.15.83: 11.43 (SE +/- 0.01, N = 3, Min: 11.41, Max: 11.44)
  Linux 6.1: 11.58 (SE +/- 0.02, N = 3, Min: 11.54, Max: 11.62)
(CXX) g++ options: -O3 -fPIC -lm

OpenVINO


OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
FPS, More Is Better
  Linux 5.10.130: 31167.81 (SE +/- 85.59, N = 3, Min: 30999.89, Max: 31280.51)
  Linux 5.15.83: 31007.38 (SE +/- 38.88, N = 3, Min: 30957.21, Max: 31083.92)
  Linux 6.1: 30632.46 (SE +/- 16.23, N = 3, Min: 30600.27, Max: 30652.25)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Memory Copying
Bogo Ops/s, More Is Better
  Linux 5.10.130: 4201.02 (SE +/- 31.35, N = 3, Min: 4153.53, Max: 4260.22)
  Linux 5.15.83: 4129.34 (SE +/- 42.48, N = 3, Min: 4067.88, Max: 4210.87)
  Linux 6.1: 4181.80 (SE +/- 29.72, N = 3, Min: 4126.97, Max: 4229.08)
(CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stargate Digital Audio Workstation


Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 1024
Render Ratio, More Is Better
  Linux 5.10.130: 4.199568 (SE +/- 0.030220, N = 3, Min: 4.14, Max: 4.25)
  Linux 5.15.83: 4.270897 (SE +/- 0.015411, N = 3, Min: 4.24, Max: 4.29)
  Linux 6.1: 4.208880 (SE +/- 0.017869, N = 3, Min: 4.18, Max: 4.24)
(CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6 - Build Configuration: Default
Seconds, Fewer Is Better
  Linux 5.10.130: 19.45
  Linux 5.15.83: 19.49
  Linux 6.1: 19.78

libavif avifenc


libavif avifenc 0.11 - Encoder Speed: 6
Seconds, Fewer Is Better
  Linux 5.10.130: 6.689 (SE +/- 0.059, N = 3, Min: 6.61, Max: 6.8)
  Linux 5.15.83: 6.749 (SE +/- 0.054, N = 3, Min: 6.66, Max: 6.84)
  Linux 6.1: 6.798 (SE +/- 0.036, N = 3, Min: 6.73, Max: 6.85)
(CXX) g++ options: -O3 -fPIC -lm

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: nasnet
ms, Fewer Is Better
  Linux 5.10.130: 14.91 (SE +/- 0.12, N = 3, Min: 14.75, Max: 15.15; MIN: 14.46 / MAX: 28.47)
  Linux 5.15.83: 14.88 (SE +/- 0.10, N = 3, Min: 14.78, Max: 15.08; MIN: 14.51 / MAX: 33.86)
  Linux 6.1: 15.12 (SE +/- 0.15, N = 9, Min: 14.82, Max: 16.07; MIN: 14.55 / MAX: 29.56)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Compression Rating
MIPS, More Is Better
  Linux 5.10.130: 106357 (SE +/- 95.64, N = 3, Min: 106196, Max: 106527)
  Linux 5.15.83: 108058 (SE +/- 364.72, N = 3, Min: 107640, Max: 108785)
  Linux 6.1: 106968 (SE +/- 526.57, N = 3, Min: 106061, Max: 107885)
(CXX) g++ options: -lpthread -ldl -O2 -fPIC
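A quick way to turn two of the MIPS figures above into a relative delta, the kind of percentage the win/loss views of a result file summarize:

```python
# Percent change of the Linux 5.15.83 compression rating vs. the 5.10.130 baseline.
baseline = 106357   # Linux 5.10.130, MIPS
result = 108058     # Linux 5.15.83, MIPS
delta_pct = (result - baseline) / baseline * 100
print(f"{delta_pct:+.2f}%")  # prints "+1.60%"
```

A ~1.6% spread with overlapping min/max ranges is close to run-to-run noise, which is what the "Do Not Show Results With Little Change/Spread" view filters out.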

NCNN


NCNN 20220729 - Target: CPU - Model: FastestDet
ms, Fewer Is Better
  Linux 5.10.130: 7.02 (SE +/- 0.03, N = 3, Min: 6.96, Max: 7.06; MIN: 6.87 / MAX: 7.89)
  Linux 5.15.83: 6.91 (SE +/- 0.02, N = 6, Min: 6.86, Max: 7; MIN: 6.78 / MAX: 7.67)
  Linux 6.1: 6.96 (SE +/- 0.02, N = 3, Min: 6.94, Max: 7; MIN: 6.86 / MAX: 7.19)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Mobile Neural Network


Mobile Neural Network 2.1 - Model: SqueezeNetV1.0
ms, Fewer Is Better
  Linux 5.10.130: 4.892 (SE +/- 0.043, N = 3, Min: 4.85, Max: 4.98; MIN: 4.79 / MAX: 17.74)
  Linux 5.15.83: 4.897 (SE +/- 0.009, N = 3, Min: 4.89, Max: 4.92; MIN: 4.84 / MAX: 18.25)
  Linux 6.1: 4.969 (SE +/- 0.019, N = 9, Min: 4.92, Max: 5.06; MIN: 4.87 / MAX: 7.19)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: OFDM_Test
Samples / Second, More Is Better
  Linux 5.10.130: 122000000 (SE +/- 251661.15, N = 3, Min: 121700000, Max: 122500000)
  Linux 5.15.83: 122633333 (SE +/- 352766.84, N = 3, Min: 122100000, Max: 123300000)
  Linux 6.1: 120733333 (SE +/- 417665.47, N = 3, Min: 119900000, Max: 121200000)
(CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm

Mobile Neural Network


Mobile Neural Network 2.1 - Model: inception-v3
ms, Fewer Is Better
  Linux 5.10.130: 22.73 (SE +/- 0.36, N = 3, Min: 22.24, Max: 23.43; MIN: 22.05 / MAX: 36.04)
  Linux 5.15.83: 22.44 (SE +/- 0.38, N = 3, Min: 21.8, Max: 23.11; MIN: 21.58 / MAX: 36.88)
  Linux 6.1: 22.79 (SE +/- 0.32, N = 9, Min: 21.82, Max: 24.6; MIN: 21.59 / MAX: 102.89)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer
ms, Fewer Is Better
  Linux 5.10.130: 124428 (SE +/- 68.75, N = 3, Min: 124322, Max: 124557)
  Linux 5.15.83: 124556 (SE +/- 104.41, N = 3, Min: 124407, Max: 124757)
  Linux 6.1: 126341 (SE +/- 1877.65, N = 3, Min: 124343, Max: 130094)
(CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: GoogLeNet
images/sec, More Is Better
  Linux 5.10.130: 114.41 (SE +/- 0.12, N = 3, Min: 114.17, Max: 114.57)
  Linux 5.15.83: 115.15 (SE +/- 0.10, N = 3, Min: 115.01, Max: 115.35)
  Linux 6.1: 116.16 (SE +/- 0.17, N = 3, Min: 115.83, Max: 116.34)

OSPRay Studio


OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer
ms, Fewer Is Better
  Linux 5.10.130: 66823 (SE +/- 35.09, N = 3, Min: 66774, Max: 66891)
  Linux 5.15.83: 66817 (SE +/- 44.92, N = 3, Min: 66752, Max: 66903)
  Linux 6.1: 67837 (SE +/- 539.86, N = 14, Min: 66735, Max: 73550)
(CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better
  Linux 5.10.130: 10.54 (SE +/- 0.05, N = 3, Min: 10.44, Max: 10.6)
  Linux 5.15.83: 10.43 (SE +/- 0.04, N = 3, Min: 10.35, Max: 10.49)
  Linux 6.1: 10.38 (SE +/- 0.04, N = 3, Min: 10.31, Max: 10.44)

Mobile Neural Network


Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0
ms, Fewer Is Better
  Linux 5.10.130: 2.678 (SE +/- 0.044, N = 3, Min: 2.59, Max: 2.73; MIN: 2.4 / MAX: 5.22)
  Linux 5.15.83: 2.639 (SE +/- 0.047, N = 3, Min: 2.57, Max: 2.73; MIN: 2.38 / MAX: 3.73)
  Linux 6.1: 2.640 (SE +/- 0.028, N = 9, Min: 2.57, Max: 2.8; MIN: 2.36 / MAX: 16.9)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Myriad-Groestl
kH/s, More Is Better
  Linux 5.10.130: 17363 (SE +/- 286.72, N = 3, Min: 16790, Max: 17660)
  Linux 5.15.83: 17610 (SE +/- 185.02, N = 3, Min: 17420, Max: 17980)
  Linux 6.1: 17477 (SE +/- 135.32, N = 3, Min: 17210, Max: 17650)
(CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8 - Time To Compile
Seconds, Fewer Is Better
  Linux 5.10.130: 453.61 (SE +/- 0.23, N = 3, Min: 453.16, Max: 453.91)
  Linux 5.15.83: 453.31 (SE +/- 0.27, N = 3, Min: 452.88, Max: 453.8)
  Linux 6.1: 459.75 (SE +/- 0.24, N = 3, Min: 459.47, Max: 460.24)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better
  Linux 5.10.130: 752.83 (SE +/- 2.25, N = 3, Min: 749.89, Max: 757.25)
  Linux 5.15.83: 760.84 (SE +/- 1.90, N = 3, Min: 757.06, Max: 763.15)
  Linux 6.1: 763.31 (SE +/- 2.74, N = 3, Min: 758.18, Max: 767.55)

OpenVINO


OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU
FPS, More Is Better
  Linux 5.10.130: 64.82 (SE +/- 0.70, N = 3, Min: 63.43, Max: 65.67)
  Linux 5.15.83: 64.28 (SE +/- 0.24, N = 3, Min: 63.89, Max: 64.73)
  Linux 6.1: 63.96 (SE +/- 0.40, N = 3, Min: 63.16, Max: 64.41)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU
ms, Fewer Is Better
  Linux 5.10.130: 123.34 (SE +/- 1.34, N = 3, Min: 121.68, Max: 125.99; MIN: 64 / MAX: 136.84)
  Linux 5.15.83: 124.34 (SE +/- 0.46, N = 3, Min: 123.49, Max: 125.09; MIN: 64.41 / MAX: 139.86)
  Linux 6.1: 124.97 (SE +/- 0.78, N = 3, Min: 124.08, Max: 126.52; MIN: 64.13 / MAX: 142.04)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC audio format ten times using the --best preset settings. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.4 - WAV To FLAC
Seconds, Fewer Is Better
  Linux 5.10.130: 21.11 (SE +/- 0.03, N = 5, Min: 21.06, Max: 21.2)
  Linux 5.15.83: 21.10 (SE +/- 0.02, N = 5, Min: 21.01, Max: 21.17)
  Linux 6.1: 21.38 (SE +/- 0.04, N = 5, Min: 21.23, Max: 21.43)
(CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time use. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint (Seconds, fewer is better)
Linux 5.10.130: 41.39 (SE +/- 0.18, N = 3; min 41.04 / max 41.59)
Linux 5.15.83: 41.43 (SE +/- 0.21, N = 3; min 41.01 / max 41.69)
Linux 6.1: 41.94 (SE +/- 0.20, N = 3; min 41.59 / max 42.27)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, more is better)
Linux 5.10.130: 423.65 (SE +/- 0.82, N = 3; min 422.07 / max 424.81)
Linux 5.15.83: 425.26 (SE +/- 1.29, N = 3; min 423.03 / max 427.51)
Linux 6.1: 419.75 (SE +/- 1.64, N = 3; min 416.63 / max 422.21)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Enhanced (Iterations Per Minute, more is better)
Linux 5.10.130: 315 (SE +/- 0.67, N = 3; min 314 / avg 314.67 / max 316)
Linux 5.15.83: 314 (SE +/- 0.67, N = 3; min 313 / avg 313.67 / max 315)
Linux 6.1: 311 (SE +/- 0.67, N = 3; min 310 / avg 310.67 / max 312)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better)
Linux 5.10.130: 969.33 (SE +/- 5.71, N = 3; min 963.18 / max 980.74)
Linux 5.15.83: 957.08 (SE +/- 1.74, N = 3; min 954.59 / max 960.44)
Linux 6.1: 957.41 (SE +/- 0.69, N = 3; min 956.43 / max 958.75)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
Linux 5.10.130: 7.5265 (SE +/- 0.0949, N = 3; min 7.36 / max 7.69)
Linux 5.15.83: 7.4408 (SE +/- 0.0304, N = 3; min 7.39 / max 7.49)
Linux 6.1: 7.5359 (SE +/- 0.1212, N = 3; min 7.29 / max 7.66)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better)
Linux 5.10.130: 121467 (SE +/- 26.43, N = 3; min 121434 / avg 121466.67 / max 121519)
Linux 5.15.83: 121486 (SE +/- 135.52, N = 3; min 121239 / avg 121486.33 / max 121706)
Linux 6.1: 123000 (SE +/- 1530.19, N = 5; min 121363 / avg 123000.4 / max 129119)
1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
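For fewer-is-better results such as this render time, the relative gap between kernels follows directly from the averages; a quick sketch using the values reported above:

```python
def percent_slower(baseline_ms, candidate_ms):
    """How much slower (in percent) the candidate is versus the baseline."""
    return (candidate_ms - baseline_ms) / baseline_ms * 100.0

# Reported averages: Linux 5.10.130 at 121467 ms, Linux 6.1 at 123000 ms.
print(round(percent_slower(121467, 123000), 2))  # 1.26 (% slower on Linux 6.1)
```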

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, fewer is better)
Linux 5.10.130: 65.33 (SE +/- 0.32, N = 3; min 64.96 / max 65.97)
Linux 5.15.83: 65.14 (SE +/- 0.24, N = 3; min 64.8 / max 65.61)
Linux 6.1: 65.96 (SE +/- 0.39, N = 3; min 65.42 / max 66.72)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
Linux 5.10.130: 5.66 (SE +/- 0.02, N = 3; min 5.63 / max 5.68; MIN: 5.54 / MAX: 17.07)
Linux 5.15.83: 5.59 (SE +/- 0.01, N = 6; min 5.55 / max 5.6; MIN: 5.48 / MAX: 6.6)
Linux 6.1: 5.63 (SE +/- 0.01, N = 3; min 5.61 / max 5.64; MIN: 5.53 / MAX: 6.55)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better)
Linux 5.10.130: 485.36 (SE +/- 4.00, N = 13; min 456.91 / max 503.34)
Linux 5.15.83: 485.19 (SE +/- 4.62, N = 3; min 476.11 / max 491.18)
Linux 6.1: 491.24 (SE +/- 4.29, N = 11; min 460.6 / max 508.95)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, fewer is better)
Linux 5.10.130: 4.91 (SE +/- 0.03, N = 3; min 4.86 / max 4.95; MIN: 4.74 / MAX: 16.51)
Linux 5.15.83: 4.85 (SE +/- 0.00, N = 6; min 4.84 / max 4.86; MIN: 4.72 / MAX: 7.62)
Linux 6.1: 4.89 (SE +/- 0.01, N = 3; min 4.88 / max 4.9; MIN: 4.81 / MAX: 5.21)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better)
Linux 5.10.130: 16.47 (SE +/- 0.14, N = 13; min 15.86 / max 17.48; MIN: 7.86 / MAX: 36.14)
Linux 5.15.83: 16.46 (SE +/- 0.16, N = 3; min 16.26 / max 16.77; MIN: 9.28 / MAX: 33.36)
Linux 6.1: 16.27 (SE +/- 0.15, N = 11; min 15.69 / max 17.34; MIN: 7.67 / MAX: 36.05)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better)
Linux 5.10.130: 132.65 (SE +/- 1.67, N = 3; min 129.77 / max 135.55)
Linux 5.15.83: 134.13 (SE +/- 0.54, N = 3; min 133.18 / max 135.05)
Linux 6.1: 132.51 (SE +/- 2.16, N = 3; min 130.25 / max 136.83)

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing rate as the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, more is better)
Linux 5.10.130: 212.42 (SE +/- 1.01, N = 15; min 206.34 / max 217.01; MIN: 22.81 / MAX: 20000)
Linux 5.15.83: 215.02 (SE +/- 0.78, N = 14; min 207.03 / max 217.95; MIN: 22.68 / MAX: 20000)
Linux 6.1: 212.92 (SE +/- 1.08, N = 3; min 210.79 / max 214.34; MIN: 22.26 / MAX: 20000)
1. ClickHouse server version 22.5.4.19 (official build).
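The "Geo Mean" aggregation used for the ClickHouse number above is the geometric mean of the per-query rates. A minimal sketch, using hypothetical per-query values rather than the actual benchmark data:

```python
import math

def geometric_mean(values):
    """Geometric mean computed via the average of logarithms (numerically stable)."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-query rates: (100 * 200 * 400) ** (1/3) == 200.
print(geometric_mean([100.0, 200.0, 400.0]))
```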

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better)
Linux 5.10.130: 8.24 (SE +/- 0.05, N = 3; min 8.14 / max 8.29; MIN: 4.42 / MAX: 23.84)
Linux 5.15.83: 8.34 (SE +/- 0.01, N = 3; min 8.31 / max 8.36; MIN: 4.49 / MAX: 23.48)
Linux 6.1: 8.33 (SE +/- 0.01, N = 3; min 8.32 / max 8.34; MIN: 4.48 / MAX: 25.96)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, more is better)
Linux 5.10.130: 121.72 (SE +/- 0.14, N = 3; min 121.51 / max 121.98)
Linux 5.15.83: 122.18 (SE +/- 0.12, N = 3; min 121.94 / max 122.31)
Linux 6.1: 123.19 (SE +/- 0.12, N = 3; min 123.06 / max 123.42)

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (GFInst/s, more is better)
Linux 5.10.130: 530.27 (SE +/- 0.13, N = 3; min 530.12 / max 530.52)
Linux 5.15.83: 529.24 (SE +/- 0.39, N = 3; min 528.82 / max 530.02)
Linux 6.1: 535.57 (SE +/- 0.30, N = 3; min 534.96 / max 535.88)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, more is better)
Linux 5.10.130: 21.21 (SE +/- 0.01, N = 3; min 21.21 / max 21.22)
Linux 5.15.83: 21.17 (SE +/- 0.02, N = 3; min 21.15 / max 21.2)
Linux 6.1: 21.42 (SE +/- 0.01, N = 3; min 21.4 / max 21.44)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a variety of cryptocurrencies. The benchmark reports the CPU's hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s, more is better)
Linux 5.10.130: 2833.09 (SE +/- 27.91, N = 3; min 2802.19 / max 2888.79)
Linux 5.15.83: 2812.34 (SE +/- 0.80, N = 3; min 2810.73 / max 2813.19)
Linux 6.1: 2800.62 (SE +/- 1.62, N = 3; min 2797.4 / max 2802.43)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: Blake-2 S (kH/s, more is better)
Linux 5.10.130: 1149103 (SE +/- 6874.21, N = 3; min 1136420 / avg 1149103.33 / max 1160040)
Linux 5.15.83: 1150583 (SE +/- 10686.26, N = 3; min 1135450 / avg 1150583.33 / max 1171220)
Linux 6.1: 1137550 (SE +/- 3289.87, N = 3; min 1131220 / avg 1137550 / max 1142270)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (Billion Interactions/s, more is better)
Linux 5.10.130: 21.39 (SE +/- 0.01, N = 3; min 21.38 / max 21.4)
Linux 5.15.83: 21.32 (SE +/- 0.00, N = 3; min 21.32 / max 21.32)
Linux 6.1: 21.55 (SE +/- 0.01, N = 3; min 21.54 / max 21.56)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (GFInst/s, more is better)
Linux 5.10.130: 534.74 (SE +/- 0.18, N = 3; min 534.41 / max 535.02)
Linux 5.15.83: 532.92 (SE +/- 0.02, N = 3; min 532.88 / max 532.95)
Linux 6.1: 538.79 (SE +/- 0.18, N = 3; min 538.45 / max 539.04)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, more is better)
Linux 5.10.130: 37.68 (SE +/- 0.03, N = 3; min 37.63 / max 37.74)
Linux 5.15.83: 38.09 (SE +/- 0.07, N = 3; min 37.99 / max 38.22)
Linux 6.1: 38.01 (SE +/- 0.02, N = 3; min 37.96 / max 38.03)

oneDNN

oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
Linux 5.10.130: 4.15869 (SE +/- 0.01257, N = 3; min 4.14 / max 4.18; MIN: 4.09)
Linux 5.15.83: 4.20343 (SE +/- 0.00929, N = 3; min 4.19 / max 4.22; MIN: 4.13)
Linux 6.1: 4.18096 (SE +/- 0.00677, N = 3; min 4.17 / max 4.19; MIN: 4.13)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
Linux 5.10.130: 10.52 (SE +/- 0.02, N = 3; min 10.49 / max 10.55)
Linux 5.15.83: 10.45 (SE +/- 0.06, N = 3; min 10.39 / max 10.57)
Linux 6.1: 10.41 (SE +/- 0.07, N = 3; min 10.29 / max 10.53)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, more is better)
Linux 5.10.130: 279.8 (SE +/- 0.15, N = 3; min 279.5 / avg 279.77 / max 280)
Linux 5.15.83: 282.7 (SE +/- 0.17, N = 3; min 282.4 / avg 282.7 / max 283)
Linux 6.1: 280.2 (SE +/- 0.27, N = 3; min 279.7 / avg 280.23 / max 280.6)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better)
Linux 5.10.130: 11.70 (SE +/- 0.02, N = 3; min 11.67 / max 11.73)
Linux 5.15.83: 11.77 (SE +/- 0.02, N = 3; min 11.74 / max 11.81)
Linux 6.1: 11.65 (SE +/- 0.03, N = 3; min 11.61 / max 11.71)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, more is better)
Linux 5.10.130: 117.3 (SE +/- 0.27, N = 3; min 116.9 / avg 117.27 / max 117.8)
Linux 5.15.83: 118.3 (SE +/- 0.07, N = 3; min 118.2 / avg 118.27 / max 118.4)
Linux 6.1: 117.1 (SE +/- 0.03, N = 3; min 117 / avg 117.07 / max 117.1)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Sharpen (Iterations Per Minute, more is better)
Linux 5.10.130: 199 (SE +/- 1.33, N = 3; min 198 / avg 199.33 / max 202)
Linux 5.15.83: 199 (SE +/- 1.20, N = 3; min 197 / avg 198.67 / max 201)
Linux 6.1: 197 (SE +/- 1.33, N = 3; min 196 / avg 197.33 / max 200)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, fewer is better)
Linux 5.10.130: 188.28
Linux 5.15.83: 186.40
Linux 6.1: 186.93
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
Linux 5.10.130: 7.06 (SE +/- 0.02, N = 3; min 7.03 / max 7.08; MIN: 6.92 / MAX: 7.42)
Linux 5.15.83: 6.99 (SE +/- 0.01, N = 6; min 6.97 / max 7.02; MIN: 6.89 / MAX: 8.15)
Linux 6.1: 7.04 (SE +/- 0.01, N = 3; min 7.03 / max 7.05; MIN: 6.92 / MAX: 7.34)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better)
Linux 5.10.130: 65271 (SE +/- 37.27, N = 3; min 65205 / max 65334)
Linux 5.15.83: 65242 (SE +/- 20.81, N = 3; min 65207 / max 65279)
Linux 6.1: 65894 (SE +/- 527.74, N = 3; min 65339 / max 66949)
1. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, more is better)
Linux 5.10.130: 131.0 (SE +/- 0.09, N = 3; min 130.8 / avg 130.97 / max 131.1)
Linux 5.15.83: 131.6 (SE +/- 0.06, N = 3; min 131.5 / avg 131.6 / max 131.7)
Linux 6.1: 130.3 (SE +/- 0.15, N = 3; min 130.1 / avg 130.33 / max 130.6)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better)
Linux 5.10.130: 1.04 (SE +/- 0.01, N = 3; min 1.02 / max 1.05; MIN: 0.61 / MAX: 17.55)
Linux 5.15.83: 1.03 (SE +/- 0.01, N = 3; min 1.02 / max 1.04; MIN: 0.61 / MAX: 17.94)
Linux 6.1: 1.04 (SE +/- 0.01, N = 3; min 1.03 / max 1.05; MIN: 0.61 / MAX: 17.76)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 2 (Seconds, fewer is better)
Linux 5.10.130: 70.52 (SE +/- 0.04, N = 3; min 70.48 / max 70.59)
Linux 5.15.83: 71.07 (SE +/- 0.41, N = 3; min 70.28 / max 71.68)
Linux 6.1: 71.20 (SE +/- 0.25, N = 3; min 70.7 / max 71.48)
1. (CXX) g++ options: -O3 -fPIC -lm

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better)
Linux 5.10.130: 2.091 (SE +/- 0.010, N = 3; min 2.07 / max 2.11; MIN: 1.99 / MAX: 3.26)
Linux 5.15.83: 2.071 (SE +/- 0.009, N = 3; min 2.05 / max 2.08; MIN: 1.99 / MAX: 4.73)
Linux 6.1: 2.083 (SE +/- 0.009, N = 9; min 2.06 / max 2.14; MIN: 1.98 / MAX: 6.07)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: googlenet (ms, fewer is better)
Linux 5.10.130: 11.57 (SE +/- 0.05, N = 3; min 11.51 / max 11.68; MIN: 11.39 / MAX: 11.86)
Linux 5.15.83: 11.46 (SE +/- 0.02, N = 6; min 11.4 / max 11.51; MIN: 11.27 / MAX: 12.43)
Linux 6.1: 11.48 (SE +/- 0.02, N = 3; min 11.46 / max 11.51; MIN: 11.33 / MAX: 12.45)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better)
  Linux 5.10.130: 2.475 (SE +/- 0.003, N = 3)
  Linux 5.15.83: 2.483 (SE +/- 0.005, N = 3)
  Linux 6.1: 2.460 (SE +/- 0.005, N = 3)
  Per-run min/avg/max: 5.10.130: 2.47 / 2.47 / 2.48; 5.15.83: 2.47 / 2.48 / 2.49; 6.1: 2.45 / 2.46 / 2.47
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam (Seconds, fewer is better)
  Linux 5.10.130: 152.41 (SE +/- 0.18, N = 3)
  Linux 5.15.83: 151.82 (SE +/- 0.14, N = 3)
  Linux 6.1: 153.23 (SE +/- 0.17, N = 3)
  Per-run min/avg/max: 5.10.130: 152.18 / 152.41 / 152.77; 5.15.83: 151.53 / 151.82 / 151.99; 6.1: 153 / 153.23 / 153.57

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M (H/s, more is better)
  Linux 5.10.130: 6794.5 (SE +/- 3.14, N = 3)
  Linux 5.15.83: 6829.8 (SE +/- 23.74, N = 3)
  Linux 6.1: 6767.3 (SE +/- 54.19, N = 3)
  Per-run min/avg/max: 5.10.130: 6790.1 / 6794.53 / 6800.6; 5.15.83: 6788 / 6829.83 / 6870.2; 6.1: 6659.1 / 6767.27 / 6827.2
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
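The three kernels land within about one percent of each other here. As a small sketch of the percent-difference computation used when comparing such averages (values copied from the Monero result above):

```python
def pct_delta(baseline, other):
    """Percentage change of `other` relative to `baseline`."""
    return (other - baseline) / baseline * 100.0

base = 6794.5  # Linux 5.10.130 average hashrate (H/s) from the result above
print(f"Linux 5.15.83 vs 5.10.130: {pct_delta(base, 6829.8):+.2f}%")  # -> +0.52%
print(f"Linux 6.1 vs 5.10.130: {pct_delta(base, 6767.3):+.2f}%")      # -> -0.40%
```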

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, more is better)
  Linux 5.10.130: 3.31 (SE +/- 0.01, N = 3)
  Linux 5.15.83: 3.30 (SE +/- 0.02, N = 3)
  Linux 6.1: 3.28 (SE +/- 0.01, N = 3)
  Per-run min/avg/max: 5.10.130: 3.29 / 3.31 / 3.34; 5.15.83: 3.28 / 3.3 / 3.33; 6.1: 3.27 / 3.28 / 3.29
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.6 - VGR Performance Metric (more is better)
  Linux 5.10.130: 167771
  Linux 5.15.83: 166852
  Linux 6.1: 168334
  1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -pthread -ldl -lm

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline (Seconds, fewer is better)
  Linux 5.10.130: 114.81 (SE +/- 0.46, N = 3)
  Linux 5.15.83: 115.53 (SE +/- 0.35, N = 3)
  Linux 6.1: 115.83 (SE +/- 0.67, N = 3)
  Per-run min/avg/max: 5.10.130: 114 / 114.81 / 115.6; 5.15.83: 114.95 / 115.53 / 116.18; 6.1: 114.63 / 115.83 / 116.93

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  Linux 5.10.130: 758.03 (SE +/- 1.03, N = 3)
  Linux 5.15.83: 759.35 (SE +/- 2.20, N = 3)
  Linux 6.1: 764.74 (SE +/- 4.45, N = 3)
  Per-run min/avg/max: 5.10.130: 755.97 / 758.03 / 759.11; 5.15.83: 755.93 / 759.35 / 763.46; 6.1: 757.79 / 764.74 / 773.03

oneDNN

oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Linux 5.10.130: 0.313238 (SE +/- 0.000694, N = 3; MIN 0.3)
  Linux 5.15.83: 0.316005 (SE +/- 0.001833, N = 3; MIN 0.3)
  Linux 6.1: 0.313975 (SE +/- 0.000082, N = 3; MIN 0.3)
  Per-run min/avg/max: 5.10.130: 0.31 / 0.31 / 0.31; 5.15.83: 0.31 / 0.32 / 0.32; 6.1: 0.31 / 0.31 / 0.31
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Linux 5.10.130: 7.046 (SE +/- 0.057, N = 3)
  Linux 5.15.83: 6.985 (SE +/- 0.016, N = 3)
  Linux 6.1: 7.024 (SE +/- 0.044, N = 3)
  Per-run min/avg/max: 5.10.130: 6.96 / 7.05 / 7.15; 5.15.83: 6.97 / 6.99 / 7.02; 6.1: 6.96 / 7.02 / 7.11
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, more is better)
  Linux 5.10.130: 126.9 (SE +/- 0.13, N = 3)
  Linux 5.15.83: 128.0 (SE +/- 0.18, N = 3)
  Linux 6.1: 127.2 (SE +/- 0.15, N = 3)
  Per-run min/avg/max: 5.10.130: 126.6 / 126.87 / 127; 5.15.83: 127.7 / 128.03 / 128.3; 6.1: 126.9 / 127.2 / 127.4
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better)
  Linux 5.10.130: 34.52 (SE +/- 0.07, N = 3)
  Linux 5.15.83: 34.82 (SE +/- 0.15, N = 3)
  Linux 6.1: 34.72 (SE +/- 0.20, N = 3)
  Per-run min/avg/max: 5.10.130: 34.4 / 34.52 / 34.64; 5.15.83: 34.53 / 34.82 / 35.06; 6.1: 34.39 / 34.72 / 35.09
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, fewer is better)
  Linux 5.10.130: 18.83 (SE +/- 0.15, N = 3; MIN 18.43 / MAX 58.94)
  Linux 5.15.83: 18.67 (SE +/- 0.02, N = 6; MIN 18.39 / MAX 21.55)
  Linux 6.1: 18.70 (SE +/- 0.03, N = 3; MIN 18.43 / MAX 19.5)
  Per-run min/avg/max: 5.10.130: 18.67 / 18.83 / 19.13; 5.15.83: 18.63 / 18.67 / 18.74; 6.1: 18.64 / 18.7 / 18.75
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Atomic (Bogo Ops/s, more is better)
  Linux 5.10.130: 223731.70 (SE +/- 1447.49, N = 3)
  Linux 5.15.83: 225144.33 (SE +/- 1620.65, N = 3)
  Linux 6.1: 225639.32 (SE +/- 3644.44, N = 3)
  Per-run min/avg/max: 5.10.130: 221982.34 / 223731.7 / 226604.01; 5.15.83: 222524.49 / 225144.33 / 228107.08; 6.1: 219557.03 / 225639.32 / 232158.98
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, more is better)
  Linux 5.10.130: 96820 (SE +/- 656.84, N = 3)
  Linux 5.15.83: 96663 (SE +/- 167.46, N = 3)
  Linux 6.1: 96010 (SE +/- 615.39, N = 3)
  Per-run min/avg/max: 5.10.130: 95640 / 96820 / 97910; 5.15.83: 96370 / 96663.33 / 96950; 6.1: 94970 / 96010 / 97100
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better)
  Linux 5.10.130: 126.50 (SE +/- 0.46, N = 3)
  Linux 5.15.83: 127.53 (SE +/- 0.62, N = 3)
  Linux 6.1: 126.76 (SE +/- 0.29, N = 3)
  Per-run min/avg/max: 5.10.130: 125.84 / 126.5 / 127.39; 5.15.83: 126.31 / 127.53 / 128.26; 6.1: 126.2 / 126.76 / 127.16
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Execution Time (Seconds, fewer is better)
  Linux 5.10.130: 1598.70
  Linux 5.15.83: 1611.59
  Linux 6.1: 1607.40
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  Linux 5.10.130: 104.16 (SE +/- 0.33, N = 3)
  Linux 5.15.83: 104.01 (SE +/- 0.39, N = 3)
  Linux 6.1: 104.83 (SE +/- 0.93, N = 3)
  Per-run min/avg/max: 5.10.130: 103.51 / 104.16 / 104.52; 5.15.83: 103.25 / 104.01 / 104.51; 6.1: 103.28 / 104.83 / 106.51

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better)
  Linux 5.10.130: 625.50 (SE +/- 0.68, N = 3)
  Linux 5.15.83: 623.36 (SE +/- 0.90, N = 3)
  Linux 6.1: 620.61 (SE +/- 1.31, N = 3)
  Per-run min/avg/max: 5.10.130: 624.44 / 625.5 / 626.76; 5.15.83: 621.57 / 623.36 / 624.36; 6.1: 618.26 / 620.61 / 622.77
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless (MP/s, more is better)
  Linux 5.10.130: 1.27 (SE +/- 0.00, N = 3)
  Linux 5.15.83: 1.28 (SE +/- 0.00, N = 3)
  Linux 6.1: 1.27 (SE +/- 0.00, N = 3)
  Per-run min/avg/max: 5.10.130: 1.27 / 1.27 / 1.27; 5.15.83: 1.28 / 1.28 / 1.28; 6.1: 1.27 / 1.27 / 1.27
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm -pthread
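The WebP throughput is reported in MP/s against the 6000 x 4000 pixel sample input, i.e. 24 megapixels per image. A quick sketch of converting that throughput into a per-image encode time:

```python
def encode_seconds(mp_per_s, width=6000, height=4000):
    """Approximate seconds to encode one image at a given MP/s throughput."""
    megapixels = width * height / 1_000_000  # 24.0 MP for the sample JPEG
    return megapixels / mp_per_s

# ~1.27 MP/s at Quality 100, Lossless works out to roughly 19 seconds per image
print(round(encode_seconds(1.27), 1))  # -> 18.9
```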

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, more is better)
  Linux 5.10.130: 279.8 (SE +/- 0.30, N = 3)
  Linux 5.15.83: 282.0 (SE +/- 0.09, N = 3)
  Linux 6.1: 280.3 (SE +/- 0.15, N = 3)
  Per-run min/avg/max: 5.10.130: 279.2 / 279.77 / 280.2; 5.15.83: 281.9 / 282.03 / 282.2; 6.1: 280 / 280.3 / 280.5
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, more is better)
  Linux 5.10.130: 307.0 (SE +/- 0.25, N = 3)
  Linux 5.15.83: 307.9 (SE +/- 0.15, N = 3)
  Linux 6.1: 305.5 (SE +/- 0.09, N = 3)
  Per-run min/avg/max: 5.10.130: 306.7 / 307 / 307.5; 5.15.83: 307.6 / 307.9 / 308.1; 6.1: 305.3 / 305.47 / 305.6
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better)
  Linux 5.10.130: 9.5996 (SE +/- 0.0301, N = 3)
  Linux 5.15.83: 9.6141 (SE +/- 0.0357, N = 3)
  Linux 6.1: 9.5393 (SE +/- 0.0848, N = 3)
  Per-run min/avg/max: 5.10.130: 9.57 / 9.6 / 9.66; 5.15.83: 9.57 / 9.61 / 9.68; 6.1: 9.39 / 9.54 / 9.68

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better)
  Linux 5.10.130: 25.53 (SE +/- 0.03, N = 3; MIN 13.67 / MAX 47.14)
  Linux 5.15.83: 25.62 (SE +/- 0.04, N = 3; MIN 14.62 / MAX 46.69)
  Linux 6.1: 25.73 (SE +/- 0.06, N = 3; MIN 14.41 / MAX 47.88)
  Per-run min/avg/max: 5.10.130: 25.48 / 25.53 / 25.57; 5.15.83: 25.58 / 25.62 / 25.7; 6.1: 25.64 / 25.73 / 25.83
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, fewer is better)
  Linux 5.10.130: 38.54
  Linux 5.15.83: 38.24
  Linux 6.1: 38.36
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100 (MP/s, more is better)
  Linux 5.10.130: 9.00 (SE +/- 0.00, N = 3)
  Linux 5.15.83: 9.02 (SE +/- 0.00, N = 3)
  Linux 6.1: 8.95 (SE +/- 0.01, N = 3)
  Per-run min/avg/max: 5.10.130: 9 / 9 / 9.01; 5.15.83: 9.02 / 9.02 / 9.02; 6.1: 8.93 / 8.95 / 8.97
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm -pthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better)
  Linux 5.10.130: 15259.86 (SE +/- 110.56, N = 3)
  Linux 5.15.83: 15258.56 (SE +/- 108.60, N = 3)
  Linux 6.1: 15144.32 (SE +/- 96.54, N = 3)
  Per-run min/avg/max: 5.10.130: 15083.14 / 15259.86 / 15463.32; 5.15.83: 15103.93 / 15258.56 / 15467.98; 6.1: 14960.03 / 15144.32 / 15286.36
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

spaCy

spaCy is a leading open-source Python library for advanced natural language processing (NLP). This test profile times the spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, more is better)
  Linux 5.10.130: 11314 (SE +/- 15.62, N = 3)
  Linux 5.15.83: 11236 (SE +/- 10.74, N = 3)
  Linux 6.1: 11231 (SE +/- 18.76, N = 3)
  Per-run min/avg/max: 5.10.130: 11292 / 11313.67 / 11344; 5.15.83: 11216 / 11235.67 / 11253; 6.1: 11198 / 11230.67 / 11263

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, more is better)
  Linux 5.10.130: 218830 (SE +/- 820.33, N = 3)
  Linux 5.15.83: 218493 (SE +/- 6.67, N = 3)
  Linux 6.1: 217237 (SE +/- 258.61, N = 3)
  Per-run min/avg/max: 5.10.130: 217610 / 218830 / 220390; 5.15.83: 218480 / 218493.33 / 218500; 6.1: 216730 / 217236.67 / 217580
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

oneDNN

oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Linux 5.10.130: 0.956399 (SE +/- 0.001517, N = 3; MIN 0.93)
  Linux 5.15.83: 0.961682 (SE +/- 0.003629, N = 3; MIN 0.93)
  Linux 6.1: 0.963400 (SE +/- 0.002932, N = 3; MIN 0.93)
  Per-run min/avg/max: 5.10.130: 0.95 / 0.96 / 0.96; 5.15.83: 0.96 / 0.96 / 0.97; 6.1: 0.96 / 0.96 / 0.97
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  Linux 5.10.130: 5.56 (SE +/- 0.01, N = 3; MIN 5.44 / MAX 6.03)
  Linux 5.15.83: 5.53 (SE +/- 0.01, N = 6; MIN 5.41 / MAX 10.99)
  Linux 6.1: 5.57 (SE +/- 0.01, N = 3; MIN 5.49 / MAX 6.18)
  Per-run min/avg/max: 5.10.130: 5.55 / 5.56 / 5.57; 5.15.83: 5.51 / 5.53 / 5.55; 6.1: 5.56 / 5.57 / 5.58
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds, fewer is better)
  Linux 5.10.130: 159.75 (SE +/- 0.08, N = 3)
  Linux 5.15.83: 159.15 (SE +/- 0.22, N = 3)
  Linux 6.1: 160.27 (SE +/- 0.29, N = 3)
  Per-run min/avg/max: 5.10.130: 159.59 / 159.75 / 159.88; 5.15.83: 158.73 / 159.15 / 159.49; 6.1: 159.89 / 160.27 / 160.84

OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, fewer is better)
  Linux 5.10.130: 443.72 (SE +/- 0.39, N = 3)
  Linux 5.15.83: 445.17 (SE +/- 0.43, N = 3)
  Linux 6.1: 446.66 (SE +/- 0.46, N = 3)
  Per-run min/avg/max: 5.10.130: 442.96 / 443.72 / 444.26; 5.15.83: 444.65 / 445.17 / 446.03; 6.1: 445.74 / 446.66 / 447.22

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Highest Compression (MP/s, more is better)
  Linux 5.10.130: 3.07 (SE +/- 0.02, N = 3)
  Linux 5.15.83: 3.09 (SE +/- 0.00, N = 3)
  Linux 6.1: 3.08 (SE +/- 0.00, N = 3)
  Per-run min/avg/max: 5.10.130: 3.03 / 3.07 / 3.1; 5.15.83: 3.09 / 3.09 / 3.1; 6.1: 3.07 / 3.08 / 3.08
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm -pthread

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 500M (Seconds, fewer is better)
  Linux 5.10.130: 11.35 (SE +/- 0.01, N = 3)
  Linux 5.15.83: 11.37 (SE +/- 0.01, N = 3)
  Linux 6.1: 11.29 (SE +/- 0.01, N = 3)
  Per-run min/avg/max: 5.10.130: 11.33 / 11.35 / 11.37; 5.15.83: 11.35 / 11.37 / 11.37; 6.1: 11.28 / 11.29 / 11.31

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, fewer is better)
  Linux 5.10.130: 2400.02 (SE +/- 6.79, N = 3; MIN 1971.05 / MAX 2519.43)
  Linux 5.15.83: 2401.74 (SE +/- 6.63, N = 3; MIN 2074.14 / MAX 2532.54)
  Linux 6.1: 2415.31 (SE +/- 4.48, N = 3; MIN 2186.01 / MAX 2533.33)
  Per-run min/avg/max: 5.10.130: 2387.12 / 2400.02 / 2410.14; 5.15.83: 2388.48 / 2401.74 / 2408.5; 6.1: 2407.89 / 2415.31 / 2423.36
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, more is better)
  Linux 5.10.130: 315913 (SE +/- 2642.36, N = 3)
  Linux 5.15.83: 313917 (SE +/- 661.12, N = 3)
  Linux 6.1: 315723 (SE +/- 2222.66, N = 3)
  Per-run min/avg/max: 5.10.130: 310630 / 315913.33 / 318660; 5.15.83: 312600 / 313916.67 / 314680; 6.1: 311350 / 315723.33 / 318600
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Linux 5.10.130: 911.82 (SE +/- 1.27, N = 3; MIN 907.5)
  Linux 5.15.83: 908.19 (SE +/- 0.73, N = 3; MIN 904.52)
  Linux 6.1: 913.94 (SE +/- 2.67, N = 3; MIN 907.08)
  Per-run min/avg/max: 5.10.130: 909.85 / 911.82 / 914.19; 5.15.83: 906.81 / 908.19 / 909.3; 6.1: 908.9 / 913.94 / 917.99
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: CPU Stress (Bogo Ops/s, more is better):
  Linux 5.10.130: 34273.52 (SE +/- 248.24, N = 14; runs 33926.37 - 37488.25)
  Linux 5.15.83: 34238.77 (SE +/- 3.47, N = 3; runs 34233.28 - 34245.2)
  Linux 6.1: 34059.43 (SE +/- 27.28, N = 3; runs 34026.37 - 34113.54)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. NekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. NekRS is part of Nek5000, developed by the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large-core-count HPC servers and may otherwise be very time consuming. Learn more via the OpenBenchmarking.org test page.

nekRS 22.0 - Input: TurboPipe Periodic (FLOP/s, more is better):
  Linux 5.10.130: 70649766667 (SE +/- 144216808.24, N = 3; runs 70472800000 - 70935500000)
  Linux 5.15.83: 70923633333 (SE +/- 219477639.96, N = 3; runs 70494200000 - 71217100000)
  Linux 6.1: 70481500000 (SE +/- 48731133.10, N = 3; runs 70400200000 - 70568700000)
  1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -pthread -lmpi_cxx -lmpi

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, more is better):
  Linux 5.10.130: 3.25 (SE +/- 0.01, N = 3; runs 3.24 - 3.28)
  Linux 5.15.83: 3.25 (SE +/- 0.01, N = 3; runs 3.24 - 3.27)
  Linux 6.1: 3.23 (SE +/- 0.01, N = 3; runs 3.22 - 3.25)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
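The Windowed Gaussian detector timed below is conceptually simple: fit a Gaussian to a sliding window of recent values and flag points that land far outside it. A minimal sketch of that idea (not NAB's actual implementation; the window size and threshold here are illustrative):

```python
from collections import deque
from statistics import mean, stdev

def windowed_gaussian(values, window=50, threshold=4.0):
    """Yield (value, is_anomaly) pairs, flagging points more than
    `threshold` standard deviations from the sliding-window mean."""
    recent = deque(maxlen=window)
    for v in values:
        if len(recent) >= 2:
            mu, sigma = mean(recent), stdev(recent)
            # Floor sigma so a perfectly flat window still flags deviations.
            anomaly = abs(v - mu) > threshold * max(sigma, 1e-9)
        else:
            anomaly = False  # not enough history yet
        yield v, anomaly
        recent.append(v)

stream = [10.0] * 100 + [50.0] + [10.0] * 20
flags = [a for _, a in windowed_gaussian(stream)]
print(flags.index(True))  # -> 100: only the spike is flagged
```

NAB's score then rewards detectors for flagging anomalies early within labeled windows; this benchmark only times the detector pass itself.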

Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian (Seconds, fewer is better):
  Linux 5.10.130: 9.070 (SE +/- 0.038, N = 3; runs 9.01 - 9.14)
  Linux 5.15.83: 9.018 (SE +/- 0.045, N = 3; runs 8.94 - 9.09)
  Linux 6.1: 9.073 (SE +/- 0.037, N = 3; runs 9.03 - 9.15)

OpenVINO

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, fewer is better):
  Linux 5.10.130: 2436.30 (SE +/- 6.93, N = 3; runs 2422.47 - 2443.91; MIN: 2086.23 / MAX: 2572.2)
  Linux 5.15.83: 2441.67 (SE +/- 9.26, N = 3; runs 2423.32 - 2452.96; MIN: 2123.66 / MAX: 2621.99)
  Linux 6.1: 2451.10 (SE +/- 5.75, N = 3; runs 2439.73 - 2458.25; MIN: 2080.43 / MAX: 2583.81)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, fewer is better):
  Linux 5.10.130: 160.27 (SE +/- 0.96, N = 3; runs 158.99 - 162.14; MIN: 158.5 / MAX: 166.42)
  Linux 5.15.83: 161.23 (SE +/- 0.40, N = 6; runs 160.15 - 162.61; MIN: 158.43 / MAX: 179.56)
  Linux 6.1: 160.39 (SE +/- 0.61, N = 3; runs 159.16 - 161.01; MIN: 158.59 / MAX: 172.13)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

oneDNN

oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Linux 5.10.130: 4.29153 (SE +/- 0.00912, N = 3; runs 4.28 - 4.31; MIN: 4.25)
  Linux 5.15.83: 4.31178 (SE +/- 0.01348, N = 3; runs 4.29 - 4.33; MIN: 4.24)
  Linux 6.1: 4.28618 (SE +/- 0.00632, N = 3; runs 4.28 - 4.3; MIN: 4.24)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Mesh Time (Seconds, fewer is better):
  Linux 5.10.130: 241.49
  Linux 5.15.83: 242.25
  Linux 6.1: 242.92
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s, more is better):
  Linux 5.10.130: 135103 (SE +/- 169.54, N = 3; runs 134900 - 135440)
  Linux 5.15.83: 134833 (SE +/- 3.33, N = 3; runs 134830 - 134840)
  Linux 6.1: 134320 (SE +/- 283.78, N = 3; runs 133960 - 134880)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also provides pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
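The images/sec figures below are plain throughput: images processed divided by wall-clock time, with warm-up batches excluded so one-time setup cost does not skew the result. A schematic of how tf_cnn_benchmarks-style numbers are derived, using a stand-in workload rather than TensorFlow itself (the batch counts and the dummy model are illustrative):

```python
import time

def measure_images_per_sec(run_batch, batch_size, num_batches=30, warmup=5):
    """Time `num_batches` calls to run_batch after `warmup` untimed calls,
    returning throughput in images/sec."""
    for _ in range(warmup):          # warm-up: exclude graph-build/cache costs
        run_batch(batch_size)
    start = time.perf_counter()
    for _ in range(num_batches):
        run_batch(batch_size)
    elapsed = time.perf_counter() - start
    return batch_size * num_batches / elapsed

# Stand-in "model" costing roughly 1 ms per batch (not a real network):
rate = measure_images_per_sec(lambda n: time.sleep(0.001), batch_size=64)
print(f"{rate:.0f} images/sec")
```

This is why larger batch sizes usually report higher images/sec: per-batch overhead is amortized over more images.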

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, more is better):
  Linux 5.10.130: 125.79 (SE +/- 0.05, N = 3; runs 125.71 - 125.87)
  Linux 5.15.83: 126.29 (SE +/- 0.14, N = 3; runs 126.08 - 126.57)
  Linux 6.1: 126.49 (SE +/- 0.18, N = 3; runs 126.14 - 126.73)

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, more is better):
  Linux 5.10.130: 14497 (SE +/- 46.67, N = 3; runs 14450 - 14590)
  Linux 5.15.83: 14500 (SE +/- 50.00, N = 3; runs 14450 - 14600)
  Linux 6.1: 14420 (no error data reported)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Stress-NG

Stress-NG 0.14.06 - Test: Glibc C String Functions (Bogo Ops/s, more is better):
  Linux 5.10.130: 911917.94 (SE +/- 6508.49, N = 3; runs 904104.5 - 924840.97)
  Linux 5.15.83: 916397.13 (SE +/- 3709.52, N = 3; runs 909197.6 - 921548.25)
  Linux 6.1: 916843.52 (SE +/- 8664.27, N = 3; runs 906229.6 - 934012.91)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.
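For reference, the "PGO + LTO Optimized" configuration timed below corresponds to CPython's standard optimized-build switches; a build along these lines (upstream configure flags; the test profile's exact invocation may differ) looks like:

```shell
# Configure CPython with profile-guided optimization and link-time optimization
./configure --enable-optimizations --with-lto
# The build runs the PGO training workload itself, so this is the timed step
make -j"$(nproc)"
```

PGO builds compile twice (instrumented, then optimized using the collected profiles), which is why this configuration takes several times longer than a plain release build.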

Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, fewer is better):
  Linux 5.10.130: 383.51
  Linux 5.15.83: 383.96
  Linux 6.1: 385.50

Blender

Blender is an open-source 3D creation and modeling software project. This test measures Blender's Cycles rendering performance with various sample files. GPU compute via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs and Intel oneAPI for Intel graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better):
  Linux 5.10.130: 318.28 (SE +/- 0.37, N = 3; runs 317.55 - 318.68)
  Linux 5.15.83: 317.07 (SE +/- 0.22, N = 3; runs 316.66 - 317.41)
  Linux 6.1: 318.71 (SE +/- 0.49, N = 3; runs 318.07 - 319.67)

oneDNN

oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Linux 5.10.130: 3.10144 (SE +/- 0.00480, N = 3; runs 3.09 - 3.11; MIN: 3.06)
  Linux 5.15.83: 3.08561 (SE +/- 0.01196, N = 3; runs 3.07 - 3.11; MIN: 3.05)
  Linux 6.1: 3.09433 (SE +/- 0.00478, N = 3; runs 3.09 - 3.1; MIN: 3.06)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Linux 5.10.130: 3.30606 (SE +/- 0.00465, N = 3; runs 3.3 - 3.31; MIN: 3.27)
  Linux 5.15.83: 3.29633 (SE +/- 0.00438, N = 3; runs 3.29 - 3.31; MIN: 3.27)
  Linux 6.1: 3.31244 (SE +/- 0.00768, N = 3; runs 3.3 - 3.32; MIN: 3.26)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, more is better):
  Linux 5.10.130: 123.3 (SE +/- 0.03, N = 3; runs 123.2 - 123.3)
  Linux 5.15.83: 123.9 (SE +/- 0.06, N = 3; runs 123.8 - 124)
  Linux 6.1: 123.3 (SE +/- 0.15, N = 3; runs 123.1 - 123.6)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, more is better):
  Linux 5.10.130: 10.66 (SE +/- 0.02, N = 3; runs 10.62 - 10.68)
  Linux 5.15.83: 10.64 (SE +/- 0.01, N = 3; runs 10.62 - 10.66)
  Linux 6.1: 10.61 (SE +/- 0.01, N = 3; runs 10.59 - 10.63)
  1. (CXX) g++ options: -O3 -pthread -lm -ldl

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test profile encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Linux 5.10.130: 95.49 (SE +/- 0.40, N = 3; runs 94.93 - 96.26)
  Linux 5.15.83: 95.04 (SE +/- 0.30, N = 3; runs 94.59 - 95.62)
  Linux 6.1: 95.15 (SE +/- 0.46, N = 3; runs 94.44 - 96.02)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, more is better):
  Linux 5.10.130: 640.06 (SE +/- 0.20, N = 3; runs 639.67 - 640.36)
  Linux 5.15.83: 641.61 (SE +/- 1.51, N = 3; runs 640.03 - 644.63)
  Linux 6.1: 638.64 (SE +/- 0.34, N = 3; runs 638.21 - 639.32)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.
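Y-Cruncher reaches billions of digits with heavily optimized arbitrary-precision series evaluation (Chudnovsky-style formulas). The underlying fixed-point idea can be sketched, far less efficiently, in pure Python using Machin's formula; this is illustrative only and unrelated to y-cruncher's actual internals:

```python
def arctan_inv(x, one):
    # one * arctan(1/x), evaluated as an alternating Taylor series
    # in fixed-point integer arithmetic.
    total, term, k, sign = 0, one // x, 1, 1
    x2 = x * x
    while term:
        total += sign * (term // k)
        term //= x2
        k += 2
        sign = -sign
    return total

def pi_digits(n):
    # floor(pi * 10**n) via Machin's formula: pi = 16*atan(1/5) - 4*atan(1/239),
    # computed with 10 guard digits to absorb truncation error.
    one = 10 ** (n + 10)
    pi = 16 * arctan_inv(5, one) - 4 * arctan_inv(239, one)
    return pi // 10 ** 10

print(pi_digits(20))  # -> 314159265358979323846
```

The benchmark's scaling comes from doing this kind of big-integer arithmetic with FFT-based multiplication across all cores, where memory bandwidth matters as much as raw compute.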

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 1B (Seconds, fewer is better):
  Linux 5.10.130: 24.45 (SE +/- 0.02, N = 3; runs 24.41 - 24.48)
  Linux 5.15.83: 24.56 (SE +/- 0.03, N = 3; runs 24.52 - 24.63)
  Linux 6.1: 24.46 (SE +/- 0.02, N = 3; runs 24.42 - 24.49)

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, more is better):
  Linux 5.10.130: 40.23 (SE +/- 0.02, N = 3; runs 40.2 - 40.27)
  Linux 5.15.83: 40.41 (SE +/- 0.04, N = 3; runs 40.33 - 40.45)
  Linux 6.1: 40.32 (SE +/- 0.03, N = 3; runs 40.29 - 40.38)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  Linux 5.10.130: 104.00 (SE +/- 0.59, N = 3; runs 102.81 - 104.61)
  Linux 5.15.83: 104.04 (SE +/- 0.31, N = 3; runs 103.72 - 104.65)
  Linux 6.1: 103.58 (SE +/- 0.22, N = 3; runs 103.27 - 104.02)

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better):
  Linux 5.10.130: 9.6153 (SE +/- 0.0549, N = 3; runs 9.56 - 9.73)
  Linux 5.15.83: 9.6111 (SE +/- 0.0282, N = 3; runs 9.55 - 9.64)
  Linux 6.1: 9.6532 (SE +/- 0.0209, N = 3; runs 9.61 - 9.68)

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Linux 5.10.130: 1621.84 (SE +/- 2.14, N = 3; runs 1619.03 - 1626.03; MIN: 1617.11)
  Linux 5.15.83: 1614.87 (SE +/- 3.87, N = 3; runs 1609.9 - 1622.49; MIN: 1607.67)
  Linux 6.1: 1618.86 (SE +/- 1.70, N = 3; runs 1616.59 - 1622.18; MIN: 1614.3)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, more is better):
  Linux 5.10.130: 491.90 (SE +/- 3.92, N = 3; runs 484.07 - 496.29)
  Linux 5.15.83: 489.83 (SE +/- 5.31, N = 3; runs 479.39 - 496.76)
  Linux 6.1: 491.85 (SE +/- 2.46, N = 3; runs 487.05 - 495.15)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, more is better):
  Linux 5.10.130: 9492.2 (SE +/- 67.20, N = 3; runs 9362.3 - 9587)
  Linux 5.15.83: 9531.3 (SE +/- 34.41, N = 3; runs 9479.8 - 9596.6)
  Linux 6.1: 9527.7 (SE +/- 49.24, N = 3; runs 9452.2 - 9620.2)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, more is better):
  Linux 5.10.130: 374.94 (SE +/- 4.81, N = 3; runs 369.39 - 384.52)
  Linux 5.15.83: 373.55 (SE +/- 4.11, N = 3; runs 369.26 - 381.77)
  Linux 6.1: 373.45 (SE +/- 4.81, N = 3; runs 368.44 - 383.07)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, more is better):
  Linux 5.10.130: 243.30 (SE +/- 0.06, N = 3; runs 243.23 - 243.43)
  Linux 5.15.83: 242.73 (SE +/- 0.39, N = 3; runs 242.16 - 243.48)
  Linux 6.1: 242.34 (SE +/- 0.27, N = 3; runs 241.8 - 242.66)

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, more is better):
  Linux 5.10.130: 41.90 (SE +/- 0.02, N = 3; runs 41.86 - 41.94)
  Linux 5.15.83: 42.05 (SE +/- 0.02, N = 3; runs 42.01 - 42.08)
  Linux 6.1: 42.06 (SE +/- 0.05, N = 3; runs 41.97 - 42.14)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  Linux 5.10.130: 108.55 (SE +/- 0.05, N = 3; runs 108.45 - 108.64)
  Linux 5.15.83: 108.34 (SE +/- 0.53, N = 3; runs 107.59 - 109.36)
  Linux 6.1: 108.74 (SE +/- 0.47, N = 3; runs 107.92 - 109.56)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  Linux 5.10.130: 73.68 (SE +/- 0.04, N = 3; runs 73.61 - 73.74)
  Linux 5.15.83: 73.82 (SE +/- 0.36, N = 3; runs 73.13 - 74.33)
  Linux 6.1: 73.55 (SE +/- 0.32, N = 3; runs 73 - 74.1)

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Read (Op/s, more is better):
  Linux 5.10.130: 65094239 (SE +/- 547664.44, N = 3; runs 64021873 - 65823637)
  Linux 5.15.83: 65233649 (SE +/- 139131.71, N = 3; runs 65055565 - 65507860)
  Linux 6.1: 65337175 (SE +/- 360247.64, N = 3; runs 64720246 - 65967950)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better):
  Linux 5.10.130: 417.71 (SE +/- 0.17, N = 3; runs 417.51 - 418.04; MIN: 260.74 / MAX: 433.77)
  Linux 5.15.83: 416.95 (SE +/- 0.53, N = 3; runs 415.91 - 417.68; MIN: 240.72 / MAX: 434.4)
  Linux 6.1: 416.16 (SE +/- 0.59, N = 3; runs 415.28 - 417.27; MIN: 211.14 / MAX: 443.45)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

NCNN

NCNN 20220729 - Target: CPU - Model: alexnet (ms, fewer is better):
  Linux 5.10.130: 5.46 (SE +/- 0.01, N = 3; runs 5.44 - 5.48; MIN: 5.35 / MAX: 5.85)
  Linux 5.15.83: 5.48 (SE +/- 0.08, N = 6; runs 5.38 - 5.88; MIN: 5.3 / MAX: 6.07)
  Linux 6.1: 5.46 (SE +/- 0.01, N = 3; runs 5.43 - 5.47; MIN: 5.35 / MAX: 9.83)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, more is better):
  Linux 5.10.130: 166.55 (SE +/- 0.44, N = 3; runs 166.05 - 167.43)
  Linux 5.15.83: 166.90 (SE +/- 0.13, N = 3; runs 166.65 - 167.04)
  Linux 6.1: 166.31 (SE +/- 0.22, N = 3; runs 165.87 - 166.56)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better):
  Linux 5.10.130: 86432 (SE +/- 259.98, N = 3; runs 85940 - 86824)
  Linux 5.15.83: 86735 (SE +/- 150.80, N = 3; runs 86523 - 87027)
  Linux 6.1: 86680 (SE +/- 264.89, N = 3; runs 86356 - 87205)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: Garlicoin (kH/s, more is better):
  Linux 5.10.130: 6572.81 (SE +/- 2.21, N = 3; runs 6570.17 - 6577.21)
  Linux 5.15.83: 6594.18 (SE +/- 11.67, N = 3; runs 6580.81 - 6617.42)
  Linux 6.1: 6571.66 (SE +/- 2.16, N = 3; runs 6567.38 - 6574.33)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  Linux 5.10.130: 86.66 (SE +/- 0.12, N = 3; Min: 86.47 / Max: 86.88)
  Linux 5.15.83:  86.94 (SE +/- 0.08, N = 3; Min: 86.81 / Max: 87.10)
  Linux 6.1:      86.88 (SE +/- 0.04, N = 3; Min: 86.82 / Max: 86.94)

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Linux 5.10.130: 909.63 (SE +/- 2.14, N = 3; Min: 905.88 / Max: 913.28; MIN: 903.56)
  Linux 5.15.83:  912.07 (SE +/- 1.01, N = 3; Min: 910.86 / Max: 914.07; MIN: 908.55)
  Linux 6.1:      909.15 (SE +/- 1.41, N = 3; Min: 906.59 / Max: 911.46; MIN: 904.93)
  Compiler notes: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Linux 5.10.130: 1.25271 (SE +/- 0.00257, N = 3; Min: 1.25 / Max: 1.26; MIN: 1.22)
  Linux 5.15.83:  1.25174 (SE +/- 0.00241, N = 3; Min: 1.25 / Max: 1.26; MIN: 1.22)
  Linux 6.1:      1.24871 (SE +/- 0.00376, N = 3; Min: 1.24 / Max: 1.26; MIN: 1.21)
  Compiler notes: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenVINO

This is a test of the Intel OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better)
  Linux 5.10.130: 19.11 (SE +/- 0.00, N = 3; Min: 19.11 / Max: 19.12)
  Linux 5.15.83:  19.15 (SE +/- 0.02, N = 3; Min: 19.12 / Max: 19.19)
  Linux 6.1:      19.17 (SE +/- 0.03, N = 3; Min: 19.13 / Max: 19.22)
  Compiler notes: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better)
  Linux 5.10.130: 106.13 (SE +/- 0.08, N = 3; Min: 106.03 / Max: 106.29)
  Linux 5.15.83:  106.38 (SE +/- 0.06, N = 3; Min: 106.26 / Max: 106.47)
  Linux 6.1:      106.46 (SE +/- 0.02, N = 3; Min: 106.43 / Max: 106.50)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, more is better)
  Linux 5.10.130: 98.6 (SE +/- 0.35, N = 3; Min: 98.2 / Max: 99.3)
  Linux 5.15.83:  98.6 (SE +/- 0.10, N = 3; Min: 98.5 / Max: 98.8)
  Linux 6.1:      98.3 (SE +/- 0.15, N = 3; Min: 98.1 / Max: 98.6)
  Compiler notes: (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  Linux 5.10.130: 219.36 (SE +/- 0.10, N = 3; Min: 219.20 / Max: 219.55)
  Linux 5.15.83:  219.01 (SE +/- 0.45, N = 3; Min: 218.12 / Max: 219.55)
  Linux 6.1:      218.69 (SE +/- 0.45, N = 3; Min: 217.81 / Max: 219.26)

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  Linux 5.10.130: 144.24 (SE +/- 0.43, N = 3; Min: 143.39 / Max: 144.74)
  Linux 5.15.83:  143.81 (SE +/- 0.31, N = 3; Min: 143.25 / Max: 144.33)
  Linux 6.1:      144.21 (SE +/- 0.38, N = 3; Min: 143.45 / Max: 144.64)

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  Linux 5.10.130: 13.27 (SE +/- 0.00, N = 3; Min: 13.27 / Max: 13.27)
  Linux 5.15.83:  13.31 (SE +/- 0.02, N = 3; Min: 13.26 / Max: 13.34)
  Linux 6.1:      13.27 (SE +/- 0.02, N = 3; Min: 13.25 / Max: 13.30)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, more is better)
  Linux 5.10.130: 43.50 (SE +/- 0.02, N = 3; Min: 43.47 / Max: 43.53)
  Linux 5.15.83:  43.41 (SE +/- 0.02, N = 3; Min: 43.37 / Max: 43.45)
  Linux 6.1:      43.37 (SE +/- 0.03, N = 3; Min: 43.32 / Max: 43.42)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, more is better)
  Linux 5.10.130: 194.36 (SE +/- 1.58, N = 3; Min: 192.57 / Max: 197.50)
  Linux 5.15.83:  194.88 (SE +/- 1.18, N = 3; Min: 193.60 / Max: 197.23)
  Linux 6.1:      194.30 (SE +/- 1.25, N = 3; Min: 192.67 / Max: 196.77)
  Compiler notes: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better)
  Linux 5.10.130: 75.26 (SE +/- 0.01, N = 3; Min: 75.25 / Max: 75.28)
  Linux 5.15.83:  75.04 (SE +/- 0.14, N = 3; Min: 74.86 / Max: 75.31)
  Linux 6.1:      75.26 (SE +/- 0.10, N = 3; Min: 75.08 / Max: 75.39)

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  Linux 5.10.130: 55.42 (SE +/- 0.15, N = 3; Min: 55.26 / Max: 55.72)
  Linux 5.15.83:  55.58 (SE +/- 0.12, N = 3; Min: 55.41 / Max: 55.82)
  Linux 6.1:      55.44 (SE +/- 0.16, N = 3; Min: 55.26 / Max: 55.76)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is composed of over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Contextual Anomaly Detector OSE (Seconds, fewer is better)
  Linux 5.10.130: 55.73 (SE +/- 0.29, N = 3; Min: 55.17 / Max: 56.15)
  Linux 5.15.83:  55.66 (SE +/- 0.25, N = 3; Min: 55.18 / Max: 56.02)
  Linux 6.1:      55.82 (SE +/- 0.15, N = 3; Min: 55.63 / Max: 56.11)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  Linux 5.10.130: 36.44 (SE +/- 0.02, N = 3; Min: 36.41 / Max: 36.47)
  Linux 5.15.83:  36.50 (SE +/- 0.07, N = 3; Min: 36.41 / Max: 36.65)
  Linux 6.1:      36.54 (SE +/- 0.08, N = 3; Min: 36.46 / Max: 36.70)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, more is better)
  Linux 5.10.130: 10.43 (SE +/- 0.01, N = 3; Min: 10.41 / Max: 10.44)
  Linux 5.15.83:  10.41 (SE +/- 0.01, N = 3; Min: 10.39 / Max: 10.43)
  Linux 6.1:      10.40 (SE +/- 0.02, N = 3; Min: 10.37 / Max: 10.43)
  Compiler notes: (CXX) g++ options: -O3 -pthread -lm -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  Linux 5.10.130: 92.21 (SE +/- 0.08, N = 3; Min: 92.05 / Max: 92.32)
  Linux 5.15.83:  91.99 (SE +/- 0.09, N = 3; Min: 91.82 / Max: 92.12)
  Linux 6.1:      92.05 (SE +/- 0.04, N = 3; Min: 91.99 / Max: 92.12)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better)
  Linux 5.10.130: 28409 (SE +/- 8.11, N = 3; Min: 28396 / Max: 28424)
  Linux 5.15.83:  28417 (SE +/- 21.40, N = 3; Min: 28393 / Max: 28460)
  Linux 6.1:      28479 (SE +/- 9.70, N = 3; Min: 28464 / Max: 28497)
  Compiler notes: (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better)
  Linux 5.10.130: 2178 (SE +/- 0.67, N = 3; Min: 2177 / Max: 2179)
  Linux 5.15.83:  2176
  Linux 6.1:      2173 (SE +/- 0.58, N = 3; Min: 2172 / Max: 2174)
  Compiler notes: (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 0 (Seconds, fewer is better)
  Linux 5.10.130: 145.01 (SE +/- 0.40, N = 3; Min: 144.40 / Max: 145.77)
  Linux 5.15.83:  144.71 (SE +/- 0.50, N = 3; Min: 144.13 / Max: 145.71)
  Linux 6.1:      145.04 (SE +/- 0.17, N = 3; Min: 144.78 / Max: 145.35)
  Compiler notes: (CXX) g++ options: -O3 -fPIC -lm

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better)
  Linux 5.10.130: 284699 (SE +/- 239.39, N = 3; Min: 284422 / Max: 285176)
  Linux 5.15.83:  284853 (SE +/- 192.14, N = 3; Min: 284534 / Max: 285198)
  Linux 6.1:      284235 (SE +/- 92.98, N = 3; Min: 284063 / Max: 284382)
  Compiler notes: (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, more is better)
  Linux 5.10.130: 278.63 (SE +/- 0.53, N = 3; Min: 277.89 / Max: 279.65)
  Linux 5.15.83:  279.05 (SE +/- 0.66, N = 3; Min: 278.30 / Max: 280.36)
  Linux 6.1:      279.22 (SE +/- 0.40, N = 3; Min: 278.81 / Max: 280.02)

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, fewer is better)
  Linux 5.10.130: 256.43 (SE +/- 0.38, N = 3; Min: 255.90 / Max: 257.17)
  Linux 5.15.83:  256.19 (SE +/- 0.19, N = 3; Min: 255.88 / Max: 256.54)
  Linux 6.1:      255.89 (SE +/- 0.19, N = 3; Min: 255.51 / Max: 256.16)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Default (MP/s, more is better)
  Linux 5.10.130: 14.37 (SE +/- 0.01, N = 3; Min: 14.36 / Max: 14.38)
  Linux 5.15.83:  14.40 (SE +/- 0.00, N = 3; Min: 14.40 / Max: 14.40)
  Linux 6.1:      14.38 (SE +/- 0.01, N = 3; Min: 14.36 / Max: 14.40)
  Compiler notes: (CC) gcc options: -fvisibility=hidden -O2 -lm -pthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better)
  Linux 5.10.130: 239935 (SE +/- 103.12, N = 3; Min: 239786 / Max: 240133)
  Linux 5.15.83:  240311 (SE +/- 62.00, N = 3; Min: 240187 / Max: 240374)
  Linux 6.1:      240430 (SE +/- 131.54, N = 3; Min: 240252 / Max: 240687)
  Compiler notes: (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, more is better)
  Linux 5.10.130: 130.21 (SE +/- 0.03, N = 3; Min: 130.15 / Max: 130.25)
  Linux 5.15.83:  130.40 (SE +/- 0.05, N = 3; Min: 130.35 / Max: 130.49)
  Linux 6.1:      130.47 (SE +/- 0.03, N = 3; Min: 130.43 / Max: 130.53)

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Read Random Write Random (Op/s, more is better)
  Linux 5.10.130: 2132608 (SE +/- 16550.67, N = 3; Min: 2104716 / Max: 2161991)
  Linux 5.15.83:  2129359 (SE +/- 8919.50, N = 3; Min: 2119001 / Max: 2147116)
  Linux 6.1:      2133528 (SE +/- 10651.17, N = 3; Min: 2117476 / Max: 2153682)
  Compiler notes: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, more is better)
  Linux 5.10.130: 211.25 (SE +/- 0.47, N = 3; Min: 210.32 / Max: 211.83)
  Linux 5.15.83:  211.47 (SE +/- 0.13, N = 3; Min: 211.29 / Max: 211.71)
  Linux 6.1:      211.06 (SE +/- 0.28, N = 3; Min: 210.50 / Max: 211.43)

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

Primesieve 8.0 - Length: 1e13 (Seconds, fewer is better)
  Linux 5.10.130: 174.72 (SE +/- 0.36, N = 3; Min: 174.02 / Max: 175.21)
  Linux 5.15.83:  174.66 (SE +/- 0.38, N = 3; Min: 173.91 / Max: 175.11)
  Linux 6.1:      174.98 (SE +/- 0.47, N = 3; Min: 174.04 / Max: 175.56)
  Compiler notes: (CXX) g++ options: -O3 -lpthread
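The sieve of Eratosthenes that Primesieve implements (in a heavily optimized, segmented form) can be sketched in a few lines. This naive version is for illustration only and is nowhere near the cache-tuned implementation being benchmarked:

```python
def primes_up_to(limit):
    """Naive sieve of Eratosthenes: return all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Strike out multiples of p starting at p*p; smaller
            # multiples were already struck by smaller primes.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

print(len(primes_up_to(100)))  # 25 primes up to 100
```

Primesieve itself sieves in segments sized to fit the processor caches, which is why this test primarily stresses L1/L2 cache performance rather than raw arithmetic throughput.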

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Linux 5.10.130: 1613.12 (SE +/- 0.83, N = 3; Min: 1612.18 / Max: 1614.78; MIN: 1608.28)
  Linux 5.15.83:  1611.14 (SE +/- 0.39, N = 3; Min: 1610.46 / Max: 1611.82; MIN: 1608.25)
  Linux 6.1:      1614.07 (SE +/- 0.44, N = 3; Min: 1613.31 / Max: 1614.82; MIN: 1609.86)
  Compiler notes: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Linux 5.10.130: 0.540654 (SE +/- 0.000704, N = 3; Min: 0.54 / Max: 0.54; MIN: 0.53)
  Linux 5.15.83:  0.540452 (SE +/- 0.000263, N = 3; Min: 0.54 / Max: 0.54; MIN: 0.53)
  Linux 6.1:      0.541430 (SE +/- 0.000200, N = 3; Min: 0.54 / Max: 0.54; MIN: 0.53)
  Compiler notes: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Thorough (MT/s, more is better)
  Linux 5.10.130: 9.3225 (SE +/- 0.0023, N = 3; Min: 9.32 / Max: 9.33)
  Linux 5.15.83:  9.3267 (SE +/- 0.0055, N = 3; Min: 9.32 / Max: 9.34)
  Linux 6.1:      9.3103 (SE +/- 0.0088, N = 3; Min: 9.29 / Max: 9.32)
  Compiler notes: (CXX) g++ options: -O3 -flto -pthread

OpenVINO

This is a test of the Intel OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, more is better)
  Linux 5.10.130: 5.88 (SE +/- 0.03, N = 3; Min: 5.84 / Max: 5.95)
  Linux 5.15.83:  5.88 (SE +/- 0.03, N = 3; Min: 5.84 / Max: 5.94)
  Linux 6.1:      5.89 (SE +/- 0.02, N = 3; Min: 5.86 / Max: 5.93)
  Compiler notes: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better)
  Linux 5.10.130: 373.74 (SE +/- 0.42, N = 3; Min: 372.91 / Max: 374.22)
  Linux 5.15.83:  373.54 (SE +/- 0.29, N = 3; Min: 373.24 / Max: 374.12)
  Linux 6.1:      374.17 (SE +/- 0.42, N = 3; Min: 373.63 / Max: 374.99)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, more is better)
  Linux 5.10.130: 65.9 (SE +/- 0.07, N = 3; Min: 65.8 / Max: 66.0)
  Linux 5.15.83:  66.0 (SE +/- 0.15, N = 3; Min: 65.7 / Max: 66.2)
  Linux 6.1:      65.9 (SE +/- 0.06, N = 3; Min: 65.8 / Max: 66.0)
  Compiler notes: (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -lpthread -ldl -lm

oneDNN

oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Linux 5.10.130: 2.29636 (SE +/- 0.00305, N = 3; Min: 2.29 / Max: 2.30; MIN: 2.22)
  Linux 5.15.83:  2.29385 (SE +/- 0.00109, N = 3; Min: 2.29 / Max: 2.30; MIN: 2.19)
  Linux 6.1:      2.29708 (SE +/- 0.00145, N = 3; Min: 2.29 / Max: 2.30; MIN: 2.22)
  Compiler notes: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better)
  Linux 5.10.130: 29172 (SE +/- 28.26, N = 3; Min: 29131 / Max: 29226)
  Linux 5.15.83:  29206 (SE +/- 5.24, N = 3; Min: 29196 / Max: 29214)
  Linux 6.1:      29165
  Compiler notes: (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

Primesieve 8.0 - Length: 1e12 (Seconds, fewer is better)
  Linux 5.10.130: 14.44 (SE +/- 0.01, N = 3; Min: 14.43 / Max: 14.46)
  Linux 5.15.83:  14.43 (SE +/- 0.05, N = 3; Min: 14.37 / Max: 14.53)
  Linux 6.1:      14.42 (SE +/- 0.03, N = 3; Min: 14.39 / Max: 14.48)
  Compiler notes: (CXX) g++ options: -O3 -lpthread

oneDNN

oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Linux 5.10.130: 0.496613 (SE +/- 0.000232, N = 3; Min: 0.50 / Max: 0.50; MIN: 0.48)
  Linux 5.15.83:  0.497248 (SE +/- 0.000206, N = 3; Min: 0.50 / Max: 0.50; MIN: 0.48)
  Linux 6.1:      0.496776 (SE +/- 0.000335, N = 3; Min: 0.50 / Max: 0.50; MIN: 0.48)
  Compiler notes: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better)
  Linux 5.10.130: 7055 (SE +/- 10.90, N = 3; Min: 7039 / Max: 7076)
  Linux 5.15.83:  7046 (SE +/- 1.45, N = 3; Min: 7044 / Max: 7049)
  Linux 6.1:      7048
  Compiler notes: (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Vector MathLinux 5.10.130Linux 5.15.83Linux 6.112K24K36K48K60KSE +/- 337.57, N = 3SE +/- 330.15, N = 3SE +/- 321.30, N = 355351.5855415.0155344.321. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Vector MathLinux 5.10.130Linux 5.15.83Linux 6.110K20K30K40K50KMin: 55010.17 / Avg: 55351.58 / Max: 56026.7Min: 55010.75 / Avg: 55415.01 / Max: 56069.28Min: 54938.56 / Avg: 55344.32 / Max: 55978.741. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
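Stress-NG reports throughput in "Bogo Ops/s": the number of completed stressor loop iterations divided by elapsed run time. A small sketch of that measurement loop, with a toy list-based workload standing in for the real vector-math stressor (the function name and workload here are illustrative, not stress-ng internals):

```python
import time

def bogo_ops_per_sec(workload, duration: float = 0.25) -> float:
    """Run `workload` repeatedly for ~`duration` seconds and report
    completed iterations ("bogo ops") per second, the way stress-ng
    summarizes each stressor's throughput."""
    deadline = time.perf_counter() + duration
    ops = 0
    start = time.perf_counter()
    while time.perf_counter() < deadline:
        workload()
        ops += 1
    elapsed = time.perf_counter() - start
    return ops / elapsed

# Toy stand-in for the vector-math stressor: elementwise integer ops.
vec = list(range(256))
def vec_math():
    return [((x * 3) ^ (x >> 2)) + 7 for x in vec]

rate = bogo_ops_per_sec(vec_math)
```

The "bogo" qualifier is deliberate: the count is only comparable between runs of the same stressor, not across different tests.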

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPULinux 5.10.130Linux 5.15.83Linux 6.1246810SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 38.028.018.02MIN: 4.07 / MAX: 22.04MIN: 5.89 / MAX: 26.32MIN: 6 / MAX: 25.191. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPULinux 5.10.130Linux 5.15.83Linux 6.13691215Min: 8.01 / Avg: 8.02 / Max: 8.03Min: 8.01 / Avg: 8.01 / Max: 8.02Min: 8.01 / Avg: 8.02 / Max: 8.031. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OSPRay Studio

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.1400800120016002000SE +/- 1.15, N = 3SE +/- 0.67, N = 3SE +/- 1.00, N = 31826182618241. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.130060090012001500Min: 1824 / Avg: 1826 / Max: 1828Min: 1825 / Avg: 1825.67 / Max: 1827Min: 1823 / Avg: 1824 / Max: 18261. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss, which was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2022.10.13Model: Cell Phone Drop TestLinux 5.10.130Linux 5.15.83Linux 6.120406080100SE +/- 0.20, N = 3SE +/- 0.10, N = 3SE +/- 0.06, N = 396.2196.3196.30
OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2022.10.13Model: Cell Phone Drop TestLinux 5.10.130Linux 5.15.83Linux 6.120406080100Min: 95.81 / Avg: 96.21 / Max: 96.44Min: 96.13 / Avg: 96.31 / Max: 96.46Min: 96.24 / Avg: 96.3 / Max: 96.41

OSPRay Studio

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.150K100K150K200K250KSE +/- 254.40, N = 3SE +/- 19.15, N = 3SE +/- 262.34, N = 32342162341422343711. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.140K80K120K160K200KMin: 233952 / Avg: 234216.33 / Max: 234725Min: 234106 / Avg: 234142.33 / Max: 234171Min: 233942 / Avg: 234370.67 / Max: 2348471. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
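Comparing this 32-samples-per-pixel run against the 1-sample-per-pixel run of the same camera and resolution earlier in this section, render time scales almost linearly with the sample count, as expected for a path tracer (each pixel traces its samples independently). A quick check against the Linux 5.10.130 figures:

```python
# Camera 1, 4K, Path Tracer, Linux 5.10.130 (values reported above).
ms_1spp = 7055     # 1 sample per pixel
ms_32spp = 234216  # 32 samples per pixel

# Ratio of render times for a 32x increase in samples per pixel.
scaling = ms_32spp / ms_1spp  # ~33.2x, i.e. near-linear in spp
```

The slight super-linearity (33.2x rather than 32x) is consistent with fixed per-frame overhead being amortized differently at the two sample counts.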

Blender

Blender is an open-source 3D creation and modeling software project. This test measures Blender's Cycles rendering performance with various sample files. GPU compute via NVIDIA OptiX and NVIDIA CUDA is currently supported, as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Barbershop - Compute: CPU-OnlyLinux 5.10.130Linux 5.15.83Linux 6.12004006008001000SE +/- 0.49, N = 3SE +/- 0.12, N = 3SE +/- 0.42, N = 31145.901144.821145.18
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Barbershop - Compute: CPU-OnlyLinux 5.10.130Linux 5.15.83Linux 6.12004006008001000Min: 1145.22 / Avg: 1145.9 / Max: 1146.84Min: 1144.61 / Avg: 1144.82 / Max: 1145.02Min: 1144.67 / Avg: 1145.18 / Max: 1146.01

OpenVINO

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPULinux 5.10.130Linux 5.15.83Linux 6.1400800120016002000SE +/- 1.36, N = 3SE +/- 0.98, N = 3SE +/- 1.06, N = 31991.751993.581992.231. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPULinux 5.10.130Linux 5.15.83Linux 6.130060090012001500Min: 1990 / Avg: 1991.75 / Max: 1994.43Min: 1991.88 / Avg: 1993.58 / Max: 1995.27Min: 1990.51 / Avg: 1992.23 / Max: 1994.171. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
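The throughput and latency figures for this model are mutually consistent if OpenVINO's built-in benchmark keeps many inference requests in flight: throughput ≈ requests_in_flight × 1000 / latency_ms. A sanity check against the Weld Porosity Detection FP16-INT8 numbers above, assuming 16 parallel infer requests (one per physical core of the Xeon Gold 6226R) — an assumption, since the result file does not state the request count:

```python
latency_ms = 8.02  # average per-request latency reported above
nireq = 16         # ASSUMED number of parallel infer requests

# With nireq requests in flight, each completing every latency_ms,
# the aggregate completion rate in requests (frames) per second is:
throughput_fps = nireq * 1000.0 / latency_ms  # ~1995 FPS
```

That lands within a fraction of a percent of the ~1992 FPS measured above, which is why the latency and throughput graphs for the same model move together across kernels.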

ASTC Encoder

ASTC Encoder (astcenc) is the reference encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: MediumLinux 5.10.130Linux 5.15.83Linux 6.11632486480SE +/- 0.01, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 372.1672.1272.101. (CXX) g++ options: -O3 -flto -pthread
OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: MediumLinux 5.10.130Linux 5.15.83Linux 6.11428425670Min: 72.15 / Avg: 72.16 / Max: 72.17Min: 72.08 / Avg: 72.12 / Max: 72.19Min: 72.06 / Avg: 72.1 / Max: 72.131. (CXX) g++ options: -O3 -flto -pthread

OSPRay Studio

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.130K60K90K120K150KSE +/- 58.33, N = 3SE +/- 69.67, N = 3SE +/- 27.67, N = 31465831465701466921. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.130K60K90K120K150KMin: 146475 / Avg: 146583.33 / Max: 146675Min: 146432 / Avg: 146570.33 / Max: 146654Min: 146663 / Avg: 146691.67 / Max: 1467471. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: CryptoLinux 5.10.130Linux 5.15.83Linux 6.14K8K12K16K20KSE +/- 47.41, N = 3SE +/- 83.58, N = 3SE +/- 66.53, N = 318742.6718733.1018727.611. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: CryptoLinux 5.10.130Linux 5.15.83Linux 6.13K6K9K12K15KMin: 18654.25 / Avg: 18742.67 / Max: 18816.54Min: 18566.43 / Avg: 18733.1 / Max: 18827.44Min: 18599.29 / Avg: 18727.61 / Max: 18822.231. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenVINO

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Face Detection FP16 - Device: CPULinux 5.10.130Linux 5.15.83Linux 6.130060090012001500SE +/- 7.68, N = 3SE +/- 8.21, N = 3SE +/- 5.43, N = 31357.371356.351357.10MIN: 1303.04 / MAX: 1429.1MIN: 1238.64 / MAX: 1424.43MIN: 1311.47 / MAX: 1435.451. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Face Detection FP16 - Device: CPULinux 5.10.130Linux 5.15.83Linux 6.12004006008001000Min: 1342.43 / Avg: 1357.37 / Max: 1367.9Min: 1340.43 / Avg: 1356.35 / Max: 1367.81Min: 1346.33 / Avg: 1357.1 / Max: 1363.631. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

ASTC Encoder

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: ExhaustiveLinux 5.10.130Linux 5.15.83Linux 6.10.21090.42180.63270.84361.0545SE +/- 0.0001, N = 3SE +/- 0.0001, N = 3SE +/- 0.0006, N = 30.93710.93750.93681. (CXX) g++ options: -O3 -flto -pthread
OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: ExhaustiveLinux 5.10.130Linux 5.15.83Linux 6.1246810Min: 0.94 / Avg: 0.94 / Max: 0.94Min: 0.94 / Avg: 0.94 / Max: 0.94Min: 0.94 / Avg: 0.94 / Max: 0.941. (CXX) g++ options: -O3 -flto -pthread

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: FastLinux 5.10.130Linux 5.15.83Linux 6.14080120160200SE +/- 0.04, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 3193.13193.02192.991. (CXX) g++ options: -O3 -flto -pthread
OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: FastLinux 5.10.130Linux 5.15.83Linux 6.14080120160200Min: 193.06 / Avg: 193.13 / Max: 193.2Min: 192.96 / Avg: 193.02 / Max: 193.13Min: 192.94 / Avg: 192.99 / Max: 193.051. (CXX) g++ options: -O3 -flto -pthread

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Matrix MathLinux 5.10.130Linux 5.15.83Linux 6.112K24K36K48K60KSE +/- 476.25, N = 3SE +/- 503.37, N = 3SE +/- 647.24, N = 355915.3655890.2355875.181. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Matrix MathLinux 5.10.130Linux 5.15.83Linux 6.110K20K30K40K50KMin: 54995.07 / Avg: 55915.36 / Max: 56588.2Min: 54883.8 / Avg: 55890.23 / Max: 56415.01Min: 54604.91 / Avg: 55875.18 / Max: 56726.121. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OSPRay Studio

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.17K14K21K28K35KSE +/- 21.06, N = 3SE +/- 4.51, N = 3SE +/- 5.29, N = 33478634801347791. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.16K12K18K24K30KMin: 34755 / Avg: 34785.67 / Max: 34826Min: 34792 / Avg: 34801 / Max: 34806Min: 34769 / Avg: 34779 / Max: 347871. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

Blender

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Fishy Cat - Compute: CPU-OnlyLinux 5.10.130Linux 5.15.83Linux 6.1306090120150SE +/- 0.12, N = 3SE +/- 0.30, N = 3SE +/- 0.09, N = 3144.25144.34144.29
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Fishy Cat - Compute: CPU-OnlyLinux 5.10.130Linux 5.15.83Linux 6.1306090120150Min: 144.02 / Avg: 144.25 / Max: 144.41Min: 143.77 / Avg: 144.34 / Max: 144.76Min: 144.12 / Avg: 144.29 / Max: 144.41

OSPRay Studio

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.12K4K6K8K10KSE +/- 5.51, N = 3SE +/- 9.45, N = 38633862886331. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.115003000450060007500Min: 8624 / Avg: 8633 / Max: 8643Min: 8619 / Avg: 8633 / Max: 86511. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.1400800120016002000SE +/- 0.67, N = 3SE +/- 1.33, N = 3SE +/- 0.58, N = 31778177817791. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.130060090012001500Min: 1777 / Avg: 1778.33 / Max: 1779Min: 1777 / Avg: 1778.33 / Max: 1781Min: 1778 / Avg: 1779 / Max: 17801. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.116003200480064008000SE +/- 2.33, N = 3SE +/- 3.79, N = 3SE +/- 2.52, N = 37242724672451. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path TracerLinux 5.10.130Linux 5.15.83Linux 6.113002600390052006500Min: 7238 / Avg: 7241.67 / Max: 7246Min: 7239 / Avg: 7246 / Max: 7252Min: 7240 / Avg: 7245 / Max: 72481. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: x86_64 RdRandLinux 5.10.130Linux 5.15.83Linux 6.150K100K150K200K250KSE +/- 0.71, N = 3SE +/- 1.57, N = 3SE +/- 12.63, N = 3252390.56252393.88252372.291. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: x86_64 RdRandLinux 5.10.130Linux 5.15.83Linux 6.140K80K120K160K200KMin: 252389.44 / Avg: 252390.56 / Max: 252391.87Min: 252391.26 / Avg: 252393.88 / Max: 252396.69Min: 252347.35 / Avg: 252372.29 / Max: 252388.261. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgk/s, More Is BetterAircrack-ng 1.7Linux 5.10.130Linux 5.15.83Linux 6.111K22K33K44K55KSE +/- 129.98, N = 3SE +/- 118.77, N = 3SE +/- 144.43, N = 350605.5650607.7450609.281. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -lbsd -pthread
OpenBenchmarking.orgk/s, More Is BetterAircrack-ng 1.7Linux 5.10.130Linux 5.15.83Linux 6.19K18K27K36K45KMin: 50462.34 / Avg: 50605.56 / Max: 50865.05Min: 50471.19 / Avg: 50607.74 / Max: 50844.34Min: 50456.07 / Avg: 50609.28 / Max: 50897.951. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -lbsd -pthread

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterNatron 2.4.3Input: SpaceshipLinux 5.10.130Linux 5.15.83Linux 6.10.811.622.433.244.05SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 33.63.63.6
OpenBenchmarking.orgFPS, More Is BetterNatron 2.4.3Input: SpaceshipLinux 5.10.130Linux 5.15.83Linux 6.1246810Min: 3.6 / Avg: 3.6 / Max: 3.6Min: 3.6 / Avg: 3.6 / Max: 3.6Min: 3.6 / Avg: 3.6 / Max: 3.6

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU-v3-v3 - Model: mobilenet-v3Linux 5.10.130Linux 5.15.83Linux 6.11.11382.22763.34144.45525.569SE +/- 0.01, N = 3SE +/- 0.04, N = 6SE +/- 0.01, N = 34.954.954.95MIN: 4.85 / MAX: 5.91MIN: 4.8 / MAX: 63.45MIN: 4.86 / MAX: 5.261. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU-v3-v3 - Model: mobilenet-v3Linux 5.10.130Linux 5.15.83Linux 6.1246810Min: 4.94 / Avg: 4.95 / Max: 4.96Min: 4.89 / Avg: 4.95 / Max: 5.16Min: 4.94 / Avg: 4.95 / Max: 4.971. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000 x 4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Lossless, Highest CompressionLinux 5.10.130Linux 5.15.83Linux 6.10.1080.2160.3240.4320.54SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.480.480.481. (CC) gcc options: -fvisibility=hidden -O2 -lm -pthread
OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Lossless, Highest CompressionLinux 5.10.130Linux 5.15.83Linux 6.112345Min: 0.48 / Avg: 0.48 / Max: 0.48Min: 0.48 / Avg: 0.48 / Max: 0.48Min: 0.48 / Avg: 0.48 / Max: 0.481. (CC) gcc options: -fvisibility=hidden -O2 -lm -pthread
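The MP/s (megapixels per second) figure translates directly into wall-clock time per image for the 6000 x 4000 (24-megapixel) source: at the 0.48 MP/s reported above, this quality-100, lossless, highest-compression setting needs roughly 50 seconds per encode:

```python
# 24-megapixel source image used by this test profile.
megapixels = 6000 * 4000 / 1e6   # 24.0 MP

rate_mp_s = 0.48                 # measured throughput reported above
seconds_per_image = megapixels / rate_mp_s  # ~50 s per encode
```

This is why the heaviest cwebp settings are effectively CPU-bound batch jobs rather than interactive encodes.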

NCNN

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet18Linux 5.10.130Linux 5.15.83Linux 6.1246810SE +/- 0.04, N = 3SE +/- 0.22, N = 6SE +/- 0.01, N = 38.098.057.85MIN: 7.89 / MAX: 9.29MIN: 7.68 / MAX: 9.97MIN: 7.74 / MAX: 10.21. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet18Linux 5.10.130Linux 5.15.83Linux 6.13691215Min: 8.03 / Avg: 8.09 / Max: 8.16Min: 7.77 / Avg: 8.05 / Max: 9.13Min: 7.83 / Avg: 7.85 / Max: 7.871. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vgg16Linux 5.10.130Linux 5.15.83Linux 6.1714212835SE +/- 0.13, N = 3SE +/- 1.00, N = 6SE +/- 0.10, N = 326.9527.6926.52MIN: 26.65 / MAX: 27.51MIN: 26.21 / MAX: 33.47MIN: 26.22 / MAX: 27.611. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vgg16Linux 5.10.130Linux 5.15.83Linux 6.1612182430Min: 26.81 / Avg: 26.95 / Max: 27.2Min: 26.34 / Avg: 27.69 / Max: 32.67Min: 26.35 / Avg: 26.52 / Max: 26.691. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Socket ActivityLinux 5.10.130Linux 5.15.83Linux 6.14K8K12K16K20KSE +/- 302.66, N = 15SE +/- 80.02, N = 3SE +/- 112.51, N = 310160.9117786.1716990.751. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Socket ActivityLinux 5.10.130Linux 5.15.83Linux 6.13K6K9K12K15KMin: 9193.23 / Avg: 10160.91 / Max: 14180.14Min: 17675.1 / Avg: 17786.17 / Max: 17941.49Min: 16858.08 / Avg: 16990.75 / Max: 17214.491. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: CPU CacheLinux 5.10.130Linux 5.15.83Linux 6.1306090120150SE +/- 2.89, N = 15SE +/- 6.07, N = 12SE +/- 2.00, N = 15116.33128.31115.591. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: CPU CacheLinux 5.10.130Linux 5.15.83Linux 6.120406080100Min: 97.79 / Avg: 116.33 / Max: 133.09Min: 104.69 / Avg: 128.31 / Max: 184.62Min: 101.95 / Avg: 115.59 / Max: 129.191. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: FutexLinux 5.10.130Linux 5.15.83Linux 6.1500K1000K1500K2000K2500KSE +/- 33488.35, N = 15SE +/- 32600.02, N = 3SE +/- 9882.04, N = 31498828.702364278.752433807.141. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: FutexLinux 5.10.130Linux 5.15.83Linux 6.1400K800K1200K1600K2000KMin: 1345246.04 / Avg: 1498828.7 / Max: 1801951.49Min: 2301592.09 / Avg: 2364278.75 / Max: 2411148.54Min: 2423753.27 / Avg: 2433807.14 / Max: 2453570.221. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Dragonflydb

Dragonfly is an open-source, in-memory database server positioned as a "modern Redis replacement," aiming to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. Dragonfly is benchmarked here with memtier_benchmark, a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOps/sec, More Is BetterDragonflydb 0.6Clients: 50 - Set To Get Ratio: 1:1Linux 5.10.130Linux 5.15.83Linux 6.1200K400K600K800K1000KSE +/- 19609.96, N = 13SE +/- 643.98, N = 3SE +/- 1471.72, N = 3978963.311123953.551130454.031. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.orgOps/sec, More Is BetterDragonflydb 0.6Clients: 50 - Set To Get Ratio: 1:1Linux 5.10.130Linux 5.15.83Linux 6.1200K400K600K800K1000KMin: 746907.7 / Avg: 978963.31 / Max: 1004478.88Min: 1122665.73 / Avg: 1123953.55 / Max: 1124614.24Min: 1127534.01 / Avg: 1130454.03 / Max: 1132234.941. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

302 Results Shown

Stress-NG:
  Context Switching
  System V Message Passing
  MEMFD
  Malloc
  NUMA
ctx_clock
PostgreSQL:
  100 - 500 - Read Only
  100 - 500 - Read Only - Average Latency
Stress-NG:
  MMAP
  Mutex
  SENDFILE
PostgreSQL:
  100 - 250 - Read Only
  100 - 250 - Read Only - Average Latency
JPEG XL Decoding libjxl
PostgreSQL:
  100 - 500 - Read Write - Average Latency
  100 - 500 - Read Write
GraphicsMagick
Dragonflydb
PostgreSQL:
  100 - 100 - Read Only - Average Latency
  100 - 100 - Read Only
Stress-NG
GraphicsMagick
Stress-NG
Dragonflydb:
  200 - 1:5
  200 - 1:1
  50 - 5:1
EnCodec
Facebook RocksDB
Dragonflydb
EnCodec:
  1.5 kbps
  24 kbps
  3 kbps
Facebook RocksDB
oneDNN
AOM AV1
PostgreSQL:
  100 - 250 - Read Write
  100 - 250 - Read Write - Average Latency
Stargate Digital Audio Workstation
JPEG XL libjxl
Facebook RocksDB
JPEG XL libjxl:
  JPEG - 80
  JPEG - 90
AOM AV1
Facebook RocksDB
C-Blosc
JPEG XL Decoding libjxl
AOM AV1
PostgreSQL:
  100 - 100 - Read Write
  100 - 100 - Read Write - Average Latency
JPEG XL libjxl
spaCy
libavif avifenc
Stargate Digital Audio Workstation
GraphicsMagick
ClickHouse
NCNN
C-Blosc
Facebook RocksDB
ClickHouse
Timed Linux Kernel Compilation
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
Stargate Digital Audio Workstation
Stress-NG
Stargate Digital Audio Workstation
Mobile Neural Network
Timed Linux Kernel Compilation
Stargate Digital Audio Workstation
NCNN
Mobile Neural Network:
  MobileNetV2_224
  squeezenetv1.1
Numenta Anomaly Benchmark
GraphicsMagick
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    items/sec
    ms/batch
Timed Erlang/OTP Compilation
oneDNN
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
OSPRay Studio
NCNN
GraphicsMagick
Timed Godot Game Engine Compilation
NCNN
srsRAN
SVT-AV1
Stargate Digital Audio Workstation
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    ms/batch
    items/sec
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Vehicle Detection FP16 - CPU
  Vehicle Detection FP16 - CPU
JPEG XL libjxl
Stargate Digital Audio Workstation
JPEG XL libjxl
Numenta Anomaly Benchmark
SVT-AV1
NCNN
libavif avifenc
OpenVINO
Stress-NG
Stargate Digital Audio Workstation
Timed CPython Compilation
libavif avifenc
Mobile Neural Network
7-Zip Compression
NCNN
Mobile Neural Network
srsRAN
Mobile Neural Network
OSPRay Studio
TensorFlow
OSPRay Studio
Neural Magic DeepSparse
Mobile Neural Network
Cpuminer-Opt
Timed Node.js Compilation
Neural Magic DeepSparse
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
FLAC Audio Encoding
Numenta Anomaly Benchmark
SVT-AV1
GraphicsMagick
OpenVINO
Neural Magic DeepSparse
OSPRay Studio
Timed PHP Compilation
NCNN
OpenVINO
NCNN
OpenVINO
Neural Magic DeepSparse
ClickHouse
OpenVINO
TensorFlow
miniBUDE:
  OpenMP - BM1:
    GFInst/s
    Billion Interactions/s
Cpuminer-Opt:
  Ringcoin
  Blake-2 S
miniBUDE:
  OpenMP - BM2:
    Billion Interactions/s
    GFInst/s
TensorFlow
oneDNN
Neural Magic DeepSparse
srsRAN
AOM AV1
srsRAN
GraphicsMagick
OpenFOAM
NCNN
OSPRay Studio
srsRAN
OpenVINO
libavif avifenc
Mobile Neural Network
NCNN
SVT-AV1
OpenRadioss
Xmrig
OpenVINO
BRL-CAD
Numenta Anomaly Benchmark
Neural Magic DeepSparse
oneDNN
SVT-AV1
srsRAN
SVT-AV1
NCNN
Stress-NG
Cpuminer-Opt
SVT-AV1
OpenFOAM
Neural Magic DeepSparse
OpenVINO
WebP Image Encode
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 64-QAM
  4G PHY_DL_Test 100 PRB SISO 256-QAM
Neural Magic DeepSparse
OpenVINO
OpenFOAM
WebP Image Encode
OpenVINO
spaCy
Cpuminer-Opt
oneDNN
NCNN
OpenRadioss:
  Rubber O-Ring Seal Installation
  INIVOL and Fluid Structure Interaction Drop Container
WebP Image Encode
Y-Cruncher
OpenVINO
Cpuminer-Opt
oneDNN
Stress-NG
nekRS
OpenVINO
Numenta Anomaly Benchmark
OpenVINO
NCNN
oneDNN
OpenFOAM
Cpuminer-Opt
TensorFlow
Cpuminer-Opt
Stress-NG
Timed CPython Compilation
Blender
oneDNN:
  IP Shapes 3D - f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
srsRAN
LAMMPS Molecular Dynamics Simulator
SVT-AV1
Cpuminer-Opt
Y-Cruncher
TensorFlow
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
oneDNN
Cpuminer-Opt
Xmrig
Cpuminer-Opt
TensorFlow:
  CPU - 64 - AlexNet
  CPU - 64 - ResNet-50
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
Facebook RocksDB
OpenVINO
NCNN
TensorFlow
7-Zip Compression
Cpuminer-Opt
Neural Magic DeepSparse
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
OpenVINO
Blender
srsRAN
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream
TensorFlow
Stress-NG
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
Numenta Anomaly Benchmark
Neural Magic DeepSparse
LAMMPS Molecular Dynamics Simulator
Neural Magic DeepSparse
OSPRay Studio:
  1 - 1080p - 16 - Path Tracer
  3 - 1080p - 1 - Path Tracer
libavif avifenc
OSPRay Studio
TensorFlow
OpenRadioss
WebP Image Encode
OSPRay Studio
TensorFlow
Facebook RocksDB
TensorFlow
Primesieve
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
ASTC Encoder
OpenVINO
Blender
srsRAN
oneDNN
OSPRay Studio
Primesieve
oneDNN
OSPRay Studio
Stress-NG
OpenVINO
OSPRay Studio
OpenRadioss
OSPRay Studio
Blender
OpenVINO
ASTC Encoder
OSPRay Studio
Stress-NG
OpenVINO
ASTC Encoder:
  Exhaustive
  Fast
Stress-NG
OSPRay Studio
Blender
OSPRay Studio:
  3 - 4K - 1 - Path Tracer
  1 - 1080p - 1 - Path Tracer
  2 - 4K - 1 - Path Tracer
Stress-NG
Aircrack-ng
Natron
NCNN
WebP Image Encode
NCNN:
  CPU - resnet18
  CPU - vgg16
Stress-NG:
  Socket Activity
  CPU Cache
  Futex
Dragonflydb