AMD Ryzen 9 7900X Linux

AMD Ryzen 9 7900X 12-Core testing with an ASRock X670E PG Lightning (1.11 BIOS) and XFX AMD Radeon RX 6400 4GB on Ubuntu 22.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2211114-PTS-AMDRYZEN44&sor&grr.

AMD Ryzen 9 7900X Linux - System Details (runs a and b, shared configuration)

Processor: AMD Ryzen 9 7900X 12-Core @ 5.73GHz (12 Cores / 24 Threads)
Motherboard: ASRock X670E PG Lightning (1.11 BIOS)
Chipset: AMD Device 14d8
Memory: 32GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: XFX AMD Radeon RX 6400 4GB (2320/1000MHz)
Audio: AMD Navi 21/23
Monitor: ASUS MG28U
Network: Realtek RTL8125 2.5GbE
OS: Ubuntu 22.10
Kernel: 5.19.0-23-generic (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.2.1 (LLVM 15.0.2 DRM 3.47)
Vulkan: 1.3.224
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details: Scaling Governor: amd-pstate schedutil (Boost: Enabled) - CPU Microcode: 0xa601203
Python Details: Python 3.10.7
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (runs a and b): side-by-side results across OpenFOAM, BRL-CAD, TensorFlow, Blender, nekRS, SMHasher, OpenRadioss, Timed Node.js Compilation, JPEG XL libjxl, FFmpeg, Timed CPython Compilation, PostgreSQL, miniBUDE, Mobile Neural Network, nginx, libavif avifenc, Xmrig, AOM AV1, oneDNN, ClickHouse, Dragonflydb, Timed Erlang/OTP Compilation, OpenVINO, spaCy, Neural Magic DeepSparse, Timed PHP Compilation, Timed Wasmer Compilation, JPEG XL Decoding, stress-ng, cpuminer-opt, EnCodec, srsRAN, STREAM, y-cruncher, QuadRay, Natron, 7-Zip Compression, C-Blosc, FLAC Audio Encoding, and Unpacking The Linux Kernel. The per-test results are charted individually below.
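
Each graph below reports the two runs, labelled a and b, along with the metric's unit and whether a higher or lower value is better. As a reading aid (this helper is my own sketch, not part of the Phoronix Test Suite export), the relative delta between the two runs for any metric can be computed as follows:

def relative_delta(a: float, b: float, lower_is_better: bool = False) -> float:
    """Return how much run b differs from run a, in percent.

    Positive means run b fared better, negative means worse, taking the
    metric's direction (lower or higher is better) into account.
    """
    delta = (b - a) / a * 100.0
    return -delta if lower_is_better else delta

# Example: OpenFOAM drivaerFastback medium-mesh execution time from the
# first graph below (Seconds, Fewer Is Better).
print(f"{relative_delta(2217.78, 2222.24, lower_is_better=True):.2f}%")  # about -0.20%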

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Execution Time

OpenFOAM 10 - Seconds, Fewer Is Better. a: 2217.78, b: 2222.24. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Mesh Time

OpenFOAM 10 - Seconds, Fewer Is Better. a: 205.73, b: 209.59. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenFOAM

Input: motorBike - Execution Time

OpenFOAM 10 - Seconds, Fewer Is Better. b: 2.65814. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

BRL-CAD

VGR Performance Metric

BRL-CAD 7.32.6 - VGR Performance Metric, More Is Better. a: 308235, b: 304290. 1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

TensorFlow 2.10 - images/sec, More Is Better. b: 39.82, a: 39.80.

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 3.3 - Seconds, Fewer Is Better. b: 635.69, a: 635.91.

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

TensorFlow 2.10 - images/sec, More Is Better. b: 121.98, a: 121.89.

nekRS

Input: TurboPipe Periodic

nekRS 22.0 - FLOP/s, More Is Better. b: 65118300000, a: 64871200000. 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi

SMHasher

Hash: SHA3-256

SMHasher 2022-08-22 - cycles/hash, Fewer Is Better. a: 2499.01, b: 2530.42. 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects

SMHasher

Hash: SHA3-256

SMHasher 2022-08-22 - MiB/sec, More Is Better. a: 161.66, b: 155.70. 1. (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects

OpenRadioss

Model: INIVOL and Fluid Structure Interaction Drop Container

OpenRadioss 2022.10.13 - Seconds, Fewer Is Better. a: 337.94, b: 338.61.

Timed Node.js Compilation

Time To Compile

Timed Node.js Compilation 18.8 - Seconds, Fewer Is Better. b: 312.04, a: 314.64.

JPEG XL libjxl

Input: JPEG - Quality: 100

JPEG XL libjxl 0.7 - MP/s, More Is Better. a: 0.97, b: 0.93. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

TensorFlow 2.10 - images/sec, More Is Better. b: 121.89, a: 121.77.

JPEG XL libjxl

Input: PNG - Quality: 100

JPEG XL libjxl 0.7 - MP/s, More Is Better. a: 1.05, b: 1.03. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 3.3 - Seconds, Fewer Is Better. a: 211.89, b: 211.89.

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Execution Time

OpenFOAM 10 - Seconds, Fewer Is Better. b: 176.09, a: 178.55. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Mesh Time

OpenFOAM 10 - Seconds, Fewer Is Better. a: 25.20, b: 25.81. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenRadioss

Model: Bird Strike on Windshield

OpenRadioss 2022.10.13 - Seconds, Fewer Is Better. b: 192.08, a: 193.52.

FFmpeg

Encoder: libx264 - Scenario: Upload

FFmpeg 5.1.2 - FPS, More Is Better. a: 18.13, b: 18.08. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Upload

FFmpeg 5.1.2 - Seconds, Fewer Is Better. a: 139.28, b: 139.66. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
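
For a given FFmpeg scenario, the FPS graph and the Seconds graph describe the same encode from two angles, so FPS multiplied by total encode time lands on a roughly constant frame count for that scenario. A quick sketch checking this against the libx264 Upload numbers above (the frame count is inferred from these figures, not stated anywhere in the result file):

# FPS * elapsed seconds should give about the same (inferred) frame count
# for both runs of the libx264 Upload scenario.
for run, fps, seconds in [("a", 18.13, 139.28), ("b", 18.08, 139.66)]:
    print(run, round(fps * seconds))  # both land near ~2525 frames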

Timed CPython Compilation

Build Configuration: Released Build, PGO + LTO Optimized

Timed CPython Compilation 3.10.6 - Seconds, Fewer Is Better. b: 182.61, a: 183.36.

FFmpeg

Encoder: libx265 - Scenario: Platform

FFmpeg 5.1.2 - FPS, More Is Better. b: 45.21, a: 44.70. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Platform

FFmpeg 5.1.2 - Seconds, Fewer Is Better. b: 167.54, a: 169.47. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

FFmpeg 5.1.2 - FPS, More Is Better. b: 45.25, a: 45.01. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

FFmpeg 5.1.2 - Seconds, Fewer Is Better. b: 167.40, a: 168.30. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

TensorFlow 2.10 - images/sec, More Is Better. b: 39.53, a: 39.48.

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 3.3 - Seconds, Fewer Is Better. a: 169.11, b: 169.57.

FFmpeg

Encoder: libx265 - Scenario: Upload

FFmpeg 5.1.2 - FPS, More Is Better. b: 22.27, a: 22.22. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Upload

FFmpeg 5.1.2 - Seconds, Fewer Is Better. b: 113.36, a: 113.65. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

TensorFlow 2.10 - images/sec, More Is Better. b: 357.70, a: 357.68.

PostgreSQL

Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency

PostgreSQL 15 - ms, Fewer Is Better. a: 0.016, b: 0.018. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 1 - Mode: Read Only

PostgreSQL 15 - TPS, More Is Better. a: 61322, b: 56707. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency

PostgreSQL 15 - ms, Fewer Is Better. a: 0.152, b: 0.153. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 100 - Mode: Read Only

PostgreSQL 15 - TPS, More Is Better. a: 656243, b: 652856. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
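
The pgbench average-latency and TPS graphs for a given configuration are two views of the same run: with N concurrent clients in a closed loop, average latency is approximately N / TPS. A small sketch checking that approximation against the read-only figures above (the values come from the graphs; the formula is the standard closed-loop approximation, not an extra number reported by pgbench):

def approx_latency_ms(clients: int, tps: float) -> float:
    # Closed-loop approximation: each client waits on its own transaction,
    # so average latency is roughly clients / throughput.
    return clients / tps * 1000.0

print(round(approx_latency_ms(1, 61322), 3))     # ~0.016 ms, matches run a above
print(round(approx_latency_ms(100, 656243), 3))  # ~0.152 ms, matches run a above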

PostgreSQL

Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency

PostgreSQL 15 - ms, Fewer Is Better. b: 1.668, a: 1.723. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 100 - Mode: Read Write

PostgreSQL 15 - TPS, More Is Better. b: 59960, a: 58054. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency

PostgreSQL 15 - ms, Fewer Is Better. b: 0.367, a: 0.370. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 1 - Mode: Read Write

PostgreSQL 15 - TPS, More Is Better. b: 2724, a: 2705. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency

PostgreSQL 15 - ms, Fewer Is Better. b: 0.068, a: 0.069. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Only

PostgreSQL 15 - TPS, More Is Better. b: 736786, a: 722315. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency

PostgreSQL 15 - ms, Fewer Is Better. b: 0.995, a: 1.129. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Write

PostgreSQL 15 - TPS, More Is Better. b: 50233, a: 44304. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency

PostgreSQL 15 - ms, Fewer Is Better. b: 40.96, a: 41.06. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 100 - Mode: Read Write

PostgreSQL 15 - TPS, More Is Better. b: 2442, a: 2436. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency

PostgreSQL 15 - ms, Fewer Is Better. a: 0.017, b: 0.017. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 1 - Mode: Read Only

PostgreSQL 15 - TPS, More Is Better. a: 59102, b: 58025. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency

PostgreSQL 15 - ms, Fewer Is Better. a: 18.37, b: 18.42. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Write

PostgreSQL 15 - TPS, More Is Better. a: 2722, b: 2714. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency

PostgreSQL 15 - ms, Fewer Is Better. a: 0.352, b: 0.357. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 1 - Mode: Read Write

PostgreSQL 15 - TPS, More Is Better. a: 2838, b: 2799. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency

PostgreSQL 15 - ms, Fewer Is Better. b: 0.143, a: 0.148. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 100 - Mode: Read Only

PostgreSQL 15 - TPS, More Is Better. b: 700101, a: 674178. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency

PostgreSQL 15 - ms, Fewer Is Better. a: 0.064, b: 0.065. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Only

PostgreSQL 15 - TPS, More Is Better. a: 779915, b: 772349. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

miniBUDE

Implementation: OpenMP - Input Deck: BM2

miniBUDE 20210901 - Billion Interactions/s, More Is Better. b: 41.05, a: 40.97. 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE

Implementation: OpenMP - Input Deck: BM2

miniBUDE 20210901 - GFInst/s, More Is Better. b: 1026.34, a: 1024.35. 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
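
miniBUDE reports the same BM2 run in two units, and in the figures above the GFInst/s number is a constant 25x the Billion Interactions/s number, i.e. the harness counts roughly 25 floating-point instructions per interaction. A quick check of that factor (it is read off these results rather than taken from the miniBUDE documentation):

# Ratio of the two miniBUDE BM2 readings for each run.
for run, ginteractions_per_s, gfinst_per_s in [("b", 41.05, 1026.34), ("a", 40.97, 1024.35)]:
    print(run, round(gfinst_per_s / ginteractions_per_s, 2))  # ~25.0 for both runs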

FFmpeg

Encoder: libx264 - Scenario: Video On Demand

FFmpeg 5.1.2 - FPS, More Is Better. a: 69.28, b: 68.83. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Video On Demand

FFmpeg 5.1.2 - Seconds, Fewer Is Better. a: 109.33, b: 110.05. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Platform

FFmpeg 5.1.2 - FPS, More Is Better. b: 69.43, a: 69.14. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx264 - Scenario: Platform

FFmpeg 5.1.2 - Seconds, Fewer Is Better. b: 109.10, a: 109.55. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Mobile Neural Network

Model: inception-v3

Mobile Neural Network 2.1 - ms, Fewer Is Better. b: 20.74 (MIN: 20.46 / MAX: 22.29), a: 21.16 (MIN: 20.91 / MAX: 27.75). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenet-v1-1.0

Mobile Neural Network 2.1 - ms, Fewer Is Better. b: 3.233 (MIN: 3.19 / MAX: 3.48), a: 3.251 (MIN: 3.21 / MAX: 3.45). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: MobileNetV2_224

Mobile Neural Network 2.1 - ms, Fewer Is Better. a: 2.857 (MIN: 2.82 / MAX: 3.32), b: 3.040 (MIN: 3.01 / MAX: 3.95). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: SqueezeNetV1.0

Mobile Neural Network 2.1 - ms, Fewer Is Better. a: 3.692 (MIN: 3.63 / MAX: 6.2), b: 3.751 (MIN: 3.69 / MAX: 6.32). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: resnet-v2-50

Mobile Neural Network 2.1 - ms, Fewer Is Better. b: 12.58 (MIN: 12.5 / MAX: 17.47), a: 12.86 (MIN: 12.77 / MAX: 13.9). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: squeezenetv1.1

Mobile Neural Network 2.1 - ms, Fewer Is Better. a: 2.308 (MIN: 2.28 / MAX: 8.5), b: 2.389 (MIN: 2.35 / MAX: 2.65). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: mobilenetV3

Mobile Neural Network 2.1 - ms, Fewer Is Better. a: 1.428 (MIN: 1.41 / MAX: 1.8), b: 1.466 (MIN: 1.45 / MAX: 1.9). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network

Model: nasnet

Mobile Neural Network 2.1 - ms, Fewer Is Better. a: 9.676 (MIN: 9.56 / MAX: 25.58), b: 10.135 (MIN: 9.9 / MAX: 59.36). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenRadioss

Model: Bumper Beam

OpenRadioss 2022.10.13 - Seconds, Fewer Is Better. a: 100.34, b: 100.56.

JPEG XL libjxl

Input: JPEG - Quality: 80

JPEG XL libjxl 0.7 - MP/s, More Is Better. b: 12.48, a: 11.93. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl

Input: PNG - Quality: 80

JPEG XL libjxl 0.7 - MP/s, More Is Better. b: 12.74, a: 12.02. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

TensorFlow 2.10 - images/sec, More Is Better. a: 39.35, b: 39.31.

nginx

Connections: 1000

nginx 1.23.2 - Requests Per Second, More Is Better. a: 125963.01, b: 122285.31. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 500

nginx 1.23.2 - Requests Per Second, More Is Better. a: 135867.53, b: 133787.04. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 200

nginx 1.23.2 - Requests Per Second, More Is Better. a: 137835.76, b: 136170.71. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

nginx

Connections: 100

nginx 1.23.2 - Requests Per Second, More Is Better. a: 137650.85, b: 136318.84. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 3.3 - Seconds, Fewer Is Better. b: 85.08, a: 85.49.

libavif avifenc

Encoder Speed: 0

libavif avifenc 0.11 - Seconds, Fewer Is Better. a: 84.04, b: 84.76. 1. (CXX) g++ options: -O3 -fPIC -lm

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

TensorFlow 2.10 - images/sec, More Is Better. a: 348.90, b: 348.49.

OpenRadioss

Model: Rubber O-Ring Seal Installation

OpenRadioss 2022.10.13 - Seconds, Fewer Is Better. b: 77.83, a: 78.35.

JPEG XL libjxl

Input: JPEG - Quality: 90

JPEG XL libjxl 0.7 - MP/s, More Is Better. b: 12.40, a: 11.78. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Xmrig

Variant: Monero - Hash Count: 1M

Xmrig 6.18.1 - H/s, More Is Better. b: 13238.9, a: 12605.1. 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K

AOM AV1 3.5 - Frames Per Second, More Is Better. b: 10.14, a: 10.08. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

JPEG XL libjxl

Input: PNG - Quality: 90

JPEG XL libjxl 0.7 - MP/s, More Is Better. b: 12.67, a: 11.96. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better. b: 1462.28 (MIN: 1457.55), a: 1464.71 (MIN: 1458.51). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better. b: 1461.51 (MIN: 1456.01), a: 1468.84 (MIN: 1463.43). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better. a: 1461.42 (MIN: 1455.91), b: 1468.12 (MIN: 1463.07). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better. b: 748.41 (MIN: 743.53), a: 749.50 (MIN: 744.26). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better. b: 749.09 (MIN: 744.22), a: 749.60 (MIN: 744.63). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 2.7 - ms, Fewer Is Better. b: 749.22 (MIN: 744.44), a: 751.24 (MIN: 746.29). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fopenmp -msse4.1 -fPIC -pie -ldl

ClickHouse

100M Rows Web Analytics Dataset, Third Run

ClickHouse 22.5.4.19 - Queries Per Minute, Geo Mean, More Is Better. b: 273.74 (MIN: 19.14 / MAX: 30000), a: 269.06 (MIN: 18.14 / MAX: 20000). 1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse

100M Rows Web Analytics Dataset, Second Run

ClickHouse 22.5.4.19 - Queries Per Minute, Geo Mean, More Is Better. b: 273.31 (MIN: 18.51 / MAX: 20000), a: 259.88 (MIN: 15.29 / MAX: 10000). 1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse

100M Rows Web Analytics Dataset, First Run / Cold Cache

ClickHouse 22.5.4.19 - Queries Per Minute, Geo Mean, More Is Better. b: 252.20 (MIN: 15.78 / MAX: 30000), a: 231.71 (MIN: 16.43 / MAX: 8571.43). 1. ClickHouse server version 22.5.4.19 (official build).
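
The ClickHouse score is labelled "Queries Per Minute, Geo Mean": it is the geometric mean of the per-query rates, which is why the MIN/MAX annotations can sit orders of magnitude away from the headline number. A minimal sketch of how a geometric mean aggregates such rates (the sample rates are illustrative only; the per-query data itself is not included in this export):

import math

def geo_mean(rates):
    # n-th root of the product, computed via logs for numerical stability.
    return math.exp(sum(math.log(r) for r in rates) / len(rates))

# Illustrative queries-per-minute rates spanning the kind of range seen in
# the MIN/MAX annotations above.
print(round(geo_mean([20, 300, 5000, 30000]), 1))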

OpenRadioss

Model: Cell Phone Drop Test

OpenRadioss 2022.10.13 - Seconds, Fewer Is Better. b: 66.45, a: 66.53.

Dragonflydb

Clients: 200 - Set To Get Ratio: 5:1

Dragonflydb 0.6 - Ops/sec, More Is Better. b: 4630859.67, a: 4624845.97. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:1

Dragonflydb 0.6 - Ops/sec, More Is Better. a: 4831108.14, b: 4763803.93. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 200 - Set To Get Ratio: 1:5

Dragonflydb 0.6 - Ops/sec, More Is Better. b: 5055548.28, a: 4968213.38. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 50 - Set To Get Ratio: 5:1

Dragonflydb 0.6 - Ops/sec, More Is Better. a: 4692530.64, b: 4640660.41. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:1

Dragonflydb 0.6 - Ops/sec, More Is Better. a: 4846900.63, b: 4791381.14. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:5

Dragonflydb 0.6 - Ops/sec, More Is Better. a: 5133428.52, b: 5031752.18. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 3.3 - Seconds, Fewer Is Better. a: 66.07, b: 66.19.

Xmrig

Variant: Wownero - Hash Count: 1M

Xmrig 6.18.1 - H/s, More Is Better. b: 15523.4, a: 15466.9. 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Timed Erlang/OTP Compilation

Time To Compile

Timed Erlang/OTP Compilation 25.0 - Seconds, Fewer Is Better. b: 64.45, a: 65.65.

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better. a: 1015.76 (MIN: 925.99 / MAX: 1191.5), b: 1020.98 (MIN: 907.77 / MAX: 1188.53). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better. a: 5.87, b: 5.84. 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better. a: 1014.92 (MIN: 862.19 / MAX: 1179.73), b: 1019.72 (MIN: 606.77 / MAX: 1204.46). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better. a: 5.90, b: 5.85. 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better. b: 558.64 (MIN: 531.73 / MAX: 584.86), a: 559.91 (MIN: 532.56 / MAX: 595.37). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better. b: 10.72, a: 10.69. 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better. a: 287.38 (MIN: 273.25 / MAX: 343.02), b: 287.56 (MIN: 272.86 / MAX: 322.17). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better. a: 20.84, b: 20.83. 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better. a: 55.16 (MIN: 45.9 / MAX: 75.35), b: 55.87 (MIN: 43.51 / MAX: 79.9). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better. a: 108.68, b: 107.32. 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better. a: 4.80 (MIN: 3.6 / MAX: 10.09), b: 4.81 (MIN: 3.75 / MAX: 12.95). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better. a: 1249.34, b: 1244.72. 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better. a: 0.25 (MIN: 0.16 / MAX: 10.57), b: 0.25 (MIN: 0.15 / MAX: 7.46). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better. b: 47792.77, a: 47791.89. 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better. a: 4.33 (MIN: 3.47 / MAX: 11.88), b: 4.34 (MIN: 3.47 / MAX: 12). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better. a: 1384.25, b: 1382.60. 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better. b: 8.25 (MIN: 4.84 / MAX: 17.53), a: 8.35 (MIN: 6.05 / MAX: 14.71). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better. b: 726.25, a: 717.90. 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better. a: 0.35 (MIN: 0.22 / MAX: 7.64), b: 0.35 (MIN: 0.22 / MAX: 7.62). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better. b: 33761.40, a: 33642.44. 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better. a: 5.59 (MIN: 3.05 / MAX: 12.81), b: 5.59 (MIN: 2.83 / MAX: 10.65). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better. a: 1071.83, b: 1071.58. 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - ms, Fewer Is Better. a: 5.53 (MIN: 2.94 / MAX: 12.44), b: 5.53 (MIN: 2.88 / MAX: 12.68). 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - FPS, More Is Better. b: 2168.92, a: 2168.46. 1. (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -pie -ldl

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

images/sec, More Is Better (TensorFlow 2.10): b: 123.4, a: 123.2

FFmpeg

Encoder: libx265 - Scenario: Live

FPS, More Is Better (FFmpeg 5.1.2): b: 114.97, a: 114.72. Compiler flags: (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Live

Seconds, Fewer Is Better (FFmpeg 5.1.2): b: 43.93, a: 44.02

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K

Frames Per Second, More Is Better (AOM AV1 3.5): b: 0.36, a: 0.35. Compiler flags: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

spaCy

Model: en_core_web_trf

tokens/sec, More Is Better (spaCy 3.4.1): b: 1518, a: 1514

spaCy

Model: en_core_web_lg

tokens/sec, More Is Better (spaCy 3.4.1): b: 19009, a: 18996

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

images/sec, More Is Better (TensorFlow 2.10): a: 38.17, b: 38.16

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): b: 632.21, a: 632.76

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): b: 9.4433, a: 9.4344

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): a: 630.95, b: 631.76

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): a: 9.4567, b: 9.4533

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): a: 82.99, b: 83.64

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): a: 72.28, b: 71.70

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K

Frames Per Second, More Is Better (AOM AV1 3.5): b: 17.83, a: 17.60

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): a: 20.07, b: 20.39

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): a: 49.82, b: 49.03

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): b: 114.00, a: 114.31

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): b: 8.7713, a: 8.7480

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): a: 113.91, b: 113.94

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): a: 8.7785, b: 8.7765

libavif avifenc

Encoder Speed: 2

Seconds, Fewer Is Better (libavif avifenc 0.11): a: 41.37, b: 42.38. Compiler flags: (CXX) g++ options: -O3 -fPIC -lm

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): a: 65.50, b: 65.51

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): a: 91.57, b: 91.52

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): b: 17.54, a: 17.57

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): b: 57.01, a: 56.91

Timed PHP Compilation

Time To Compile

Seconds, Fewer Is Better (Timed PHP Compilation 8.1.9): a: 39.37, b: 39.47

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): b: 41.76, a: 41.82

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): b: 143.61, a: 143.41

JPEG XL Decoding libjxl

CPU Threads: 1

MP/s, More Is Better (JPEG XL Decoding libjxl 0.7): b: 73.37, a: 68.43

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): a: 10.44, b: 10.54

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): a: 95.71, b: 94.87

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): b: 63.33, a: 63.41

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): b: 94.70, a: 94.59

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p

Frames Per Second, More Is Better (AOM AV1 3.5): b: 19.23, a: 19.19

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): a: 13.54, b: 13.54

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): a: 73.82, b: 73.80

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): b: 26.82, a: 26.87

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): b: 223.53, a: 223.14

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

ms/batch, Fewer Is Better (Neural Magic DeepSparse 1.1): b: 7.2408, a: 7.2448

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

items/sec, More Is Better (Neural Magic DeepSparse 1.1): b: 138.01, a: 137.93

Timed Wasmer Compilation

Time To Compile

Seconds, Fewer Is Better (Timed Wasmer Compilation 2.3): a: 35.70, b: 35.92. Compiler flags: (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Stress-NG

Test: Context Switching

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): b: 7822736.20, a: 7269058.58. Compiler flags: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Cpuminer-Opt

Algorithm: Deepcoin

kH/s, More Is Better (Cpuminer-Opt 3.20.3): b: 16300, a: 15910. Compiler flags: (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: scrypt

kH/s, More Is Better (Cpuminer-Opt 3.20.3): a: 515.94, b: 509.75

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

images/sec, More Is Better (TensorFlow 2.10): a: 124.88, b: 124.83

Stress-NG

Test: System V Message Passing

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): b: 12852176.64, a: 12824745.14

Stress-NG

Test: Memory Copying

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): b: 6055.08, a: 6023.06

Stress-NG

Test: Matrix Math

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): a: 105410.52, b: 100434.75

Stress-NG

Test: Forking

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): b: 88150.54, a: 85644.09

Stress-NG

Test: Crypto

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): a: 34722.62, b: 32274.88

Stress-NG

Test: MEMFD

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): a: 1052.54, b: 1052.36

Stress-NG

Test: Malloc

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): b: 24222411.86, a: 24051732.07

Stress-NG

Test: IO_uring

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): b: 27013.00, a: 24405.52

Stress-NG

Test: Atomic

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): a: 203812.22, b: 199497.81

Stress-NG

Test: NUMA

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): a: 604.96, b: 570.87

Stress-NG

Test: Futex

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): a: 3968210.34, b: 3648065.00

Stress-NG

Test: MMAP

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): b: 290.15, a: 288.78

Stress-NG

Test: CPU Cache

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): b: 145.06, a: 140.39

Stress-NG

Test: Glibc Qsort Data Sorting

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): a: 285.64, b: 283.15

Stress-NG

Test: Glibc C String Functions

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): a: 3786314.58, b: 3784061.09

Stress-NG

Test: Socket Activity

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): a: 21588.47, b: 19398.62

Stress-NG

Test: Vector Math

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): a: 107491.73, b: 107282.73

Stress-NG

Test: Semaphores

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): b: 2653440.36, a: 2647043.16

Stress-NG

Test: CPU Stress

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): b: 44430.82, a: 44344.82

Stress-NG

Test: SENDFILE

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): b: 387020.54, a: 386040.06

Stress-NG

Test: Mutex

Bogo Ops/s, More Is Better (Stress-NG 0.14.06): b: 11885212.76, a: 10556564.45

Cpuminer-Opt

Algorithm: Garlicoin

kH/s, More Is Better (Cpuminer-Opt 3.20.3): a: 3641.88, b: 3537.11

Cpuminer-Opt

Algorithm: Ringcoin

kH/s, More Is Better (Cpuminer-Opt 3.20.3): b: 3291.46, a: 3286.46

Cpuminer-Opt

Algorithm: LBC, LBRY Credits

kH/s, More Is Better (Cpuminer-Opt 3.20.3): b: 109450, a: 109350

JPEG XL Decoding libjxl

CPU Threads: All

MP/s, More Is Better (JPEG XL Decoding libjxl 0.7): b: 198.29, a: 179.93

Cpuminer-Opt

Algorithm: Blake-2 S

kH/s, More Is Better (Cpuminer-Opt 3.20.3): a: 1343070, b: 1029150

Cpuminer-Opt

Algorithm: Magi

kH/s, More Is Better (Cpuminer-Opt 3.20.3): a: 863.91, b: 852.17

FFmpeg

Encoder: libx264 - Scenario: Live

FPS, More Is Better (FFmpeg 5.1.2): a: 299.42, b: 298.97

FFmpeg

Encoder: libx264 - Scenario: Live

Seconds, Fewer Is Better (FFmpeg 5.1.2): a: 16.87, b: 16.89

Cpuminer-Opt

Algorithm: Triple SHA-256, Onecoin

kH/s, More Is Better (Cpuminer-Opt 3.20.3): b: 338820, a: 322550

Cpuminer-Opt

Algorithm: Quad SHA-256, Pyrite

kH/s, More Is Better (Cpuminer-Opt 3.20.3): b: 235620, a: 231140

Cpuminer-Opt

Algorithm: Skeincoin

kH/s, More Is Better (Cpuminer-Opt 3.20.3): b: 209440, a: 209170

Cpuminer-Opt

Algorithm: Myriad-Groestl

kH/s, More Is Better (Cpuminer-Opt 3.20.3): b: 51020, a: 50470

Cpuminer-Opt

Algorithm: x25x

kH/s, More Is Better (Cpuminer-Opt 3.20.3): b: 889.91, a: 885.22

EnCodec

Target Bandwidth: 24 kbps

Seconds, Fewer Is Better (EnCodec 0.1.1): b: 28.71, a: 28.90

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

images/sec, More Is Better (TensorFlow 2.10): a: 289.65, b: 289.36

EnCodec

Target Bandwidth: 6 kbps

Seconds, Fewer Is Better (EnCodec 0.1.1): a: 25.19, b: 25.25

EnCodec

Target Bandwidth: 3 kbps

Seconds, Fewer Is Better (EnCodec 0.1.1): a: 24.99, b: 25.06

srsRAN

Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM

UE Mb/s, More Is Better (srsRAN 22.04.1): b: 249.1, a: 239.3. Compiler flags: (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

srsRAN

Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM

eNb Mb/s, More Is Better (srsRAN 22.04.1): b: 668.5, a: 626.3

EnCodec

Target Bandwidth: 1.5 kbps

Seconds, Fewer Is Better (EnCodec 0.1.1): b: 24.28, a: 24.33

Stream

Type: Copy

MB/s, More Is Better (Stream 2013-01-17): b: 60281.9, a: 60212.6. Compiler flags: (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp

Y-Cruncher

Pi Digits To Calculate: 1B

Seconds, Fewer Is Better (Y-Cruncher 0.7.10.9513): b: 21.88, a: 21.89

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): b: 5.19599 (MIN: 5.07), a: 5.19899 (MIN: 5.05). Compiler flags: (CXX) g++ options: -O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): b: 3.91733 (MIN: 3.25), a: 4.04957 (MIN: 3.23)

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): b: 0.546550 (MIN: 0.53), a: 0.547255 (MIN: 0.53)

srsRAN

Test: OFDM_Test

Samples / Second, More Is Better (srsRAN 22.04.1): b: 225000000, a: 222900000

srsRAN

Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM

UE Mb/s, More Is Better (srsRAN 22.04.1): b: 231.2, a: 227.6

srsRAN

Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM

eNb Mb/s, More Is Better (srsRAN 22.04.1): b: 611.3, a: 603.1

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p

Frames Per Second, More Is Better (AOM AV1 3.5): b: 1.01, a: 1.01

QuadRay

Scene: 5 - Resolution: 4K

FPS, More Is Better (QuadRay 2022.05.25): b: 1.55, a: 1.44. Compiler flags: (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

miniBUDE

Implementation: OpenMP - Input Deck: BM1

Billion Interactions/s, More Is Better (miniBUDE 20210901): b: 40.30, a: 40.29. Compiler flags: (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE

Implementation: OpenMP - Input Deck: BM1

GFInst/s, More Is Better (miniBUDE 20210901): b: 1007.41, a: 1007.17

Natron

Input: Spaceship

FPS, More Is Better (Natron 2.4.3): a: 5.5, b: 5.4

QuadRay

Scene: 1 - Resolution: 4K

FPS, More Is Better (QuadRay 2022.05.25): b: 20.00, a: 19.81

QuadRay

Scene: 3 - Resolution: 4K

FPS, More Is Better (QuadRay 2022.05.25): b: 4.84, a: 4.83

QuadRay

Scene: 2 - Resolution: 4K

FPS, More Is Better (QuadRay 2022.05.25): b: 5.62, a: 5.52

QuadRay

Scene: 5 - Resolution: 1080p

FPS, More Is Better (QuadRay 2022.05.25): b: 5.96, a: 5.71

QuadRay

Scene: 3 - Resolution: 1080p

FPS, More Is Better (QuadRay 2022.05.25): b: 18.55, a: 18.52

QuadRay

Scene: 2 - Resolution: 1080p

FPS, More Is Better (QuadRay 2022.05.25): b: 21.31, a: 20.95

QuadRay

Scene: 1 - Resolution: 1080p

FPS, More Is Better (QuadRay 2022.05.25): b: 76.57, a: 76.23

7-Zip Compression

Test: Decompression Rating

MIPS, More Is Better (7-Zip Compression 22.01): a: 136525, b: 136291. Compiler flags: (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression

Test: Compression Rating

MIPS, More Is Better (7-Zip Compression 22.01): b: 154559, a: 154326

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K

Frames Per Second, More Is Better (AOM AV1 3.5): b: 37.19, a: 35.96

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p

Frames Per Second, More Is Better (AOM AV1 3.5): b: 44.39, a: 44.20

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

images/sec, More Is Better (TensorFlow 2.10): b: 227.31, a: 227.01

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

images/sec, More Is Better (TensorFlow 2.10): b: 123.22, a: 123.22

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): a: 2.20161 (MIN: 2.04), b: 2.20270 (MIN: 2.04)

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): a: 0.973041 (MIN: 0.93), b: 0.989482 (MIN: 0.94)

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): b: 0.496825 (MIN: 0.48), a: 0.521835 (MIN: 0.48)

Timed CPython Compilation

Build Configuration: Default

Seconds, Fewer Is Better (Timed CPython Compilation 3.10.6): a: 13.04, b: 13.14

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

images/sec, More Is Better (TensorFlow 2.10): a: 158.36, b: 158.05

srsRAN

Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM

UE Mb/s, More Is Better (srsRAN 22.04.1): b: 251, a: 251

srsRAN

Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM

eNb Mb/s, More Is Better (srsRAN 22.04.1): a: 650.9, b: 643.9

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): b: 0.293853 (MIN: 0.28), a: 0.293989 (MIN: 0.28)

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): a: 0.553097 (MIN: 0.54), b: 0.553435 (MIN: 0.54)

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): b: 0.179875 (MIN: 0.17), a: 0.179975 (MIN: 0.17)

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K

Frames Per Second, More Is Better (AOM AV1 3.5): b: 54.63, a: 51.88

C-Blosc

Test: blosclz bitshuffle

MB/s, More Is Better (C-Blosc 2.3): b: 11489.5, a: 11242.2. Compiler flags: (CC) gcc options: -std=gnu99 -O3 -lrt -lm

FLAC Audio Encoding

WAV To FLAC

Seconds, Fewer Is Better (FLAC Audio Encoding 1.4): a: 11.73, b: 11.79. Compiler flags: (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

Y-Cruncher

Pi Digits To Calculate: 500M

Seconds, Fewer Is Better (Y-Cruncher 0.7.10.9513): b: 10.23, a: 10.28

srsRAN

Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM

UE Mb/s, More Is Better (srsRAN 22.04.1): b: 137.5, a: 122.9

srsRAN

Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM

eNb Mb/s, More Is Better (srsRAN 22.04.1): b: 200.8, a: 200.6

SMHasher

Hash: FarmHash128

cycles/hash, Fewer Is Better (SMHasher 2022-08-22): a: 64.58, b: 64.66. Compiler flags: (CXX) g++ options: -march=native -O3 -flto=auto -fno-fat-lto-objects

SMHasher

Hash: FarmHash128

MiB/sec, More Is Better (SMHasher 2022-08-22): a: 15702.24, b: 15674.60

srsRAN

Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM

UE Mb/s, More Is Better (srsRAN 22.04.1): b: 242.4, a: 239.2

srsRAN

Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM

eNb Mb/s, More Is Better (srsRAN 22.04.1): a: 601.3, b: 598.7

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): b: 3.27851 (MIN: 3.23), a: 3.28570 (MIN: 3.23)

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K

Frames Per Second, More Is Better (AOM AV1 3.5): b: 72.65, a: 68.39

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): a: 1.52187 (MIN: 1.45), b: 1.57970 (MIN: 1.48)

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): a: 0.337307 (MIN: 0.32), b: 0.369351 (MIN: 0.34)

SMHasher

Hash: MeowHash x86_64 AES-NI

cycles/hash, Fewer Is Better (SMHasher 2022-08-22): a: 57.82, b: 58.32

SMHasher

Hash: MeowHash x86_64 AES-NI

MiB/sec, More Is Better (SMHasher 2022-08-22): a: 42374.80, b: 41856.36

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K

Frames Per Second, More Is Better (AOM AV1 3.5): b: 73.51, a: 69.66

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p

Frames Per Second, More Is Better (AOM AV1 3.5): b: 78.10, a: 66.86

SMHasher

Hash: Spooky32

cycles/hash, Fewer Is Better (SMHasher 2022-08-22): a: 36.37, b: 36.58

SMHasher

Hash: Spooky32

MiB/sec, More Is Better (SMHasher 2022-08-22): a: 15384.04, b: 15317.97

C-Blosc

Test: blosclz shuffle

MB/s, More Is Better (C-Blosc 2.3): b: 21142.2, a: 20713.1

libavif avifenc

Encoder Speed: 6, Lossless

Seconds, Fewer Is Better (libavif avifenc 0.11): a: 6.224, b: 6.315

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): b: 5.75116 (MIN: 5.63), a: 5.75764 (MIN: 5.67)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): a: 5.38305 (MIN: 5.3), b: 5.38606 (MIN: 5.31)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): b: 1.83776 (MIN: 1.79), a: 1.84240 (MIN: 1.8)

SMHasher

Hash: FarmHash32 x86_64 AVX

cycles/hash, Fewer Is Better (SMHasher 2022-08-22): b: 34.03, a: 35.61

SMHasher

Hash: FarmHash32 x86_64 AVX

MiB/sec, More Is Better (SMHasher 2022-08-22): b: 31036.72, a: 29550.62

SMHasher

Hash: fasthash32

cycles/hash, Fewer Is Better (SMHasher 2022-08-22): a: 30.06, b: 30.10

SMHasher

Hash: fasthash32

MiB/sec, More Is Better (SMHasher 2022-08-22): a: 6548.49, b: 6508.85

SMHasher

Hash: t1ha2_atonce

cycles/hash, Fewer Is Better (SMHasher 2022-08-22): b: 27.24, a: 27.27

SMHasher

Hash: t1ha2_atonce

MiB/sec, More Is Better (SMHasher 2022-08-22): b: 14998.97, a: 14943.50

SMHasher

Hash: t1ha0_aes_avx2 x86_64

cycles/hash, Fewer Is Better (SMHasher 2022-08-22): b: 27.51, a: 27.63

SMHasher

Hash: t1ha0_aes_avx2 x86_64

MiB/sec, More Is Better (SMHasher 2022-08-22): b: 76015.27, a: 75924.15

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p

Frames Per Second, More Is Better (AOM AV1 3.5): b: 124.05, a: 119.52

Unpacking The Linux Kernel

linux-5.19.tar.xz

Seconds, Fewer Is Better (Unpacking The Linux Kernel 5.19): b: 4.63, a: 4.64

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p

Frames Per Second, More Is Better (AOM AV1 3.5): a: 148.98, b: 148.57

SMHasher

Hash: wyhash

cycles/hash, Fewer Is Better (SMHasher 2022-08-22): a: 19.61, b: 19.92

SMHasher

Hash: wyhash

MiB/sec, More Is Better (SMHasher 2022-08-22): a: 23038.00, b: 22824.89

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p

Frames Per Second, More Is Better (AOM AV1 3.5): b: 153.13, a: 150.63

libavif avifenc

Encoder Speed: 6

Seconds, Fewer Is Better (libavif avifenc 0.11): b: 3.736, a: 3.769

libavif avifenc

Encoder Speed: 10, Lossless

Seconds, Fewer Is Better (libavif avifenc 0.11): a: 3.535, b: 3.543

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): b: 3.11582 (MIN: 3.02), a: 3.11616 (MIN: 3.01)

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): a: 1.85387 (MIN: 1.78), b: 1.85422 (MIN: 1.78)

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

ms, Fewer Is Better (oneDNN 2.7): b: 0.763137 (MIN: 0.73), a: 0.789632 (MIN: 0.76)

Stream

Type: Add

MB/s, More Is Better (Stream 2013-01-17): a: 44030.5, b: 43942.1

Stream

Type: Triad

MB/s, More Is Better (Stream 2013-01-17): b: 44086.5, a: 44032.4

Stream

Type: Scale

MB/s, More Is Better (Stream 2013-01-17): a: 39775.5, b: 39742.5


Phoronix Test Suite v10.8.5
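The result lines above were recovered from the graph images in the original OpenBenchmarking.org export. As a convenience only, the minimal sketch below shows one way to re-run one of the listed test profiles locally through the Phoronix Test Suite CLI from Python. It is not part of the original result file: it assumes phoronix-test-suite is installed and on the PATH, and the profile name "pts/openvino" is an illustrative identifier, not a claim about how these particular results were generated.

import subprocess

def run_profile(profile: str) -> None:
    # The "benchmark" subcommand installs the named test profile if needed
    # and then runs it, prompting for any run options it requires.
    subprocess.run(["phoronix-test-suite", "benchmark", profile], check=True)

if __name__ == "__main__":
    run_profile("pts/openvino")  # substitute any test profile shown above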