Raptor New Tests

Intel Core i5-13600K testing with an ASUS PRIME Z790-P WIFI (0602 BIOS) and AMD Radeon RX 6800 XT 16GB on Ubuntu 22.04 via the Phoronix Test Suite, compared against an Intel Core i9-13900K result on the same platform.

HTML result view exported from: https://openbenchmarking.org/result/2210274-PTS-RAPTORNE00&grr.

Raptor New Tests - System Configuration (result identifiers: i9-13900K, 1, 2)

  Processor: i9-13900K: Intel Core i9-13900K @ 4.00GHz (24 Cores / 32 Threads); 1, 2: Intel Core i5-13600K @ 3.80GHz (14 Cores / 20 Threads)
  Motherboard: ASUS PRIME Z790-P WIFI (0602 BIOS)
  Chipset: Intel Device 7a27
  Memory: 32GB
  Disk: 2000GB Samsung SSD 980 PRO 2TB + 2000GB
  Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
  Audio: Realtek ALC897
  Monitor: ASUS VP28U
  Network: Realtek RTL8125 2.5GbE + Intel Device 7a70
  OS: Ubuntu 22.04
  Kernel: 6.0.0-060000rc1daily20220820-generic (x86_64)
  Desktop: GNOME Shell 42.2
  Display Server: X Server 1.21.1.3 + Wayland
  OpenGL: 4.6 Mesa 22.3.0-devel (git-4685385 2022-08-23 jammy-oibaf-ppa) (LLVM 14.0.6 DRM 3.48)
  Vulkan: 1.3.224
  Compiler: GCC 12.0.1 20220319
  File-System: ext4
  Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details:
  i9-13900K: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x10e
  1: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x10e
  2: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x10e
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
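As a side note on the Security Details above: on Linux these per-vulnerability statuses are exposed as plain-text files under /sys/devices/system/cpu/vulnerabilities/, which is presumably where the Phoronix Test Suite collects them from. A minimal Python sketch (not part of the original result file) that reads the same information on the test host:

# A minimal sketch (not part of this result file), assuming the standard
# Linux sysfs layout: the "Security Details" above mirror the per-vulnerability
# status files under /sys/devices/system/cpu/vulnerabilities/.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def read_mitigations() -> dict[str, str]:
    """Return {vulnerability name: kernel-reported status} for this host."""
    return {f.name: f.read_text().strip() for f in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    for name, status in read_mitigations().items():
        print(f"{name}: {status}")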

Raptor New Tests - results overview: per-test values for the i9-13900K run and the two i5-13600K runs (1, 2). The same data is broken out into the individual result charts below.
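With the "fewer is better" timings, the i9-13900K's advantage can be expressed as the averaged i5-13600K time divided by the i9-13900K time. A minimal Python sketch (not from the original report) using two representative results quoted further down, the allmodconfig kernel build and the Blender Barbershop render:

# A minimal sketch (not from the original report): with "fewer is better"
# timings, the speedup is the averaged i5-13600K time over the i9-13900K time.
def speedup(i9_seconds: float, i5_runs: list[float]) -> float:
    """Ratio of the averaged i5-13600K result to the i9-13900K result."""
    return (sum(i5_runs) / len(i5_runs)) / i9_seconds

# Timed Linux Kernel Compilation 5.18, Build: allmodconfig (seconds)
print(f"{speedup(429.31, [697.84, 697.21]):.2f}x")   # ~1.62x faster build
# Blender 3.3, Barbershop - CPU-Only (seconds)
print(f"{speedup(541.77, [882.12, 886.19]):.2f}x")   # ~1.63x faster render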

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Execution Time

OpenBenchmarking.org | Seconds, Fewer Is Better | OpenFOAM 10 | Input: drivaerFastback, Medium Mesh Size - Execution Time
i9-13900K: 1730.61 | 1: 2123.80 | 2: 2155.11
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Mesh Time

OpenBenchmarking.org | Seconds, Fewer Is Better | OpenFOAM 10 | Input: drivaerFastback, Medium Mesh Size - Mesh Time
i9-13900K: 180.68 | 1: 232.62 | 2: 240.40
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

OpenBenchmarking.org | images/sec, More Is Better | TensorFlow 2.10 | Device: CPU - Batch Size: 256 - Model: ResNet-50
i9-13900K: 37.89 | 1: 25.71 | 2: 25.74

WebP2 Image Encode

Encode Settings: Quality 100, Lossless Compression

OpenBenchmarking.org | MP/s, More Is Better | WebP2 Image Encode 20220823 | Encode Settings: Quality 100, Lossless Compression
i9-13900K: 0.04 | 1: 0.03 | 2: 0.03
1. (CXX) g++ options: -O3 -march=native -mno-avx512f -fno-rtti -ldl

TensorFlow

Device: CPU - Batch Size: 64 - Model: VGG-16

OpenBenchmarking.org | images/sec, More Is Better | TensorFlow 2.10 | Device: CPU - Batch Size: 64 - Model: VGG-16
i9-13900K: 11.56 | 1: 8.06 | 2: 8.07

Blender

Blend File: Barbershop - Compute: CPU-Only

OpenBenchmarking.org | Seconds, Fewer Is Better | Blender 3.3 | Blend File: Barbershop - Compute: CPU-Only
i9-13900K: 541.77 | 1: 882.12 | 2: 886.19

BRL-CAD

VGR Performance Metric

OpenBenchmarking.org | VGR Performance Metric, More Is Better | BRL-CAD 7.32.6 | VGR Performance Metric
i9-13900K: 494464 | 1: 295210 | 2: 294709
1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

OpenBenchmarking.org | images/sec, More Is Better | TensorFlow 2.10 | Device: CPU - Batch Size: 512 - Model: GoogLeNet
i9-13900K: 113.00 | 1: 82.82 | 2: 82.71

Timed Linux Kernel Compilation

Build: allmodconfig

OpenBenchmarking.org | Seconds, Fewer Is Better | Timed Linux Kernel Compilation 5.18 | Build: allmodconfig
i9-13900K: 429.31 | 1: 697.84 | 2: 697.21

OpenRadioss

Model: INIVOL and Fluid Structure Interaction Drop Container

OpenBenchmarking.org | Seconds, Fewer Is Better | OpenRadioss 2022.10.13 | Model: INIVOL and Fluid Structure Interaction Drop Container
i9-13900K: 294.95 | 1: 488.94 | 2: 484.35

TensorFlow

Device: CPU - Batch Size: 32 - Model: VGG-16

OpenBenchmarking.org | images/sec, More Is Better | TensorFlow 2.10 | Device: CPU - Batch Size: 32 - Model: VGG-16
i9-13900K: 11.30 | 1: 7.66 | 2: 7.66

Timed Node.js Compilation

Time To Compile

OpenBenchmarking.org | Seconds, Fewer Is Better | Timed Node.js Compilation 18.8 | Time To Compile
i9-13900K: 244.81 | 1: 377.04 | 2: 379.95

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

OpenBenchmarking.org | images/sec, More Is Better | TensorFlow 2.10 | Device: CPU - Batch Size: 256 - Model: GoogLeNet
i9-13900K: 111.70 | 1: 79.66 | 2: 79.71

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

OpenBenchmarking.org | images/sec, More Is Better | TensorFlow 2.10 | Device: CPU - Batch Size: 512 - Model: AlexNet
i9-13900K: 263.55 | 1: 185.01 | 2: 184.89

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

OpenBenchmarking.org | images/sec, More Is Better | TensorFlow 2.10 | Device: CPU - Batch Size: 64 - Model: ResNet-50
i9-13900K: 37.77 | 1: 25.07 | 2: 25.05

JPEG XL libjxl

Input: JPEG - Quality: 100

OpenBenchmarking.org | MP/s, More Is Better | JPEG XL libjxl 0.7 | Input: JPEG - Quality: 100
i9-13900K: 1.04 | 1: 0.95 | 2: 0.95
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl

Input: PNG - Quality: 100

OpenBenchmarking.org | MP/s, More Is Better | JPEG XL libjxl 0.7 | Input: PNG - Quality: 100
i9-13900K: 1.07 | 1: 0.97 | 2: 0.96
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

OpenBenchmarking.org | Seconds, Fewer Is Better | Blender 3.3 | Blend File: Pabellon Barcelona - Compute: CPU-Only
i9-13900K: 167.16 | 1: 270.60 | 2: 270.68

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Execution Time

OpenBenchmarking.org | Seconds, Fewer Is Better | OpenFOAM 10 | Input: drivaerFastback, Small Mesh Size - Execution Time
i9-13900K: 148.03 | 1: 222.54 | 2: 223.41
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Mesh Time

OpenBenchmarking.org | Seconds, Fewer Is Better | OpenFOAM 10 | Input: drivaerFastback, Small Mesh Size - Mesh Time
i9-13900K: 26.82 | 1: 33.65 | 2: 33.75
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenRadioss

Model: Bird Strike on Windshield

OpenBenchmarking.org | Seconds, Fewer Is Better | OpenRadioss 2022.10.13 | Model: Bird Strike on Windshield
i9-13900K: 173.62 | 1: 257.53 | 2: 252.62

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

OpenBenchmarking.org | images/sec, More Is Better | TensorFlow 2.10 | Device: CPU - Batch Size: 16 - Model: VGG-16
i9-13900K: 11.15 | 1: 7.26 | 2: 7.23

Blender

Blend File: Classroom - Compute: CPU-Only

OpenBenchmarking.org | Seconds, Fewer Is Better | Blender 3.3 | Blend File: Classroom - Compute: CPU-Only
i9-13900K: 138.52 | 1: 223.13 | 2: 224.02

WebP2 Image Encode

Encode Settings: Quality 95, Compression Effort 7

OpenBenchmarking.org | MP/s, More Is Better | WebP2 Image Encode 20220823 | Encode Settings: Quality 95, Compression Effort 7
i9-13900K: 0.18 | 1: 0.11 | 2: 0.11
1. (CXX) g++ options: -O3 -march=native -mno-avx512f -fno-rtti -ldl

Timed CPython Compilation

Build Configuration: Released Build, PGO + LTO Optimized

OpenBenchmarking.org | Seconds, Fewer Is Better | Timed CPython Compilation 3.10.6 | Build Configuration: Released Build, PGO + LTO Optimized
i9-13900K: 171.58 | 1: 193.32 | 2: 193.86

OpenRadioss

Model: Rubber O-Ring Seal Installation

OpenBenchmarking.org | Seconds, Fewer Is Better | OpenRadioss 2022.10.13 | Model: Rubber O-Ring Seal Installation
i9-13900K: 102.59 | 1: 171.09 | 2: 171.03

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

OpenBenchmarking.org | images/sec, More Is Better | TensorFlow 2.10 | Device: CPU - Batch Size: 256 - Model: AlexNet
i9-13900K: 257.16 | 1: 171.28 | 2: 170.95

OpenRadioss

Model: Bumper Beam

OpenBenchmarking.org | Seconds, Fewer Is Better | OpenRadioss 2022.10.13 | Model: Bumper Beam
i9-13900K: 108.54 | 1: 154.98 | 2: 153.01

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

OpenBenchmarking.org | images/sec, More Is Better | TensorFlow 2.10 | Device: CPU - Batch Size: 32 - Model: ResNet-50
i9-13900K: 38.65 | 1: 24.52 | 2: 24.52

Xmrig

Variant: Monero - Hash Count: 1M

OpenBenchmarking.org | H/s, More Is Better | Xmrig 6.18.1 | Variant: Monero - Hash Count: 1M
i9-13900K: 9570.9 | 1: 7591.6 | 2: 7363.3
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
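Normalising these totals by the thread counts from the system table (32 threads for the i9-13900K, 20 for the i5-13600K runs) shows per-thread Monero hash rate rather than the raw totals. An illustrative Python sketch, not part of the original result file:

# An illustrative sketch (not part of the result file): per-thread hash rate
# using the Monero results above and the thread counts from the system table.
results = {
    "i9-13900K (32 threads)": (9570.9, 32),
    "1 - i5-13600K (20 threads)": (7591.6, 20),
    "2 - i5-13600K (20 threads)": (7363.3, 20),
}

for label, (hashrate, threads) in results.items():
    print(f"{label}: {hashrate / threads:.1f} H/s per thread")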

JPEG XL libjxl

Input: JPEG - Quality: 80

OpenBenchmarking.org | MP/s, More Is Better | JPEG XL libjxl 0.7 | Input: JPEG - Quality: 80
i9-13900K: 13.17 | 1: 11.57 | 2: 11.55
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Blender

Blend File: Fishy Cat - Compute: CPU-Only

OpenBenchmarking.org | Seconds, Fewer Is Better | Blender 3.3 | Blend File: Fishy Cat - Compute: CPU-Only
i9-13900K: 69.83 | 1: 113.25 | 2: 113.23

JPEG XL libjxl

Input: PNG - Quality: 80

OpenBenchmarking.org | MP/s, More Is Better | JPEG XL libjxl 0.7 | Input: PNG - Quality: 80
i9-13900K: 13.53 | 1: 11.88 | 2: 11.85
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenFOAM

Input: motorBike - Execution Time

OpenBenchmarking.org | Seconds, Fewer Is Better | OpenFOAM 10 | Input: motorBike - Execution Time
i9-13900K: 60.89 | 1: 66.57 | 2: 66.91
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM

Input: motorBike - Mesh Time

OpenBenchmarking.org | Seconds, Fewer Is Better | OpenFOAM 10 | Input: motorBike - Mesh Time
i9-13900K: 24.93 | 1: 27.22 | 2: 27.03
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenRadioss

Model: Cell Phone Drop Test

OpenBenchmarking.org | Seconds, Fewer Is Better | OpenRadioss 2022.10.13 | Model: Cell Phone Drop Test
i9-13900K: 64.01 | 1: 103.45 | 2: 104.76

WebP2 Image Encode

Encode Settings: Quality 75, Compression Effort 7

OpenBenchmarking.org | MP/s, More Is Better | WebP2 Image Encode 20220823 | Encode Settings: Quality 75, Compression Effort 7
i9-13900K: 0.37 | 1: 0.23 | 2: 0.23
1. (CXX) g++ options: -O3 -march=native -mno-avx512f -fno-rtti -ldl

libavif avifenc

Encoder Speed: 0

OpenBenchmarking.org | Seconds, Fewer Is Better | libavif avifenc 0.11 | Encoder Speed: 0
i9-13900K: 68.74 | 1: 98.91 | 2: 99.64
1. (CXX) g++ options: -O3 -fPIC -lm

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

OpenBenchmarking.org | images/sec, More Is Better | TensorFlow 2.10 | Device: CPU - Batch Size: 64 - Model: GoogLeNet
i9-13900K: 111.79 | 1: 75.58 | 2: 75.62

Xmrig

Variant: Wownero - Hash Count: 1M

OpenBenchmarking.org | H/s, More Is Better | Xmrig 6.18.1 | Variant: Wownero - Hash Count: 1M
i9-13900K: 17298.9 | 1: 11028.6 | 2: 10993.4
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

JPEG XL libjxl

Input: JPEG - Quality: 90

OpenBenchmarking.org | MP/s, More Is Better | JPEG XL libjxl 0.7 | Input: JPEG - Quality: 90
i9-13900K: 13.00 | 1: 11.37 | 2: 11.38
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl

Input: PNG - Quality: 90

OpenBenchmarking.org | MP/s, More Is Better | JPEG XL libjxl 0.7 | Input: PNG - Quality: 90
i9-13900K: 13.31 | 1: 11.68 | 2: 11.69
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K

OpenBenchmarking.org | Frames Per Second, More Is Better | AOM AV1 3.5 | Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K
i9-13900K: 12.10 | 1: 9.34 | 2: 9.36
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Blender

Blend File: BMW27 - Compute: CPU-Only

OpenBenchmarking.org | Seconds, Fewer Is Better | Blender 3.3 | Blend File: BMW27 - Compute: CPU-Only
i9-13900K: 47.51 | 1: 78.47 | 2: 78.28

Timed Godot Game Engine Compilation

Time To Compile

OpenBenchmarking.org | Seconds, Fewer Is Better | Timed Godot Game Engine Compilation 3.2.3 | Time To Compile
i9-13900K: 50.02 | 1: 75.84 | 2: 75.74

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Detection FP32 - Device: CPUi9-13900K124008001200160020002081.792039.012037.93MIN: 1720.91 / MAX: 2619.01MIN: 1728.42 / MAX: 2654MIN: 1721.32 / MAX: 2656.111. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Person Detection FP32 - Device: CPUi9-13900K120.85281.70562.55843.41124.2643.792.392.391. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Detection FP16 - Device: CPUi9-13900K124008001200160020002066.932036.942045.22MIN: 1688.42 / MAX: 2582.5MIN: 1725.41 / MAX: 2662.16MIN: 1759.13 / MAX: 2658.41. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Person Detection FP16 - Device: CPUi9-13900K120.85731.71462.57193.42924.28653.812.422.391. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Timed Erlang/OTP Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Erlang/OTP Compilation 25.0Time To Compilei9-13900K12163248648058.2569.9070.30

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Face Detection FP16 - Device: CPUi9-13900K12300600900120015001464.201540.831543.42MIN: 1395.26 / MAX: 1540.35MIN: 1492.04 / MAX: 1627.41MIN: 1479.1 / MAX: 1633.161. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Face Detection FP16 - Device: CPUi9-13900K121.22632.45263.67894.90526.13155.453.183.181. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 16 - Model: ResNet-50i9-13900K1291827364540.3024.9524.95

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Face Detection FP16-INT8 - Device: CPUi9-13900K1290180270360450409.69386.10386.03MIN: 269.91 / MAX: 725.2MIN: 290.17 / MAX: 766.32MIN: 290.04 / MAX: 762.491. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Face Detection FP16-INT8 - Device: CPUi9-13900K1251015202519.4512.8412.841. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Machine Translation EN To DE FP16 - Device: CPUi9-13900K12306090120150116.78111.33111.27MIN: 91.61 / MAX: 157.44MIN: 90.88 / MAX: 159.47MIN: 90.49 / MAX: 160.041. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Machine Translation EN To DE FP16 - Device: CPUi9-13900K12153045607568.4444.8744.871. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4Ki9-13900K120.09680.19360.29040.38720.4840.430.300.301. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUi9-13900K12369121510.199.369.35MIN: 7.63 / MAX: 18.81MIN: 7.88 / MAX: 14.96MIN: 7.87 / MAX: 15.721. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPUi9-13900K122004006008001000783.93533.65533.781. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16 - Device: CPUi9-13900K12112233445548.1647.6047.37MIN: 37.48 / MAX: 63.3MIN: 24.24 / MAX: 64.51MIN: 24.26 / MAX: 64.61. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16 - Device: CPUi9-13900K12110220330440550497.61293.53294.941. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUi9-13900K122468108.117.967.96MIN: 5.93 / MAX: 19.17MIN: 6.36 / MAX: 15.53MIN: 6.36 / MAX: 15.451. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16-INT8 - Device: CPUi9-13900K122004006008001000984.98627.41627.641. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16 - Device: CPUi9-13900K1251015202521.0015.0915.07MIN: 11.86 / MAX: 36.95MIN: 11.84 / MAX: 25.69MIN: 11.77 / MAX: 25.061. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16 - Device: CPUi9-13900K1280160240320400380.38331.12331.521. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUi9-13900K124812162013.8712.5212.44MIN: 6.82 / MAX: 27.53MIN: 8.61 / MAX: 29.44MIN: 8.58 / MAX: 29.491. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPUi9-13900K124008001200160020001728.491111.591118.231. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUi9-13900K120.1530.3060.4590.6120.7650.680.660.66MIN: 0.44 / MAX: 6MIN: 0.51 / MAX: 3.47MIN: 0.49 / MAX: 2.991. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUi9-13900K128K16K24K32K40K35164.8921075.2820970.661. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUi9-13900K120.34430.68861.03291.37721.72151.531.501.50MIN: 1.13 / MAX: 3.32MIN: 1.15 / MAX: 2.9MIN: 1.17 / MAX: 2.71. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPUi9-13900K123K6K9K12K15K15612.179307.929322.441. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Facebook RocksDB

Test: Update Random

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Update Randomi9-13900K12200K400K600K800K1000K8201086106826031291. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB

Test: Read Random Write Random

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Read Random Write Randomi9-13900K12800K1600K2400K3200K4000K3882129273225827246101. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB

Test: Read While Writing

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Read While Writingi9-13900K121.3M2.6M3.9M5.2M6.5M5840427343878535016231. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB

Test: Random Read

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Random Readi9-13900K1240M80M120M160M200M1991025491144838211140723621. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streami9-13900K122004006008001000918.80809.37808.52

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streami9-13900K12369121513.04578.54338.5585

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streami9-13900K122004006008001000922.02815.29809.49

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streami9-13900K12369121512.95338.48508.5316

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Streami9-13900K1250100150200250222.61193.04192.54

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Streami9-13900K12122436486053.7536.1736.29

Timed Linux Kernel Compilation

Build: defconfig

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 5.18Build: defconfigi9-13900K12132639526540.0159.7259.80

spaCy

Model: en_core_web_trf

OpenBenchmarking.orgtokens/sec, More Is BetterspaCy 3.4.1Model: en_core_web_trfi9-13900K126001200180024003000256515821580

spaCy

Model: en_core_web_lg

OpenBenchmarking.orgtokens/sec, More Is BetterspaCy 3.4.1Model: en_core_web_lgi9-13900K124K8K12K16K20K203811867818831

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 32 - Model: GoogLeNeti9-13900K12306090120150113.2071.9472.06

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Streami9-13900K1250100150200250238.81190.50191.58

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Streami9-13900K12112233445549.8536.7336.16

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

OpenBenchmarking.org | ms/batch, Fewer Is Better | Neural Magic DeepSparse 1.1 | Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
i9-13900K: 88.97 | 1: 122.43 | 2: 122.36

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

OpenBenchmarking.org | items/sec, More Is Better | Neural Magic DeepSparse 1.1 | Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
i9-13900K: 11.2392 | 1: 8.1675 | 2: 8.1722
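For the synchronous single-stream scenario these throughput figures are effectively the reciprocal of the per-batch latencies in the previous chart (items/sec is roughly 1000 divided by the ms per batch). A short Python sanity check, not part of the original report:

# A short sanity check (not part of the report): in the synchronous
# single-stream scenario, throughput ~= 1000 / per-batch latency (ms).
latencies_ms = {"i9-13900K": 88.97, "1": 122.43, "2": 122.36}

for system, ms in latencies_ms.items():
    print(f"{system}: {1000.0 / ms:.2f} items/sec")   # ~11.24, ~8.17, ~8.17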

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Streami9-13900K1230609012015090.75123.95122.92

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Streami9-13900K12369121511.01938.06748.1350

libavif avifenc

Encoder Speed: 2

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 2i9-13900K12112233445533.6347.4247.611. (CXX) g++ options: -O3 -fPIC -lm

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Streami9-13900K1281624324024.6633.4633.75

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Streami9-13900K1291827364540.5429.8829.63

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 64 - Model: AlexNeti9-13900K1250100150200250235.75153.62154.20

Timed PHP Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed PHP Compilation 8.1.9Time To Compilei9-13900K12102030405034.9244.9745.22

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streami9-13900K1220406080100109.4496.8196.70

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streami9-13900K1220406080100109.5072.1572.30

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4Ki9-13900K1251015202521.6617.6417.651. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Streami9-13900K1281624324031.1535.3035.26

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Streami9-13900K1271421283532.0928.3328.36

JPEG XL Decoding libjxl

CPU Threads: 1

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.7CPU Threads: 1i9-13900K12163248648070.1565.8565.57

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Streami9-13900K12306090120150149.89129.14128.85

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Streami9-13900K122040608010079.5554.0754.30

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Streami9-13900K124812162012.4217.0017.12

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Streami9-13900K122040608010080.5058.8058.39

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streami9-13900K12163248648073.7764.8164.71

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streami9-13900K124080120160200162.59107.94107.79

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Streami9-13900K1251015202517.7322.1621.81

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Streami9-13900K12132639526556.3845.1245.84

Timed Wasmer Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Wasmer Compilation 2.3Time To Compilei9-13900K1291827364530.8439.1639.101. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Streami9-13900K12369121510.2712.2512.27

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Streami9-13900K122040608010097.3481.6181.47

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080pi9-13900K1261218243026.0221.6021.611. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Cpuminer-Opt

Algorithm: Ringcoin

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Ringcoini9-13900K12120024003600480060005770.953413.943403.161. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Garlicoin

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Garlicoini9-13900K1280016002400320040003616.162877.972867.481. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Magi

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Magii9-13900K1220040060080010001156.08670.56667.221. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: LBC, LBRY Credits

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: LBC, LBRY Creditsi9-13900K1212K24K36K48K60K5799033800338001. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Deepcoin

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Deepcoini9-13900K124K8K12K16K20K2004011850118501. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Blake-2 S

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Blake-2 Si9-13900K12200K400K600K800K1000K8265804935205166201. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Skeincoin

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Skeincoini9-13900K1240K80K120K160K200K1727901002401002701. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: scrypt

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: scrypti9-13900K1280160240320400354.26209.44208.881. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Quad SHA-256, Pyrite

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Quad SHA-256, Pyritei9-13900K1250K100K150K200K250K2104201166801165401. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Myriad-Groestl

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Myriad-Groestli9-13900K125K10K15K20K25K1966020820211501. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: x25x

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: x25xi9-13900K12300600900120015001230.73736.46737.911. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Triple SHA-256, Onecoin

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Triple SHA-256, Onecoini9-13900K12100K200K300K400K500K4662802466702466801. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Y-Cruncher

Pi Digits To Calculate: 1B

OpenBenchmarking.orgSeconds, Fewer Is BetterY-Cruncher 0.7.10.9513Pi Digits To Calculate: 1Bi9-13900K1271421283522.3929.5329.40

WebP Image Encode

Encode Settings: Quality 100, Lossless, Highest Compression

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Lossless, Highest Compressioni9-13900K120.20930.41860.62790.83721.04650.930.840.831. (CC) gcc options: -fvisibility=hidden -O2 -lm

Timed Mesa Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Mesa Compilation 21.0Time To Compilei9-13900K1271421283521.0931.1930.69

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 32 - Model: AlexNeti9-13900K1250100150200250206.51139.64139.50

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 16 - Model: GoogLeNeti9-13900K12306090120150117.8071.7671.43

7-Zip Compression

Test: Decompression Rating

OpenBenchmarking.org | MIPS, More Is Better | 7-Zip Compression 22.01 | Test: Decompression Rating
i9-13900K: 146708 | 1: 87745 | 2: 87472
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression

Test: Compression Rating

OpenBenchmarking.org | MIPS, More Is Better | 7-Zip Compression 22.01 | Test: Compression Rating
i9-13900K: 186165 | 1: 127359 | 2: 127568
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
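One way to read this heavily threaded result is to compare the compression-rating gap with the thread-count gap from the system table. A hedged Python sketch, not from the original report:

# A hedged sketch (not from the report): how closely the 7-Zip MIPS scaling
# tracks the i9-13900K's extra threads (32 vs 20 per the system table).
i9_mips = 186165
i5_mips = (127359 + 127568) / 2   # average of the two i5-13600K runs

print(f"MIPS ratio:   {i9_mips / i5_mips:.2f}x")   # ~1.46x
print(f"Thread ratio: {32 / 20:.2f}x")             # 1.60x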

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080pi9-13900K120.28580.57160.85741.14321.4291.270.890.911. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Natron

Input: Spaceship

OpenBenchmarking.orgFPS, More Is BetterNatron 2.4.3Input: Spaceshipi9-13900K122468106.94.74.8

QuadRay

Scene: 5 - Resolution: 4K

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 5 - Resolution: 4Ki9-13900K120.19130.38260.57390.76520.95650.850.470.471. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 3 - Resolution: 4K

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 3 - Resolution: 4Ki9-13900K120.64581.29161.93742.58323.2292.871.641.641. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 2 - Resolution: 4K

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 2 - Resolution: 4Ki9-13900K120.78981.57962.36943.15923.9493.511.981.981. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 1 - Resolution: 4K

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 1 - Resolution: 4Ki9-13900K12369121511.766.776.791. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 5 - Resolution: 1080p

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 5 - Resolution: 1080pi9-13900K120.7561.5122.2683.0243.783.361.921.921. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 2 - Resolution: 1080p

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 2 - Resolution: 1080pi9-13900K12369121513.067.657.661. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 3 - Resolution: 1080p

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 3 - Resolution: 1080pi9-13900K12369121511.136.396.411. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 1 - Resolution: 1080p

OpenBenchmarking.orgFPS, More Is BetterQuadRay 2022.05.25Scene: 1 - Resolution: 1080pi9-13900K12102030405043.7026.1426.241. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

JPEG XL Decoding libjxl

CPU Threads: All

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.7CPU Threads: Alli9-13900K12110220330440550497.77348.52346.72

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 16 - Model: AlexNeti9-13900K124080120160200162.08119.21119.07

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4Ki9-13900K12102030405045.8543.6344.001. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Timed CPython Compilation

Build Configuration: Default

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed CPython Compilation 3.10.6Build Configuration: Defaulti9-13900K124812162012.3214.6814.69

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080pi9-13900K12153045607567.2754.5254.671. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Y-Cruncher

Pi Digits To Calculate: 500M

OpenBenchmarking.orgSeconds, Fewer Is BetterY-Cruncher 0.7.10.9513Pi Digits To Calculate: 500Mi9-13900K124812162010.3813.8213.70

FLAC Audio Encoding

WAV To FLAC

OpenBenchmarking.orgSeconds, Fewer Is BetterFLAC Audio Encoding 1.4WAV To FLACi9-13900K12369121511.3812.5612.541. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

WebP Image Encode

Encode Settings: Quality 100, Lossless

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Losslessi9-13900K120.52651.0531.57952.1062.63252.342.122.111. (CC) gcc options: -fvisibility=hidden -O2 -lm

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4Ki9-13900K12153045607565.2568.1568.021. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4Ki9-13900K122040608010086.5992.0091.911. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4Ki9-13900K122040608010086.4093.8593.731. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080pi9-13900K122040608010076.4584.7699.011. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

libavif avifenc

Encoder Speed: 6, Lossless

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 6, Losslessi9-13900K122468105.3617.2757.2851. (CXX) g++ options: -O3 -fPIC -lm

WebP Image Encode

Encode Settings: Quality 100, Highest Compression

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Highest Compressioni9-13900K121.10482.20963.31444.41925.5244.914.484.491. (CC) gcc options: -fvisibility=hidden -O2 -lm

libavif avifenc

Encoder Speed: 6

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 6i9-13900K121.08472.16943.25414.33885.42353.2594.8214.8061. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080pi9-13900K124080120160200138.43158.51155.111. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

libavif avifenc

Encoder Speed: 10, Lossless

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 10, Losslessi9-13900K120.89821.79642.69463.59284.4913.3633.9923.9911. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080pi9-13900K124080120160200159.46190.03187.651. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

WebP2 Image Encode

Encode Settings: Quality 100, Compression Effort 5

OpenBenchmarking.orgMP/s, More Is BetterWebP2 Image Encode 20220823Encode Settings: Quality 100, Compression Effort 5i9-13900K1236912159.986.056.051. (CXX) g++ options: -O3 -march=native -mno-avx512f -fno-rtti -ldl

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080pi9-13900K124080120160200162.57194.50192.681. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

WebP2 Image Encode

Encode Settings: Default

OpenBenchmarking.orgMP/s, More Is BetterWebP2 Image Encode 20220823Encode Settings: Defaulti9-13900K124812162015.4011.3511.451. (CXX) g++ options: -O3 -march=native -mno-avx512f -fno-rtti -ldl

WebP Image Encode

Encode Settings: Quality 100

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100i9-13900K124812162015.9314.8014.791. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode

Encode Settings: Default

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Defaulti9-13900K1261218243025.3722.7922.791. (CC) gcc options: -fvisibility=hidden -O2 -lm


Phoronix Test Suite v10.8.4