Intel Core i5 12400 vs. Core i5 13400 Linux Benchmarks

Benchmarks for a future article on Phoronix.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2301154-PTS-EO2022BE79
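For readers who have not used the Phoronix Test Suite before, a minimal sketch of the workflow is below. The result ID comes from this page; the svt-av1 profile at the end is only an illustrative example of running a single test on its own.

    # Install the Phoronix Test Suite (packaged on Ubuntu/Debian; the package name may vary on other distributions).
    sudo apt install phoronix-test-suite

    # Reproduce this comparison locally and append your own system's numbers to it.
    phoronix-test-suite benchmark 2301154-PTS-EO2022BE79

    # Or install and run an individual test profile, e.g. SVT-AV1 (illustrative example).
    phoronix-test-suite install svt-av1
    phoronix-test-suite run svt-av1
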
Tests in this result file span the following categories:

AV1: 2 tests
C/C++ Compiler Tests: 4 tests
CPU Massive: 7 tests
Creator Workloads: 13 tests
Cryptography: 2 tests
Database Test Suite: 2 tests
Encoding: 5 tests
Game Development: 2 tests
HPC - High Performance Computing: 8 tests
Imaging: 3 tests
Machine Learning: 5 tests
Multi-Core: 10 tests
Intel oneAPI: 3 tests
OpenMPI Tests: 2 tests
Python Tests: 6 tests
Renderers: 2 tests
Server: 3 tests
Server CPU Tests: 4 tests
Video Encoding: 4 tests
Common Workstation Benchmarks: 2 tests

Run Management

Result Identifier    Run Date           Test Duration
Core i5 12400        January 12 2023    5 Hours, 15 Minutes
12400                January 13 2023    5 Hours, 13 Minutes
13400                January 14 2023    11 Hours, 43 Minutes
Core i5 13400        January 15 2023    4 Hours, 54 Minutes
Average per run                         6 Hours, 46 Minutes



System Details

Processor: Intel Core i5-12400 @ 5.60GHz (6 Cores / 12 Threads) for the Core i5 12400 and 12400 runs; Intel Core i5-13400 @ 3.40GHz (10 Cores / 16 Threads) for the 13400 and Core i5 13400 runs
Motherboard: ASUS PRIME Z790-P WIFI (0602 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: 2000GB Samsung SSD 980 PRO 2TB
Graphics: AMD Radeon RX 6700 XT 12GB (2855/1000MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
Network: Intel Device 7a70
OS: Ubuntu 22.04
Kernel: 6.0.0-060000rc1daily20220820-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.3.0-devel (git-4685385 2022-08-23 jammy-oibaf-ppa) (LLVM 14.0.6 DRM 3.48)
Vulkan: 1.3.224
Compiler: GCC 12.0.1 20220319
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details:
Core i5 12400: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x25 - Thermald 2.4.9
12400: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x25 - Thermald 2.4.9
13400: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x25
Core i5 13400: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x25
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Result Overview chart (Phoronix Test Suite), comparing the four runs across: QuadRay, Stargate Digital Audio Workstation, miniBUDE, BRL-CAD, OpenRadioss, nekRS, Blender, Timed Linux Kernel Compilation, uvg266, libavif avifenc, Kvazaar, OpenVKL, spaCy, EnCodec, SVT-AV1, ClickHouse, Xmrig, CockroachDB, JPEG XL Decoding libjxl, Neural Magic DeepSparse, JPEG XL libjxl, TensorFlow, Y-Cruncher, and OpenVINO.

[Condensed results table: per-test values for all four runs across the full test set (OpenVINO, QuadRay, Neural Magic DeepSparse, spaCy, OpenRadioss, Stargate Digital Audio Workstation, miniBUDE, BRL-CAD, uvg266, Kvazaar, libavif avifenc, nekRS, Blender, Timed Linux Kernel Compilation, OpenVKL, SVT-AV1, CockroachDB, EnCodec, ClickHouse, Xmrig, TensorFlow, JPEG XL, Y-Cruncher, smhasher). Individual result graphs follow. Source: OpenBenchmarking.org]

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
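The OpenVINO test profile drives the toolkit's bundled benchmark_app utility. As a rough, hedged illustration of what a manual run looks like (the model file below is a placeholder, not necessarily the exact model used by this profile):

    # Benchmark an OpenVINO IR model on the CPU with benchmark_app (model path is a placeholder).
    benchmark_app -m weld-porosity-detection-0001.xml -d CPU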

OpenVINO 2022.3, Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better): Core i5 12400: 28.91 | 12400: 28.70 | 13400: 55.10 | Core i5 13400: 55.61

OpenVINO 2022.3, Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better): Core i5 12400: 7.60 | 12400: 7.58 | 13400: 14.53 | Core i5 13400: 14.34

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25, Scene: 5 - Resolution: 4K (FPS, more is better): Core i5 12400: 0.55 | 12400: 0.55 | 13400: 0.31 | Core i5 13400: 0.31

QuadRay 2022.05.25, Scene: 5 - Resolution: 1080p (FPS, more is better): Core i5 12400: 2.18 | 12400: 2.18 | 13400: 1.23 | Core i5 13400: 1.23

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better): Core i5 12400: 0.94 | 12400: 0.94 | 13400: 1.66 | Core i5 13400: 1.65

OpenVINO 2022.3, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better): Core i5 12400: 0.86 | 12400: 0.86 | 13400: 1.51 | Core i5 13400: 1.51

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25, Scene: 2 - Resolution: 4K (FPS, more is better): Core i5 12400: 2.17 | 12400: 2.18 | 13400: 1.31 | Core i5 13400: 1.31
QuadRay 2022.05.25, Scene: 3 - Resolution: 1080p (FPS, more is better): Core i5 12400: 6.90 | 12400: 6.72 | 13400: 4.21 | Core i5 13400: 4.21
QuadRay 2022.05.25, Scene: 1 - Resolution: 4K (FPS, more is better): Core i5 12400: 7.45 | 12400: 7.41 | 13400: 4.57 | Core i5 13400: 4.55
QuadRay 2022.05.25, Scene: 2 - Resolution: 1080p (FPS, more is better): Core i5 12400: 8.21 | 12400: 8.21 | 13400: 5.16 | Core i5 13400: 5.16
QuadRay 2022.05.25, Scene: 3 - Resolution: 4K (FPS, more is better): Core i5 12400: 1.70 | 12400: 1.70 | 13400: 1.08 | Core i5 13400: 1.08
QuadRay 2022.05.25, Scene: 1 - Resolution: 1080p (FPS, more is better): Core i5 12400: 27.92 | 12400: 27.65 | 13400: 17.98 | Core i5 13400: 17.95

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream (ms/batch, fewer is better): Core i5 12400: 111.75 | 12400: 112.13 | 13400: 173.50 | Core i5 13400: 173.13
Neural Magic DeepSparse 1.1, CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream (ms/batch, fewer is better): Core i5 12400: 77.42 | 12400: 76.37 | 13400: 118.51 | Core i5 13400: 116.53
Neural Magic DeepSparse 1.1, NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream (ms/batch, fewer is better): Core i5 12400: 119.73 | 12400: 121.51 | 13400: 184.54 | Core i5 13400: 185.09
Neural Magic DeepSparse 1.1, NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (ms/batch, fewer is better): Core i5 12400: 497.15 | 12400: 498.31 | 13400: 764.80 | Core i5 13400: 765.83
Neural Magic DeepSparse 1.1, NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream (ms/batch, fewer is better): Core i5 12400: 497.20 | 12400: 501.94 | 13400: 765.90 | Core i5 13400: 761.83
Neural Magic DeepSparse 1.1, CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream (ms/batch, fewer is better): Core i5 12400: 38.08 | 12400: 38.17 | 13400: 57.80 | Core i5 13400: 57.51
Neural Magic DeepSparse 1.1, NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream (ms/batch, fewer is better): Core i5 12400: 60.30 | 12400: 61.44 | 13400: 91.48 | Core i5 13400: 91.50

spaCy

spaCy is an open-source Python library for advanced natural language processing (NLP) and a leading solution in that space. This test profile times spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
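As a loose illustration of the kind of work this profile times (not the exact PTS harness), one can pull a model and push text through its pipeline on the CPU:

    # Install spaCy and the transformer-based English model used in this comparison.
    pip install spacy
    python -m spacy download en_core_web_trf

    # Run a short document through the pipeline; the text is an arbitrary placeholder.
    python -c "import spacy; nlp = spacy.load('en_core_web_trf'); print(len(nlp('Benchmarking spaCy pipelines on the CPU.')))"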

spaCy 3.4.1, Model: en_core_web_trf (tokens/sec, more is better): Core i5 12400: 1535 | 12400: 1557 | 13400: 1040 | Core i5 13400: 1034

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss, which was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13, Model: Rubber O-Ring Seal Installation (Seconds, fewer is better): Core i5 12400: 195.38 | 12400: 195.67 | 13400: 292.59 | Core i5 13400: 291.76
OpenRadioss 2022.10.13, Model: Cell Phone Drop Test (Seconds, fewer is better): Core i5 12400: 112.09 | 12400: 111.92 | 13400: 165.13 | Core i5 13400: 164.84

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) offering "a unique and carefully curated experience", with scalability from older systems up through modern multi-core systems. Stargate is GPLv3-licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5, Sample Rate: 480000 - Buffer Size: 1024 (Render Ratio, more is better): Core i5 12400: 2.641762 | 12400: 2.650367 | 13400: 3.868927 | Core i5 13400: 3.858650
Stargate Digital Audio Workstation 22.11.5, Sample Rate: 44100 - Buffer Size: 1024 (Render Ratio, more is better): Core i5 12400: 2.728048 | 12400: 2.721680 | 13400: 3.983438 | Core i5 13400: 3.973404
Stargate Digital Audio Workstation 22.11.5, Sample Rate: 96000 - Buffer Size: 1024 (Render Ratio, more is better): Core i5 12400: 1.942677 | 12400: 1.945486 | 13400: 2.821604 | Core i5 13400: 2.814264
Stargate Digital Audio Workstation 22.11.5, Sample Rate: 192000 - Buffer Size: 1024 (Render Ratio, more is better): Core i5 12400: 1.282659 | 12400: 1.284695 | 13400: 1.858552 | Core i5 13400: 1.850066
Stargate Digital Audio Workstation 22.11.5, Sample Rate: 44100 - Buffer Size: 512 (Render Ratio, more is better): Core i5 12400: 2.590008 | 12400: 2.589581 | 13400: 3.711977 | Core i5 13400: 3.675362
Stargate Digital Audio Workstation 22.11.5, Sample Rate: 480000 - Buffer Size: 512 (Render Ratio, more is better): Core i5 12400: 2.511893 | 12400: 2.503436 | 13400: 3.579666 | Core i5 13400: 3.563979
Stargate Digital Audio Workstation 22.11.5, Sample Rate: 192000 - Buffer Size: 512 (Render Ratio, more is better): Core i5 12400: 1.168183 | 12400: 1.178618 | 13400: 1.642601 | Core i5 13400: 1.620242
Stargate Digital Audio Workstation 22.11.5, Sample Rate: 96000 - Buffer Size: 512 (Render Ratio, more is better): Core i5 12400: 1.816299 | 12400: 1.811417 | 13400: 2.545532 | Core i5 13400: 2.524847

miniBUDE

miniBUDE is a mini-application covering the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM1 (GFInst/s, more is better): Core i5 12400: 255.44 | 12400: 255.53 | 13400: 185.29 | Core i5 13400: 185.85
miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, more is better): Core i5 12400: 10.217 | 12400: 10.221 | 13400: 7.412 | Core i5 13400: 7.434

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.34, VGR Performance Metric (more is better): Core i5 12400: 149598 | 12400: 149551 | 13400: 205078 | Core i5 13400: 204733

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss, which was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13, Model: Bumper Beam (Seconds, fewer is better): Core i5 12400: 178.37 | 12400: 178.13 | 13400: 242.52 | Core i5 13400: 239.47

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Face Detection FP16 - Device: CPU (FPS, more is better): Core i5 12400: 1.90 | 12400: 1.90 | 13400: 2.52 | Core i5 13400: 2.51

OpenVINO 2022.3, Model: Face Detection FP16 - Device: CPU (ms, fewer is better): Core i5 12400: 2098.29 | 12400: 2094.06 | 13400: 1583.07 | Core i5 13400: 1584.91

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 1080p - Video Preset: Slow (FPS, more is better): Core i5 12400: 22.41 | 12400: 22.40 | 13400: 29.55 | Core i5 13400: 29.37

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
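Kvazaar is driven from the command line; a hedged sketch of an encode roughly matching the Bosphorus 1080p slow-preset configuration below is shown here. The file names are placeholders, and the PTS profile supplies its own source clip and settings.

    # Encode a raw 1080p YUV 4:2:0 clip with Kvazaar at the slow preset (file names are placeholders).
    kvazaar -i Bosphorus_1920x1080.yuv --input-res 1920x1080 --preset slow -o bosphorus.hevc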

Kvazaar 2.2, Video Input: Bosphorus 1080p - Video Preset: Slow (FPS, more is better): Core i5 12400: 33.12 | 12400: 33.20 | 13400: 43.37 | Core i5 13400: 43.67
Kvazaar 2.2, Video Input: Bosphorus 1080p - Video Preset: Medium (FPS, more is better): Core i5 12400: 34.47 | 12400: 34.43 | 13400: 45.02 | Core i5 13400: 45.13

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 1080p - Video Preset: Medium (FPS, more is better): Core i5 12400: 25.14 | 12400: 25.13 | 13400: 32.93 | Core i5 13400: 32.86

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
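The "Encoder Speed" values in the avifenc results refer to its -s speed setting, where 0 is the slowest, highest-effort encode and 10 the fastest. A minimal sketch with placeholder file names:

    # Encode a JPEG to AVIF at speed 6; lower -s values trade encode time for compression efficiency.
    avifenc -s 6 input.jpg output.avif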

libavif avifenc 0.11, Encoder Speed: 6 (Seconds, fewer is better): Core i5 12400: 9.127 | 12400: 9.050 | 13400: 6.970 | Core i5 13400: 7.011

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000 from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming. Learn more via the OpenBenchmarking.org test page.

nekRS 22.0, Input: TurboPipe Periodic (FLOP/s, more is better): Core i5 12400: 56926800000 | 12400: 56538300000 | 13400: 43549200000 | Core i5 13400: 43478200000

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 6, Lossless (Seconds, fewer is better): Core i5 12400: 13.17 | 12400: 13.10 | 13400: 10.12 | Core i5 13400: 10.08

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (FPS, more is better): Core i5 12400: 82.41 | 12400: 82.38 | 13400: 107.57 | Core i5 13400: 107.50

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Person Detection FP32 - Device: CPU (ms, fewer is better): Core i5 12400: 3026.39 | 12400: 3010.56 | 13400: 2330.87 | Core i5 13400: 2341.39

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (FPS, more is better): Core i5 12400: 127.53 | 12400: 127.36 | 13400: 165.29 | Core i5 13400: 164.90

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Person Detection FP16 - Device: CPU (ms, fewer is better): Core i5 12400: 3016.23 | 12400: 3012.86 | 13400: 2328.10 | Core i5 13400: 2334.05

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.
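The Blender test profile renders the project's demo .blend scenes from the command line with Cycles restricted to the CPU. A rough equivalent, assuming a locally downloaded scene file (the path is a placeholder):

    # Render frame 1 of a scene in background mode with the Cycles engine on the CPU only.
    blender -b bmw27_cpu.blend -E CYCLES -f 1 -- --cycles-device CPU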

Blender 3.4, Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better): Core i5 12400: 154.16 | 12400: 154.30 | 13400: 119.20 | Core i5 13400: 119.15

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 1080p - Video Preset: Super Fast (FPS, more is better): Core i5 12400: 68.60 | 12400: 68.50 | 13400: 88.33 | Core i5 13400: 88.26

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Person Detection FP32 - Device: CPU (FPS, more is better): Core i5 12400: 1.32 | 12400: 1.33 | 13400: 1.70 | Core i5 13400: 1.70

OpenVINO 2022.3, Model: Person Detection FP16 - Device: CPU (FPS, more is better): Core i5 12400: 1.32 | 12400: 1.33 | 13400: 1.70 | Core i5 13400: 1.70

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 0 (Seconds, fewer is better): Core i5 12400: 180.90 | 12400: 180.65 | 13400: 140.72 | Core i5 13400: 140.47

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build that compiles all possible kernel modules. Learn more via the OpenBenchmarking.org test page.
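Outside of the PTS harness, the defconfig case boils down to roughly the following; the allmodconfig run simply swaps the configuration step:

    # Configure and time a defconfig kernel build using all available hardware threads.
    make defconfig
    time make -j"$(nproc)"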

Timed Linux Kernel Compilation 6.1, Build: allmodconfig (Seconds, fewer is better): Core i5 12400: 1597.05 | 12400: 1592.63 | 13400: 1243.64 | Core i5 13400: 1245.07

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4, Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better): Core i5 12400: 536.66 | 12400: 535.47 | 13400: 418.18 | Core i5 13400: 419.60

Blender 3.4, Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better): Core i5 12400: 445.12 | 12400: 444.90 | 13400: 348.77 | Core i5 13400: 348.54

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 1080p - Video Preset: Very Fast (FPS, more is better): Core i5 12400: 64.17 | 12400: 64.15 | 13400: 81.63 | Core i5 13400: 81.72

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build that compiles all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1, Build: defconfig (Seconds, fewer is better): Core i5 12400: 122.04 | 12400: 121.36 | 13400: 95.93 | Core i5 13400: 96.78

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4, Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better): Core i5 12400: 217.84 | 12400: 216.93 | 13400: 171.62 | Core i5 13400: 171.28

Blender 3.4, Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better): Core i5 12400: 1728.20 | 12400: 1726.29 | 13400: 1359.50 | Core i5 13400: 1365.21

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Very Fast (FPS, more is better): Core i5 12400: 14.44 | 12400: 14.47 | 13400: 18.29 | Core i5 13400: 18.25

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 2 (Seconds, fewer is better): Core i5 12400: 82.83 | 12400: 82.42 | 13400: 65.72 | Core i5 13400: 65.43

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Ultra Fast (FPS, more is better): Core i5 12400: 18.93 | 12400: 18.92 | 13400: 23.94 | Core i5 13400: 23.77

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Super Fast (FPS, more is better): Core i5 12400: 15.49 | 12400: 15.50 | 13400: 19.53 | Core i5 13400: 19.44

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 1080p - Video Preset: Super Fast (FPS, more is better): Core i5 12400: 94.93 | 12400: 95.03 | 13400: 119.27 | Core i5 13400: 119.64

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better): Core i5 12400: 25.49 | 12400: 25.57 | 13400: 32.08 | Core i5 13400: 32.00

OpenVINO 2022.3, Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better): Core i5 12400: 156.75 | 12400: 156.31 | 13400: 124.62 | Core i5 13400: 124.93

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 4K - Video Preset: Super Fast (FPS, more is better): Core i5 12400: 22.22 | 12400: 22.26 | 13400: 27.93 | Core i5 13400: 27.74
Kvazaar 2.2, Video Input: Bosphorus 4K - Video Preset: Slow (FPS, more is better): Core i5 12400: 6.93 | 12400: 6.94 | 13400: 8.67 | Core i5 13400: 8.67
Kvazaar 2.2, Video Input: Bosphorus 4K - Video Preset: Medium (FPS, more is better): Core i5 12400: 7.13 | 12400: 7.12 | 13400: 8.90 | Core i5 13400: 8.88
Kvazaar 2.2, Video Input: Bosphorus 4K - Video Preset: Ultra Fast (FPS, more is better): Core i5 12400: 30.70 | 12400: 30.66 | 13400: 38.31 | Core i5 13400: 38.18
Kvazaar 2.2, Video Input: Bosphorus 4K - Video Preset: Very Fast (FPS, more is better): Core i5 12400: 17.79 | 12400: 17.84 | 13400: 22.21 | Core i5 13400: 22.11

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar as part of the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Slow (FPS, more is better): Core i5 12400: 4.48 | 12400: 4.49 | 13400: 5.58 | Core i5 13400: 5.57

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Medium (FPS, more is better): Core i5 12400: 5.06 | 12400: 5.06 | 13400: 6.30 | Core i5 13400: 6.29

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1, Benchmark: vklBenchmark Scalar (Items / Sec, more is better): Core i5 12400: 75 | 12400: 75 | 13400: 93 | Core i5 13400: 93

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss, which was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13, Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, fewer is better): Core i5 12400: 644.83 | 12400: 644.80 | 13400: 796.75 | Core i5 13400: 785.44

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better): 13400: 92.71 | Core i5 13400: 92.56 | 12400: 75.63 | Core i5 12400: 75.45. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
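As a rough illustration only, OpenVINO's bundled benchmark_app is the usual way to collect this kind of throughput (FPS) and latency (ms) data; the model path below is a placeholder and not taken from this result file.

    import subprocess

    # Minimal sketch of an OpenVINO benchmark_app run on the CPU device.
    subprocess.run([
        "benchmark_app",
        "-m", "model/FP16/vehicle-detection.xml",  # placeholder IR model path
        "-d", "CPU",
    ], check=True)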

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better): 13400: 16.00 (MIN: 14.09 / MAX: 25.84) | Core i5 13400: 16.04 (MIN: 13.66 / MAX: 25.82) | 12400: 19.43 (MIN: 18.46 / MAX: 26.18) | Core i5 12400: 19.45 (MIN: 18.39 / MAX: 25.47)

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better): 13400: 249.83 | Core i5 13400: 249.21 | 12400: 205.79 | Core i5 12400: 205.52

(CXX) g++ options for both results: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec, More Is Better): Core i5 13400: 176 (MIN: 19 / MAX: 2712) | 13400: 175 (MIN: 18 / MAX: 2713) | 12400: 147 (MIN: 15 / MAX: 2299) | Core i5 12400: 146 (MIN: 15 / MAX: 2301). SE +/- 0.58, N = 3; Min: 174 / Avg: 175 / Max: 176

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
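For reference, a minimal sketch of an SvtAv1EncApp invocation approximating the "Preset 8 - Bosphorus 4K" configuration is shown below; the input and output file names are placeholders.

    import subprocess

    # Sketch of an SVT-AV1 encode at preset 8; file names are placeholders.
    subprocess.run([
        "SvtAv1EncApp",
        "--preset", "8",
        "-i", "Bosphorus_3840x2160.y4m",  # placeholder input
        "-b", "bosphorus_4k.ivf",
    ], check=True)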

SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better): 13400: 38.00 | Core i5 13400: 37.93 | Core i5 12400: 31.61 | 12400: 31.55. SE +/- 0.10, N = 3; Min: 37.89 / Avg: 38 / Max: 38.2

SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better): Core i5 13400: 2.889 | 13400: 2.873 | 12400: 2.408 | Core i5 12400: 2.405. SE +/- 0.004, N = 3; Min: 2.87 / Avg: 2.87 / Max: 2.88

SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): 13400: 103.93 | Core i5 13400: 103.77 | Core i5 12400: 87.11 | 12400: 86.71. SE +/- 0.36, N = 3; Min: 103.52 / Avg: 103.93 / Max: 104.66

(CXX) g++ options for all SVT-AV1 results: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.
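As an illustrative sketch (not the exact commands used by this test profile), CockroachDB's built-in kv load generator can drive a comparable read-heavy workload against a local single node; the flag values and connection URL below are assumptions.

    import subprocess

    # Hypothetical sketch of a workload similar to "KV, 95% Reads - Concurrency: 512"
    # against a local insecure single-node cluster.
    url = "postgresql://root@127.0.0.1:26257?sslmode=disable"  # placeholder URL
    subprocess.run(["cockroach", "workload", "init", "kv", url], check=True)
    subprocess.run([
        "cockroach", "workload", "run", "kv",
        "--read-percent=95", "--concurrency=512", "--duration=1m",
        url,
    ], check=True)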

CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 512 (ops/s, More Is Better): 13400: 67125.8 | Core i5 13400: 66905.6 | Core i5 12400: 56455.5 | 12400: 56109.5. SE +/- 241.43, N = 3; Min: 66644.2 / Avg: 67125.77 / Max: 67397.1

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better): 13400: 10.60 (MIN: 9.52 / MAX: 16.08) | Core i5 13400: 10.62 (MIN: 9.14 / MAX: 16.53) | Core i5 12400: 12.55 (MIN: 12.23 / MAX: 20.14) | 12400: 12.65 (MIN: 12.01 / MAX: 18.99)

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better): 13400: 377.00 | Core i5 13400: 376.18 | Core i5 12400: 318.63 | 12400: 316.13

(CXX) g++ options for both results: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
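For orientation, a minimal sketch of an avifenc invocation corresponding to the "Encoder Speed: 10, Lossless" configuration might look like the following; the file names are placeholders and the exact options used by this profile may differ.

    import subprocess

    # Sketch of a libavif encode: -s sets encoder speed (0-10), --lossless requests
    # lossless AVIF output. File names are placeholders.
    subprocess.run(["avifenc", "-s", "10", "--lossless", "input.jpg", "output.avif"], check=True)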

libavif avifenc 0.11 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better): 13400: 4.906 | Core i5 13400: 4.983 | 12400: 5.747 | Core i5 12400: 5.841. SE +/- 0.021, N = 3; Min: 4.87 / Avg: 4.91 / Max: 4.94. (CXX) g++ options: -O3 -fPIC -lm

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 256 (ops/s, More Is Better): 13400: 68318.5 | Core i5 13400: 68289.1 | Core i5 12400: 57642.8 | 12400: 57470.0. SE +/- 126.38, N = 3; Min: 68066.4 / Avg: 68318.47 / Max: 68460.7

CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 128 (ops/s, More Is Better): Core i5 13400: 69900.2 | 13400: 69633.9 | 12400: 59140.6 | Core i5 12400: 59000.6. SE +/- 104.96, N = 3; Min: 69452 / Avg: 69633.93 / Max: 69815.6

CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 1024 (ops/s, More Is Better): 13400: 65255.6 | Core i5 13400: 65146.2 | Core i5 12400: 55187.6 | 12400: 55093.5. SE +/- 72.44, N = 3; Min: 65147.9 / Avg: 65255.63 / Max: 65393.4

EnCodec

EnCodec is a Facebook/Meta-developed AI-based method for compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using this novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and the performance measurement is the time to encode the WAV input with EnCodec. Learn more via the OpenBenchmarking.org test page.
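A minimal sketch of what the encode step looks like with Meta's encodec Python package, assuming the 24 kHz model and a placeholder WAV file, is shown below.

    import torch
    import torchaudio
    from encodec import EncodecModel
    from encodec.utils import convert_audio

    # Load the 24 kHz EnCodec model and pick a target bandwidth in kbps
    # (1.5, 3, 6, 12 or 24 correspond to the bandwidths benchmarked here).
    model = EncodecModel.encodec_model_24khz()
    model.set_target_bandwidth(6.0)

    # "speech.wav" is a placeholder for the speech input used by the profile.
    wav, sr = torchaudio.load("speech.wav")
    wav = convert_audio(wav, sr, model.sample_rate, model.channels)
    with torch.no_grad():
        encoded_frames = model.encode(wav.unsqueeze(0))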

EnCodec 0.1.1 - Target Bandwidth: 6 kbps (Seconds, Fewer Is Better): 12400: 26.21 | Core i5 12400: 27.20 | Core i5 13400: 30.82 | 13400: 31.04

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): 13400: 8.68 (MIN: 7.54 / MAX: 17.41) | Core i5 13400: 8.72 (MIN: 7.54 / MAX: 22.25) | Core i5 12400: 10.25 (MIN: 10.08 / MAX: 16.3) | 12400: 10.25 (MIN: 10.11 / MAX: 17.66)

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better): 13400: 460.22 | Core i5 13400: 458.38 | 12400: 390.03 | Core i5 12400: 390.02

(CXX) g++ options for both results: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

EnCodec

EnCodec is a Facebook/Meta-developed AI-based method for compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using this novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and the performance measurement is the time to encode the WAV input with EnCodec. Learn more via the OpenBenchmarking.org test page.

EnCodec 0.1.1 - Target Bandwidth: 24 kbps (Seconds, Fewer Is Better): 12400: 29.90 | Core i5 12400: 31.18 | 13400: 34.69 | Core i5 13400: 35.28

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all separate queries performed as an aggregate. Learn more via the OpenBenchmarking.org test page.
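Since the reported value is a geometric mean across all individual queries, a small sketch of that aggregation is shown below with made-up per-query numbers.

    import math

    def geometric_mean(values):
        # exp of the mean of logs; less sensitive to a few very fast or very slow queries
        return math.exp(sum(math.log(v) for v in values) / len(values))

    # Hypothetical queries-per-minute figures for three individual queries
    print(round(geometric_mean([120.0, 450.0, 9000.0]), 2))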

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better): 13400: 212.37 (MIN: 9.19 / MAX: 7500) | Core i5 13400: 209.34 (MIN: 9.46 / MAX: 7500) | 12400: 185.39 (MIN: 6.57 / MAX: 6666.67) | Core i5 12400: 180.38 (MIN: 6.59 / MAX: 6666.67)

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 1024 (ops/s, More Is Better): 13400: 49206.7 | Core i5 13400: 49201.1 | Core i5 12400: 41950.8 | 12400: 41862.4. SE +/- 93.19, N = 3; Min: 49025.3 / Avg: 49206.67 / Max: 49334.5

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all separate queries performed as an aggregate. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better): 13400: 228.27 (MIN: 9.34 / MAX: 8571.43) | Core i5 13400: 222.15 (MIN: 9.54 / MAX: 7500) | Core i5 12400: 200.59 (MIN: 6.72 / MAX: 7500) | 12400: 199.01 (MIN: 6.76 / MAX: 6666.67)

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 1024 (ops/s, More Is Better): 13400: 51715.5 | Core i5 13400: 51692.8 | 12400: 44239.5 | Core i5 12400: 44079.6. SE +/- 106.59, N = 3; Min: 51504.3 / Avg: 51715.5 / Max: 51846.2

EnCodec

EnCodec is a Facebook/Meta-developed AI-based method for compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using this novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and the performance measurement is the time to encode the WAV input with EnCodec. Learn more via the OpenBenchmarking.org test page.

EnCodec 0.1.1 - Target Bandwidth: 1.5 kbps (Seconds, Fewer Is Better): 12400: 25.01 | Core i5 12400: 26.05 | 13400: 29.13 | Core i5 13400: 29.32

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): 13400: 429.89 (MIN: 339.29 / MAX: 913.71) | Core i5 13400: 432.13 (MIN: 360.39 / MAX: 940.27) | 12400: 503.11 (MIN: 498.88 / MAX: 511.99) | Core i5 12400: 503.59 (MIN: 492.41 / MAX: 512.07)

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better): 13400: 9.26 | Core i5 13400: 9.17 | 12400: 7.95 | Core i5 12400: 7.93

(CXX) g++ options for both results: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, Fewer Is Better): Core i5 12400: 333.39 | 12400: 333.43 | 13400: 387.78 | Core i5 13400: 388.70. SE +/- 1.87, N = 3; Min: 385.66 / Avg: 387.78 / Max: 391.51

EnCodec

EnCodec is a Facebook/Meta-developed AI-based method for compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using this novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and the performance measurement is the time to encode the WAV input with EnCodec. Learn more via the OpenBenchmarking.org test page.

EnCodec 0.1.1 - Target Bandwidth: 3 kbps (Seconds, Fewer Is Better): 12400: 26.14 | Core i5 12400: 27.08 | 13400: 30.31 | Core i5 13400: 30.43

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better): 12400: 209.02 | Core i5 12400: 207.49 | 13400: 181.19 | Core i5 13400: 179.69. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better): Core i5 13400: 142.94 | 13400: 141.41 | 12400: 123.73 | Core i5 12400: 123.34. SE +/- 0.71, N = 3; Min: 140.15 / Avg: 141.41 / Max: 142.6. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 1024 (ops/s, More Is Better): Core i5 13400: 42551.0 | 13400: 42546.1 | 12400: 36894.8 | Core i5 12400: 36820.5. SE +/- 24.23, N = 3; Min: 42509.9 / Avg: 42546.1 / Max: 42592.1

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
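As a rough sketch, recent xmrig releases ship an offline benchmark mode that computes a fixed number of RandomX hashes without connecting to a pool; the exact options used by this profile may differ.

    import subprocess

    # Sketch of xmrig's built-in benchmark: --bench=1M hashes one million RandomX rounds.
    subprocess.run(["xmrig", "--bench=1M"], check=True)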

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better): Core i5 13400: 7156.9 | 13400: 7122.9 | Core i5 12400: 6231.7 | 12400: 6201.5. SE +/- 15.59, N = 3; Min: 7095.7 / Avg: 7122.87 / Max: 7149.7. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better): 12400: 790.35 | Core i5 12400: 788.77 | Core i5 13400: 694.55 | 13400: 684.89. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better): 13400: 134.95 | Core i5 13400: 134.25 | Core i5 12400: 117.30 | 12400: 116.98. SE +/- 0.26, N = 3; Min: 134.47 / Avg: 134.95 / Max: 135.35

SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): 13400: 9.098 | Core i5 13400: 9.012 | 12400: 7.915 | Core i5 12400: 7.899. SE +/- 0.055, N = 3; Min: 9 / Avg: 9.1 / Max: 9.19

(CXX) g++ options for both results: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.
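For orientation, the upstream tf_cnn_benchmarks.py script is typically invoked along the lines of the sketch below; the flag values shown are assumptions approximating the "CPU - Batch Size: 64 - Model: GoogLeNet" configuration, not the exact command used by this profile.

    import subprocess

    # Sketch of a CPU-only tf_cnn_benchmarks run (flag values are assumptions).
    subprocess.run([
        "python", "tf_cnn_benchmarks.py",
        "--device=cpu",
        "--data_format=NHWC",
        "--model=googlenet",
        "--batch_size=64",
    ], check=True)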

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, More Is Better): 13400: 57.51 | Core i5 13400: 57.50 | 12400: 49.97 | Core i5 12400: 49.94. SE +/- 0.13, N = 3; Min: 57.25 / Avg: 57.51 / Max: 57.67

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 512 (ops/s, More Is Better): 13400: 50477.0 | Core i5 13400: 50413.5 | 12400: 43981.0 | Core i5 12400: 43959.3. SE +/- 42.65, N = 3; Min: 50405 / Avg: 50477.03 / Max: 50552.6

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all separate queries performed as an aggregate. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better): 13400: 225.55 (MIN: 9.32 / MAX: 7500) | Core i5 13400: 224.48 (MIN: 9.15 / MAX: 7500) | Core i5 12400: 202.90 (MIN: 6.79 / MAX: 8571.43) | 12400: 201.10 (MIN: 6.78 / MAX: 7500)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, More Is Better): 13400: 54.74 | Core i5 13400: 54.72 | 12400: 48.09 | Core i5 12400: 47.99. SE +/- 0.11, N = 3; Min: 54.53 / Avg: 54.74 / Max: 54.88

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, More Is Better): 13400: 56.00 | Core i5 13400: 55.95 | Core i5 12400: 49.29 | 12400: 49.17. SE +/- 0.07, N = 3; Min: 55.88 / Avg: 56 / Max: 56.12

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M (H/s, More Is Better): 13400: 5168.4 | Core i5 13400: 5164.3 | 12400: 4590.0 | Core i5 12400: 4585.3. SE +/- 5.44, N = 3; Min: 5159.4 / Avg: 5168.37 / Max: 5178.2. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 - CPU Threads: All (MP/s, More Is Better): Core i5 13400: 304.23 | 13400: 304.22 | 12400: 271.97 | Core i5 12400: 269.98. SE +/- 0.38, N = 3; Min: 303.46 / Avg: 304.22 / Max: 304.64

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): 13400: 502.96 | Core i5 13400: 500.43 | Core i5 12400: 453.39 | 12400: 447.69. SE +/- 3.43, N = 3; Min: 496.53 / Avg: 502.96 / Max: 508.26. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 512 (ops/s, More Is Better): Core i5 13400: 45816.5 | 13400: 45770.2 | Core i5 12400: 41073.5 | 12400: 40974.9. SE +/- 45.17, N = 3; Min: 45682.6 / Avg: 45770.17 / Max: 45833.2

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): Core i5 13400: 54.58 | 13400: 54.51 | Core i5 12400: 49.75 | 12400: 48.82. SE +/- 0.05, N = 3; Min: 54.42 / Avg: 54.51 / Max: 54.57

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 512 (ops/s, More Is Better): Core i5 13400: 34253.8 | 13400: 34012.0 | 12400: 30683.9 | Core i5 12400: 30654.1. SE +/- 56.87, N = 3; Min: 33937.2 / Avg: 34012 / Max: 34123.6

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
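As a rough sketch, a libjxl encode corresponding to something like "Input: PNG - Quality: 90" can be driven with the cjxl reference encoder; the file names below are placeholders.

    import subprocess

    # Sketch of a reference libjxl encode; -q sets the quality level.
    subprocess.run(["cjxl", "input.png", "output.jxl", "-q", "90"], check=True)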

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 90 (MP/s, More Is Better): Core i5 13400: 9.61 | 13400: 9.59 | 12400: 8.92 | Core i5 12400: 8.68. SE +/- 0.01, N = 3; Min: 9.58 / Avg: 9.59 / Max: 9.6. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): Core i5 13400: 86.88 | 13400: 86.36 | Core i5 12400: 78.75 | 12400: 78.56. SE +/- 0.33, N = 3; Min: 85.72 / Avg: 86.36 / Max: 86.84

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): Core i5 13400: 42.75 | 13400: 42.08 | 12400: 39.25 | Core i5 12400: 38.71. SE +/- 0.17, N = 3; Min: 41.82 / Avg: 42.08 / Max: 42.39

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s, More Is Better): Core i5 13400: 9.77 | 13400: 9.75 | 12400: 9.09 | Core i5 12400: 8.87. SE +/- 0.00, N = 3; Min: 9.75 / Avg: 9.75 / Max: 9.76. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): Core i5 13400: 6.5295 | 13400: 6.4779 | Core i5 12400: 6.0336 | 12400: 5.9666. SE +/- 0.0214, N = 3; Min: 6.45 / Avg: 6.48 / Max: 6.52

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): 13400: 26.93 | Core i5 13400: 26.81 | Core i5 12400: 25.05 | 12400: 24.69. SE +/- 0.07, N = 3; Min: 26.81 / Avg: 26.93 / Max: 27.04

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 90 (MP/s, More Is Better): Core i5 13400: 9.72 | 13400: 9.71 | 12400: 9.08 | Core i5 12400: 8.93. SE +/- 0.01, N = 3; Min: 9.7 / Avg: 9.71 / Max: 9.72. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13400: 6.1961 | Core i5 13400: 6.1829 | Core i5 12400: 5.7776 | 12400: 5.7122. SE +/- 0.0097, N = 3; Min: 6.18 / Avg: 6.2 / Max: 6.21

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): 13400: 161.39 | Core i5 13400: 161.73 | Core i5 12400: 173.08 | 12400: 175.06. SE +/- 0.25, N = 3; Min: 161.04 / Avg: 161.39 / Max: 161.88

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): 13400: 537.89 | Core i5 13400: 533.94 | Core i5 12400: 497.71 | 12400: 497.22. SE +/- 2.55, N = 3; Min: 534.28 / Avg: 537.89 / Max: 542.81. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s, More Is Better): 13400: 9.88 | Core i5 13400: 9.87 | 12400: 9.25 | Core i5 12400: 9.14. SE +/- 0.01, N = 3; Min: 9.87 / Avg: 9.88 / Max: 9.89. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): 13400: 6.5068 | Core i5 13400: 6.4929 | Core i5 12400: 6.0338 | 12400: 6.0197. SE +/- 0.0278, N = 3; Min: 6.47 / Avg: 6.51 / Max: 6.56

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): 13400: 161.46 | Core i5 13400: 161.69 | Core i5 12400: 172.73 | 12400: 174.38. SE +/- 0.02, N = 3; Min: 161.44 / Avg: 161.46 / Max: 161.49

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13400: 6.1932 | Core i5 13400: 6.1844 | Core i5 12400: 5.7894 | 12400: 5.7346. SE +/- 0.0006, N = 3; Min: 6.19 / Avg: 6.19 / Max: 6.19

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): Core i5 13400: 21.14 | 13400: 21.42 | Core i5 12400: 22.15 | 12400: 22.82. SE +/- 0.04, N = 3; Min: 21.37 / Avg: 21.42 / Max: 21.5

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better): Core i5 13400: 47.30 | 13400: 46.69 | Core i5 12400: 45.13 | 12400: 43.81. SE +/- 0.09, N = 3; Min: 46.51 / Avg: 46.69 / Max: 46.79

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, More Is Better): 13400: 117.43 | Core i5 13400: 117.14 | Core i5 12400: 109.57 | 12400: 109.35. SE +/- 0.25, N = 3; Min: 116.99 / Avg: 117.43 / Max: 117.87

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): Core i5 13400: 28.70 | 13400: 28.67 | Core i5 12400: 26.84 | 12400: 26.75. SE +/- 0.10, N = 3; Min: 28.47 / Avg: 28.67 / Max: 28.77

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better): Core i5 12400: 6329.92 | 12400: 6317.97 | Core i5 13400: 6046.32 | 13400: 5970.93. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): 13400: 43.06 | Core i5 13400: 43.15 | Core i5 12400: 43.84 | 12400: 45.60. SE +/- 0.05, N = 3; Min: 42.98 / Avg: 43.06 / Max: 43.14

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13400: 23.22 | Core i5 13400: 23.17 | Core i5 12400: 22.81 | 12400: 21.93. SE +/- 0.02, N = 3; Min: 23.18 / Avg: 23.22 / Max: 23.26

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, More Is Better): 13400: 108.31 | Core i5 13400: 107.89 | 12400: 102.84 | Core i5 12400: 102.40. SE +/- 0.40, N = 3; Min: 107.81 / Avg: 108.31 / Max: 109.1

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better): Core i5 12400: 6963.17 | 12400: 6960.14 | 13400: 6606.33 | Core i5 13400: 6605.86. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, More Is Better): Core i5 13400: 94.38 | 13400: 94.05 | Core i5 12400: 90.31 | 12400: 89.96. SE +/- 0.07, N = 3; Min: 93.94 / Avg: 94.05 / Max: 94.17

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 (MP/s, More Is Better): Core i5 13400: 0.88 | 13400: 0.88 | 12400: 0.84 | Core i5 12400: 0.84. SE +/- 0.00, N = 3; Min: 0.87 / Avg: 0.88 / Max: 0.88. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

spaCy

The spaCy library is an open-source solution for advanced natural language processing (NLP). The spaCy library leverages Python and is a leading natural language processing solution. This test profile times the spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
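A minimal sketch of loading the en_core_web_lg pipeline benchmarked here and processing text with it:

    import spacy

    # Load the large English pipeline and run it over a sample sentence;
    # the benchmark measures how many tokens/sec such processing sustains.
    nlp = spacy.load("en_core_web_lg")
    doc = nlp("The quick brown fox jumps over the lazy dog.")
    print([(token.text, token.pos_) for token in doc])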

spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, More Is Better): Core i5 13400: 16632 | 13400: 16459 | Core i5 12400: 15907 | 12400: 15885. SE +/- 53.42, N = 3; Min: 16361 / Avg: 16458.67 / Max: 16545

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better): Core i5 12400: 19.59 | 12400: 19.65 | Core i5 13400: 20.43 | 13400: 20.46. SE +/- 0.06, N = 3; Min: 20.4 / Avg: 20.46 / Max: 20.57

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 - CPU Threads: 1 (MP/s, More Is Better): 13400: 57.99 | Core i5 13400: 57.91 | 12400: 56.88 | Core i5 12400: 55.74. SE +/- 0.12, N = 3; Min: 57.82 / Avg: 57.99 / Max: 58.22

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): Core i5 12400: 42.36 | 12400: 42.82 | Core i5 13400: 43.96 | 13400: 44.04. SE +/- 0.11, N = 3; Min: 43.86 / Avg: 44.04 / Max: 44.23

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): Core i5 12400: 23.60 | 12400: 23.35 | Core i5 13400: 22.75 | 13400: 22.71. SE +/- 0.06, N = 3; Min: 22.6 / Avg: 22.71 / Max: 22.79

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100 (MP/s, More Is Better): Core i5 13400: 0.86 | 13400: 0.86 | 12400: 0.83 | Core i5 12400: 0.83. SE +/- 0.00, N = 3; Min: 0.86 / Avg: 0.86 / Max: 0.87. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 128 (ops/s, More Is Better): Core i5 13400: 12118.2 | 13400: 12043.8 | Core i5 12400: 11841.8 | 12400: 11716.5. SE +/- 141.12, N = 3; Min: 11762.1 / Avg: 12043.83 / Max: 12199.2

CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 256 (ops/s, More Is Better): Core i5 13400: 13704.0 | 13400: 13657.0 | 12400: 13478.1 | Core i5 12400: 13343.2. SE +/- 57.49, N = 3; Min: 13574.5 / Avg: 13656.97 / Max: 13767.6

CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 128 (ops/s, More Is Better): Core i5 13400: 14961.3 | 13400: 14829.2 | 12400: 14628.7 | Core i5 12400: 14590.1. SE +/- 39.43, N = 3; Min: 14751.5 / Avg: 14829.17 / Max: 14879.8

CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 128 (ops/s, More Is Better): Core i5 13400: 6770.5 | 13400: 6690.8 | Core i5 12400: 6631.2 | 12400: 6622.0. SE +/- 28.90, N = 3; Min: 6641.4 / Avg: 6690.8 / Max: 6741.5

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better): Core i5 12400: 42.66 | 12400: 42.77 | 13400: 43.37 | Core i5 13400: 43.61. SE +/- 0.13, N = 3; Min: 43.17 / Avg: 43.37 / Max: 43.61

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, More Is Better): 12400: 19.32 | Core i5 12400: 19.28 | Core i5 13400: 18.94 | 13400: 18.92. SE +/- 0.02, N = 3; Min: 18.88 / Avg: 18.92 / Max: 18.95

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 256 (ops/s, More Is Better): Core i5 13400: 24114.2 | 13400: 23946.2 | Core i5 12400: 23659.6 | 12400: 23631.1. SE +/- 42.71, N = 3; Min: 23862 / Avg: 23946.17 / Max: 24000.9

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, More Is Better): 12400: 19.72 | Core i5 12400: 19.71 | 13400: 19.47 | Core i5 13400: 19.43. SE +/- 0.02, N = 3; Min: 19.42 / Avg: 19.47 / Max: 19.49

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better): Core i5 13400: 18.97 | 13400: 18.94 | 12400: 18.73 | Core i5 12400: 18.70. SE +/- 0.01, N = 3; Min: 18.92 / Avg: 18.94 / Max: 18.95

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: MoVR - Concurrency: 1024 (ops/s, More Is Better): Core i5 12400: 125.8 | 12400: 124.8 | Core i5 13400: 124.6 | 13400: 124.4. SE +/- 0.07, N = 3; Min: 124.3 / Avg: 124.37 / Max: 124.5

CockroachDB 22.2 - Workload: MoVR - Concurrency: 128 (ops/s, More Is Better): 12400: 125.3 | Core i5 12400: 125.3 | Core i5 13400: 124.8 | 13400: 124.2. SE +/- 0.07, N = 3; Min: 124.1 / Avg: 124.23 / Max: 124.3

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): 13400: 14.60 | Core i5 13400: 14.61 | 12400: 14.68 | Core i5 12400: 14.72. SE +/- 0.01, N = 3; Min: 14.58 / Avg: 14.6 / Max: 14.62

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13400: 68.48 | Core i5 13400: 68.45 | 12400: 68.09 | Core i5 12400: 67.91. SE +/- 0.04, N = 3; Min: 68.4 / Avg: 68.48 / Max: 68.54

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13400: 35.05 | Core i5 13400: 35.04 | 12400: 34.76 | Core i5 12400: 34.76. SE +/- 0.06, N = 3; Min: 34.94 / Avg: 35.05 / Max: 35.14

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): 13400: 28.52 | Core i5 13400: 28.53 | 12400: 28.76 | Core i5 12400: 28.76. SE +/- 0.05, N = 3; Min: 28.46 / Avg: 28.52 / Max: 28.62

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data intensive applications. This test profile uses a server-less CockroachDB configuration to test various Coackroach workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: MoVR - Concurrency: 512 (ops/s, more is better)
  Core i5 12400: 125.4
  12400: 124.9
  Core i5 13400: 124.6
  13400: 124.4  (SE +/- 0.10, N = 3; Min: 124.3 / Avg: 124.4 / Max: 124.6)

CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 256 (ops/s, more is better)
  Core i5 13400: 29664.7
  Core i5 12400: 29557.2
  12400: 29540.2
  13400: 29512.3  (SE +/- 213.65, N = 3; Min: 29130.7 / Avg: 29512.3 / Max: 29869.6)

CockroachDB 22.2 - Workload: MoVR - Concurrency: 256 (ops/s, more is better)
  Core i5 13400: 124.9
  12400: 124.9
  Core i5 12400: 124.9
  13400: 124.3  (SE +/- 0.00, N = 3; Min: 124.3 / Avg: 124.3 / Max: 124.3)
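For readers unfamiliar with the KV workloads above, the following is a minimal, hypothetical Python sketch of what a "KV, 60% Reads" style loop does conceptually against a single local node. It is not the cockroach workload generator used by the test profile; the connection string, port 26257, and the kv(k INT, v STRING) table are assumptions for a locally running insecure node.

    # Hypothetical sketch only -- not the test profile's workload generator.
    # Assumes a single insecure CockroachDB node on localhost:26257 and an
    # existing table: CREATE TABLE kv (k INT PRIMARY KEY, v STRING).
    import random
    import psycopg2  # CockroachDB speaks the PostgreSQL wire protocol

    conn = psycopg2.connect("postgresql://root@localhost:26257/defaultdb?sslmode=disable")
    conn.autocommit = True
    cur = conn.cursor()

    for _ in range(1000):
        key = random.randint(0, 10_000)
        if random.random() < 0.60:   # 60% reads
            cur.execute("SELECT v FROM kv WHERE k = %s", (key,))
            cur.fetchall()
        else:                        # 40% writes
            cur.execute("UPSERT INTO kv (k, v) VALUES (%s, %s)", (key, "payload"))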

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

Connections: 4000 / 1000 / 500 / 200 / 100 / 20

All four configurations (Core i5 12400, 12400, 13400, Core i5 13400): The test quit with a non-zero exit status. E: unable to connect to 127.0.0.1:8089 Connection refused

Connections: 1

All four configurations: The test quit with a non-zero exit status.
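The "Connection refused" errors indicate the nginx server behind the test never came up on the expected local port. As a minimal, hypothetical pre-flight check (not part of the test profile), one can verify whether 127.0.0.1:8089 is accepting TCP connections before a run:

    # Hypothetical pre-flight check, not part of the nginx test profile: verify
    # that the local endpoint named in the errors above is accepting connections.
    import socket

    def port_open(host="127.0.0.1", port=8089, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print("listening" if port_open() else "connection refused - server not up")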

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

All oneDNN harnesses quit with a non-zero exit status on all four configurations (Core i5 12400, 12400, 13400, Core i5 13400). The affected harness / data-type combinations (Engine: CPU in every case) were:

  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16, u8s8f32, f32
  Recurrent Neural Network Inference - bf16bf16bf16, u8s8f32, f32
  Recurrent Neural Network Training - bf16bf16bf16, u8s8f32, f32
  Deconvolution Batch shapes_1d - bf16bf16bf16, u8s8f32, f32
  Deconvolution Batch shapes_3d - bf16bf16bf16, u8s8f32, f32
  Convolution Batch Shapes Auto - bf16bf16bf16, u8s8f32, f32
  IP Shapes 1D - bf16bf16bf16, u8s8f32, f32
  IP Shapes 3D - bf16bf16bf16, u8s8f32, f32

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.
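As a rough illustration of the kind of bulk-throughput measurement SMHasher reports (this is not the SMHasher harness itself), here is a short Python sketch timing hashlib's SHA3-256, one of the hashes listed below, over a fixed buffer:

    # Rough, hypothetical illustration of hash throughput measurement using
    # Python's hashlib -- not the SMHasher harness used by the test profile.
    import hashlib
    import time

    data = bytes(64 * 1024 * 1024)          # 64 MiB of zero bytes
    start = time.perf_counter()
    hashlib.sha3_256(data).digest()
    elapsed = time.perf_counter() - start
    print(f"SHA3-256: {len(data) / elapsed / 1e6:.1f} MB/s")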

Every SMHasher hash test quit with a non-zero exit status on all four configurations (Core i5 12400, 12400, 13400, Core i5 13400), each with the error: ./smhasher: 3: ./SMHasher: not found. The affected hashes were:

  MeowHash x86_64 AES-NI
  t1ha0_aes_avx2 x86_64
  FarmHash32 x86_64 AVX
  t1ha2_atonce
  FarmHash128
  fasthash32
  Spooky32
  SHA3-256
  wyhash

169 Results Shown

OpenVINO:
  Weld Porosity Detection FP16 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
QuadRay:
  5 - 4K
  5 - 1080p
OpenVINO:
  Age Gender Recognition Retail 0013 FP16 - CPU
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
QuadRay:
  2 - 4K
  3 - 1080p
  1 - 4K
  2 - 1080p
  3 - 4K
  1 - 1080p
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
spaCy
OpenRadioss:
  Rubber O-Ring Seal Installation
  Cell Phone Drop Test
Stargate Digital Audio Workstation:
  480000 - 1024
  44100 - 1024
  96000 - 1024
  192000 - 1024
  44100 - 512
  480000 - 512
  192000 - 512
  96000 - 512
miniBUDE:
  OpenMP - BM1:
    GFInst/s
    Billion Interactions/s
BRL-CAD
OpenRadioss
OpenVINO:
  Face Detection FP16 - CPU:
    FPS
    ms
uvg266
Kvazaar:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Medium
uvg266
libavif avifenc
nekRS
libavif avifenc
uvg266
OpenVINO
Kvazaar
OpenVINO
Blender
uvg266
OpenVINO:
  Person Detection FP32 - CPU
  Person Detection FP16 - CPU
libavif avifenc
Timed Linux Kernel Compilation
Blender:
  Pabellon Barcelona - CPU-Only
  Classroom - CPU-Only
uvg266
Timed Linux Kernel Compilation
Blender:
  Fishy Cat - CPU-Only
  Barbershop - CPU-Only
uvg266
libavif avifenc
uvg266:
  Bosphorus 4K - Ultra Fast
  Bosphorus 4K - Super Fast
Kvazaar
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
Kvazaar:
  Bosphorus 4K - Super Fast
  Bosphorus 4K - Slow
  Bosphorus 4K - Medium
  Bosphorus 4K - Ultra Fast
  Bosphorus 4K - Very Fast
uvg266:
  Bosphorus 4K - Slow
  Bosphorus 4K - Medium
OpenVKL
OpenRadioss
Kvazaar
OpenVINO:
  Vehicle Detection FP16 - CPU:
    ms
    FPS
OpenVKL
SVT-AV1:
  Preset 8 - Bosphorus 4K
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 1080p
CockroachDB
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
libavif avifenc
CockroachDB:
  KV, 95% Reads - 256
  KV, 95% Reads - 128
  KV, 95% Reads - 1024
EnCodec
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
EnCodec
ClickHouse
CockroachDB
ClickHouse
CockroachDB
EnCodec
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
OpenRadioss
EnCodec
OpenVINO
SVT-AV1
CockroachDB
Xmrig
OpenVINO
SVT-AV1:
  Preset 12 - Bosphorus 4K
  Preset 4 - Bosphorus 1080p
TensorFlow
CockroachDB
ClickHouse
TensorFlow:
  CPU - 16 - GoogLeNet
  CPU - 32 - GoogLeNet
Xmrig
JPEG XL Decoding libjxl
SVT-AV1
CockroachDB
Neural Magic DeepSparse
CockroachDB
JPEG XL libjxl
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream
JPEG XL libjxl
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
JPEG XL libjxl
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    items/sec
    ms/batch
SVT-AV1
JPEG XL libjxl
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream
TensorFlow
Neural Magic DeepSparse
OpenVINO
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
TensorFlow
OpenVINO
TensorFlow
JPEG XL libjxl
spaCy
Y-Cruncher
JPEG XL Decoding libjxl
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    ms/batch
    items/sec
JPEG XL libjxl
CockroachDB:
  KV, 50% Reads - 128
  KV, 10% Reads - 256
  KV, 60% Reads - 128
  KV, 10% Reads - 128
Y-Cruncher
TensorFlow
CockroachDB
TensorFlow:
  CPU - 64 - ResNet-50
  CPU - 16 - ResNet-50
CockroachDB:
  MoVR - 1024
  MoVR - 128
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
CockroachDB:
  MoVR - 512
  KV, 60% Reads - 256
  MoVR - 256