extra tests2

AMD EPYC 9334 32-Core testing with a Supermicro H13SSW (1.1 BIOS) and astdrmfb on AlmaLinux 9.2 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310249-NE-EXTRATEST98
Jump To Table - Results

View

Do Not Show Noisy Results
Do Not Show Results With Incomplete Data
Do Not Show Results With Little Change/Spread
List Notable Results
Show Result Confidence Charts

Limit displaying results to tests within:

CPU Massive 5 Tests
Creator Workloads 9 Tests
Database Test Suite 2 Tests
Game Development 3 Tests
HPC - High Performance Computing 6 Tests
Java Tests 2 Tests
Machine Learning 3 Tests
Multi-Core 10 Tests
Intel oneAPI 6 Tests
OpenMPI Tests 5 Tests
Python Tests 2 Tests
Renderers 2 Tests
Server 2 Tests
Server CPU Tests 4 Tests
Common Workstation Benchmarks 2 Tests

Statistics

Show Overall Harmonic Mean(s)
Show Overall Geometric Mean
Show Geometric Means Per-Suite/Category
Show Wins / Losses Counts (Pie Chart)
Normalize Results
Remove Outliers Before Calculating Averages

Graph Settings

Force Line Graphs Where Applicable
Convert To Scalar Where Applicable
Prefer Vertical Bar Graphs

Additional Graphs

Show Perf Per Core/Thread Calculation Graphs Where Applicable
Show Perf Per Clock Calculation Graphs Where Applicable

Multi-Way Comparison

Condense Multi-Option Tests Into Single Result Graphs
Condense Test Profiles With Multiple Version Results Into Single Result Graphs

Table

Show Detailed System Result Table

Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
a
October 07 2023
  5 Hours, 38 Minutes
b
October 07 2023
  5 Hours, 6 Minutes
c
October 07 2023
  4 Hours, 48 Minutes
d
October 20 2023
  5 Hours, 58 Minutes
e
October 20 2023
  6 Hours, 40 Minutes
f
October 21 2023
  6 Hours, 19 Minutes
g
October 22 2023
  5 Hours, 59 Minutes
h
October 24 2023
  6 Hours, 2 Minutes
i
October 24 2023
  6 Hours, 8 Minutes
j
October 24 2023
  5 Hours, 52 Minutes
Invert Hiding All Results Option
  5 Hours, 51 Minutes

Only show results where is faster than
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


extra tests2ProcessorMotherboardMemoryDiskGraphicsMonitorOSKernelCompilerFile-SystemScreen Resolutionabcdefghij2 x AMD EPYC 9254 24-Core @ 2.90GHz (48 Cores / 96 Threads)Supermicro H13DSH (1.5 BIOS)24 x 32 GB DDR5-4800MT/s Samsung M321R4GA3BB6-CQKET2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07astdrmfbAlmaLinux 9.25.14.0-284.25.1.el9_2.x86_64 (x86_64)GCC 11.3.1 20221121ext41024x768AMD EPYC 9124 16-Core @ 3.00GHz (16 Cores / 32 Threads)Supermicro H13SSW (1.1 BIOS)12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123NAMD EPYC 9334 32-Core @ 2.70GHz (32 Cores / 64 Threads)DELL E207WFP1680x1050OpenBenchmarking.orgKernel Details- Transparent Huge Pages: alwaysCompiler Details- --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl Processor Details- a: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e- b: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e- c: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e- d: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111- e: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111- f: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111- g: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111- h: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111- i: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111- j: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111Java Details- OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)Python Details- Python 3.9.16Security Details- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

abcdefghijResult OverviewPhoronix Test Suite100%143%187%230%273%BlenderBRL-CADIntel Open Image DenoiseEmbreeSPECFEM3DOSPRayTiDB Community ServerTimed Linux Kernel CompilationRemhosOpenVINONeural Magic DeepSparseApache CassandraApache HadoopLiquid-DSPSVT-AV1nekRS

extra tests2specfem3d: Tomographic Modelspecfem3d: Homogeneous Halfspacespecfem3d: Mount St. Helensspecfem3d: Layered Halfspacebrl-cad: VGR Performance Metricremhos: Sample Remap Examplequantlib: Single-Threadedspecfem3d: Water-layered Halfspacequantlib: Multi-Threadednekrs: Kershawnekrs: TurboPipe Periodicdeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamdeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Streamdeepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Streamdeepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Streamdeepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Streamdeepsparse: ResNet-50, Baseline - Asynchronous Multi-Streamdeepsparse: ResNet-50, Baseline - Asynchronous Multi-Streamdeepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Streamdeepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamdeepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streamdeepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streamdeepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Streamtidb: oltp_update_index - 16deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Streamtidb: oltp_read_write - 32deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamdeepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamonednn: Convolution Batch Shapes Auto - f32 - CPUtidb: oltp_read_write - 64onednn: Convolution Batch Shapes Auto - u8s8f32 - CPUtidb: oltp_update_non_index - 1tidb: oltp_update_non_index - 32onednn: Convolution Batch Shapes Auto - bf16bf16bf16 - CPUtidb: oltp_point_select - 128onednn: Deconvolution Batch shapes_1d - f32 - CPUonednn: Deconvolution Batch shapes_1d - u8s8f32 - CPUonednn: Deconvolution Batch shapes_1d - bf16bf16bf16 - CPUonednn: Deconvolution Batch shapes_3d - f32 - CPUonednn: Deconvolution Batch shapes_3d - u8s8f32 - CPUonednn: Deconvolution Batch shapes_3d - bf16bf16bf16 - CPUonednn: IP Shapes 1D - f32 - CPUonednn: IP Shapes 1D - u8s8f32 - CPUonednn: IP Shapes 1D - bf16bf16bf16 - CPUtidb: oltp_update_non_index - 128tidb: oltp_read_write - 1onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPUtidb: oltp_read_write - 16tidb: oltp_point_select - 64tidb: oltp_point_select - 32tidb: oltp_point_select - 16tidb: oltp_update_index - 1onednn: Recurrent Neural Network Inference - f32 - CPUtidb: oltp_update_index - 64onednn: IP Shapes 3D - u8s8f32 - CPUtidb: oltp_update_index - 128onednn: IP Shapes 3D - bf16bf16bf16 - CPUtidb: oltp_update_non_index - 16onednn: Recurrent Neural Network Training - f32 - CPUtidb: oltp_update_non_index - 64tidb: oltp_read_write - 128tidb: oltp_point_select - 1onednn: IP Shapes 3D - f32 - CPUtidb: oltp_update_index - 32onednn: Recurrent Neural Network Training - u8s8f32 - CPUonednn: Recurrent Neural Network Inference - u8s8f32 - CPUonednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPUopenvino: Face Detection FP16 - CPUopenvino: Face Detection FP16 - CPUopenvino: Person Detection FP16 - CPUopenvino: Person Detection FP16 - CPUopenvino: Person Detection FP32 - CPUopenvino: Person Detection FP32 - CPUopenvino: Vehicle Detection FP16 - CPUopenvino: Vehicle Detection FP16 - CPUopenvino: Face Detection FP16-INT8 - CPUopenvino: Face Detection FP16-INT8 - CPUopenvino: Face Detection Retail FP16 - CPUopenvino: Face Detection Retail FP16 - CPUopenvino: Road Segmentation ADAS FP16 - CPUopenvino: Road Segmentation ADAS FP16 - CPUopenvino: Vehicle Detection FP16-INT8 - CPUopenvino: Vehicle Detection FP16-INT8 - CPUopenvino: Weld Porosity Detection FP16 - CPUopenvino: Weld Porosity Detection FP16 - CPUopenvino: Face Detection Retail FP16-INT8 - CPUopenvino: Face Detection Retail FP16-INT8 - CPUopenvino: Road Segmentation ADAS FP16-INT8 - CPUopenvino: Road Segmentation ADAS FP16-INT8 - CPUopenvino: Machine Translation EN To DE FP16 - CPUopenvino: Machine Translation EN To DE FP16 - CPUopenvino: Weld Porosity Detection FP16-INT8 - CPUopenvino: Weld Porosity Detection FP16-INT8 - CPUopenvino: Person Vehicle Bike Detection FP16 - CPUopenvino: Person Vehicle Bike Detection FP16 - CPUopenvino: Handwritten English Recognition FP16 - CPUopenvino: Handwritten English Recognition FP16 - CPUopenvino: Age Gender Recognition Retail 0013 FP16 - CPUopenvino: Age Gender Recognition Retail 0013 FP16 - CPUopenvino: Handwritten English Recognition FP16-INT8 - CPUopenvino: Handwritten English Recognition FP16-INT8 - CPUopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUkripke: build-linux-kernel: defconfigsvt-av1: Preset 4 - Bosphorus 4Ksvt-av1: Preset 8 - Bosphorus 4Ksvt-av1: Preset 12 - Bosphorus 4Ksvt-av1: Preset 13 - Bosphorus 4Ksvt-av1: Preset 4 - Bosphorus 1080psvt-av1: Preset 8 - Bosphorus 1080psvt-av1: Preset 12 - Bosphorus 1080psvt-av1: Preset 13 - Bosphorus 1080pblender: BMW27 - CPU-Onlyblender: Classroom - CPU-Onlyblender: Fishy Cat - CPU-Onlyblender: Barbershop - CPU-Onlyblender: Pabellon Barcelona - CPU-Onlyembree: Pathtracer - Crownembree: Pathtracer ISPC - Crownembree: Pathtracer - Asian Dragonembree: Pathtracer - Asian Dragon Objembree: Pathtracer ISPC - Asian Dragonembree: Pathtracer ISPC - Asian Dragon Objembree: Pathtracer - Asian Dragonembree: Pathtracer - Asian Dragon Objembree: Pathtracer - Crownembree: Pathtracer ISPC - Asian Dragonembree: Pathtracer ISPC - Asian Dragon Objembree: Pathtracer ISPC - Crownoidn: RT.hdr_alb_nrm.3840x2160 - CPU-Onlyoidn: RT.ldr_alb_nrm.3840x2160 - CPU-Onlyoidn: RTLightmap.hdr.4096x4096 - CPU-Onlyoidn: RT.hdr_alb_nrm.3840x2160 - CPU-Onlyoidn: RT.ldr_alb_nrm.3840x2160 - CPU-Onlyoidn: RTLightmap.hdr.4096x4096 - CPU-Onlyopenvkl: vklBenchmarkCPU Scalaropenvkl: vklBenchmarkCPU ISPCospray: particle_volume/ao/real_timeospray: particle_volume/scivis/real_timeospray: particle_volume/pathtracer/real_timeospray: gravity_spheres_volume/dim_512/ao/real_timeospray: gravity_spheres_volume/dim_512/scivis/real_timeospray: gravity_spheres_volume/dim_512/pathtracer/real_timeliquid-dsp: 1 - 256 - 32liquid-dsp: 1 - 256 - 57liquid-dsp: 2 - 256 - 32liquid-dsp: 2 - 256 - 57easywave: e2Asean Grid + BengkuluSept2007 Source - 240easywave: e2Asean Grid + BengkuluSept2007 Source - 1200easywave: e2Asean Grid + BengkuluSept2007 Source - 2400liquid-dsp: 4 - 256 - 32liquid-dsp: 4 - 256 - 57liquid-dsp: 8 - 256 - 32liquid-dsp: 8 - 256 - 57liquid-dsp: 1 - 256 - 512liquid-dsp: 16 - 256 - 32liquid-dsp: 16 - 256 - 57liquid-dsp: 2 - 256 - 512liquid-dsp: 32 - 256 - 32liquid-dsp: 32 - 256 - 57liquid-dsp: 4 - 256 - 512liquid-dsp: 64 - 256 - 32liquid-dsp: 64 - 256 - 57liquid-dsp: 8 - 256 - 512liquid-dsp: 96 - 256 - 32liquid-dsp: 96 - 256 - 57liquid-dsp: 16 - 256 - 512liquid-dsp: 32 - 256 - 512liquid-dsp: 64 - 256 - 512liquid-dsp: 96 - 256 - 512hadoop: Open - 50 - 100000hadoop: Open - 100 - 100000hadoop: Open - 50 - 1000000hadoop: Create - 50 - 100000hadoop: Delete - 50 - 100000hadoop: Open - 100 - 1000000hadoop: Rename - 50 - 100000hadoop: Create - 100 - 100000hadoop: Create - 50 - 1000000hadoop: Delete - 100 - 100000hadoop: Delete - 50 - 1000000hadoop: Rename - 100 - 100000hadoop: Rename - 50 - 1000000hadoop: Create - 100 - 1000000hadoop: Delete - 100 - 1000000hadoop: Rename - 100 - 1000000hadoop: File Status - 50 - 100000hadoop: File Status - 100 - 100000hadoop: File Status - 50 - 1000000hadoop: File Status - 100 - 1000000cassandra: Writesabcdefghij12.31294665215.1051177311.02476577526.88598380477216216.34626.98502090811106900000676771000039.4951605.03881417.070616.9074672.463535.628201.3925118.7509485.672549.36535137.01144.6508215.6383111.013149.3258485.7175489.120349.0092218.1464109.8027322.250574.321168.5988347.6612718.918933.341212558158.924150.58975897439.4391605.758790901328287351592425110525403833112756710462712122708718095412818575743311836130.41393.6282.5542.44283.9742.242033.175.8956.01213.945882.912.03748.4416.022873.244.172945.2616.269837.584.86842.9114.23317.2237.85776.948.282454.094.881560.0330.7286884.640.541244.6938.5120606.380.3427.3545.20390.811163.459163.01312.477141.219422.994510.36126.266.4233.22254.8880.5454.901756.087160.144953.573367.337856.48531.831.840.8615.98615.9528215.09614.236913.873916.346839499000594010007718100011749000015385000019622000030754000036943000013909000594230000699740000279010001183500000119210000052911000220770000019944000001098700003005800000255980000021608000042581000062256000071164000046082942016811261264364991075215332705224073353665875669893275529732394614590114730785291015154642173913188679224809512.10059594714.4605869811.31849570928.6521086376851716.79129.46076119711240300000675736000039.4681605.73071403.06517.0669672.373435.6422201.2528118.9482488.126449.10825138.83414.6476215.9254110.915249.1666487.3599489.446448.9703219.5263109.2332321.182974.555368.6617347.2189717.969333.3798159.0596150.60556152039.4539606.66938018313122891415972825103695013080210618067515243712746418068397598909944051781730.44393.23284.2242.2284.9942.092028.015.9156.06213.625836.272.05750.4915.982880.584.162986.4616.029849.074.85854.5114.03317.2837.795780.448.272450.264.891546.023187359.230.541239.6738.66120728.220.3427.2415.14991.322166.378166.69212.591138.338427.686542.61126.2466.6433.17255.380.7655.392556.455159.912453.813567.195156.69021.831.840.8615.978515.9888214.07414.178313.766616.43653948600059296000770190001140100001536900001965900003051100003669300001402100060247000069276000027736000119030000012142000005558800022121000002001900000108080000299540000025711000002161500004296200006109500007181400004694844048581020408412887380117382273046374255211990827973146934871679444378671572129862069458716194174816197025666112.04091787714.80827336511.3273597727.49085015776252916.24327.06023507910826700000675417000039.4464605.91831418.904116.8863671.255435.6825201.5402118.7801489.110649.01735153.66444.6348215.647111.02547.1535507.4786487.052249.2055218.5162109.5847321.508274.503168.6287347.3728716.140433.463712681164.6131145.25625963039.4183605.87657846913811499625286524853736865406118923324265463910644711756530.43393.37282.6742.43284.3142.192029.795.956.02213.795840.532.05757.3815.832881.144.162987.3316.029845.274.86849.314.12317.3337.795802.658.242455.514.881551.6330.8986789.80.541237.2938.75123484.280.3427.4085.04990.417163.055161.49512.617143.545431.895516.90626.1266.7233.03254.7280.4155.403756.807859.792953.692767.503856.93271.831.820.8715.987215.9778214.13614.139913.831716.535394530005751900076924000118550000153670000194510000306760000366990000142250006036500006749300002822700011848000001254800000551650002206800000201030000010914000029998000002564900000214910000424400000622630000715030000401606403226683995439379058018587477101350755226073475901476715974638440019703166827657895729927284252189393927048027.33098558835.57168490826.73641411871.61429432729806430.76162.44174958510318900000793457000013.0694606.101508.08715.7246257.272831.052571.137112.2499162.850249.08631599.20794.99671.9189111.108116.1648493.596163.555948.872972.455110.1114108.90973.305824.4833325.8763240.552933.22421262255.6132143.76434697713.1312606.57732.13332553341.558241693262731.337891294923.815760.6282363.059913.377820.8478051.913742.494080.6522591.037491643.9936480115675981491479838.516211080.603951.02875185631641.92342245972758981.25758176121642.51849.163847.3810.47761.59107.0274.71106.974.81797.6410.0120.03398.522564.783.11344.6723.21175.676.791039.6115.363540.884.51370.5721.57124.1264.412013.777.931036.997.7532.5930.0232002.620.49395.6640.444958.070.3524099450055.1734.10766.988163.189161.85410.91118.946526.216604.98672182.9990.03670.87224.1521.481122.58524.687222.261428.364323.873324.84522.351721.894327.743823.353122.39220.720.720.340.720.720.341914875.574695.57001151.9055.607475.453296.587453522800052665000670540001056500001.65738.10598.981386000001889300002780300003633100001268300054536000068915000024627000104710000010350000005025800010595000001093300000995940001065200000112080000019385000027376000028292000028625000057803552910127831958617101010124843982372579717213410570811101282102839217129611261381208632911591716181818260060119786627.45982130835.03013488926.79914344670.18902850629612530.84562.32514682810264000000793101000012.9433607.913511.409815.6245257.89430.986771.2727112.0574163.136149.00941599.15434.985971.9146111.088816.1392494.2575162.929849.059872.6571109.9654109.093873.260924.4725325.7416240.234933.26251256755.4634144.10134673713.1187606.7552.1257538931.549111708262851.338611299043.844210.6339753.06373.384360.8444341.917812.565220.657611.144324213832091643.973678411865796907702501490849.712212710.575794246111.05425185571641338816014559761.28043171171639.36851.659841.07810.47761.16107.2774.5107.2474.54793.7510.0620398.912562.543.11342.8123.321174.66.81039.8215.363544.184.51373.6421.4123.6164.682007.537.961028.647.77530.9930.132032.060.49432.3236.9844933.270.3523624390055.0934.11467.721162.608162.05110.984119.307525.173597.01171.44182.5690.31670.64224.121.435722.569424.734322.157728.314123.939324.828222.291121.990927.829323.528322.34220.720.720.340.720.720.341904875.541075.56353151.5065.62045.461536.58273531500052827000688460001054800001.65438.06799.41513862000019123000027778000035799000012366000545140000692920000252070001046600000103200000050380000105750000010954000009700500010651000001117800000196040000273480000281830000285880000552486294985251004586171006041204819822375882470897980391133278382284041700571132258436038910561349732092423562719579826.97375739535.53507300126.87316845570.54225590529560330.72561.2817691249976450000795579000013.0853607.8171508.208815.721257.504631.029271.044112.413162.929449.07121600.52754.987771.9401110.885216.1307494.2211162.897649.071472.5736110.0001109.0973.216324.5238324.9568240.164233.27511269255.5428143.69224714113.0934606.78522.13062549561.572821697266951.341831303893.818230.6303253.056743.379560.8506911.914222.497140.6531821.001364142432181642.353612511909297368701051483851.494210670.600834248301.061441636.76344706031059541.206531636.44849.344845.30810.48760.57107.3974.43106.7674.87791.7410.0920.01399.242539.973.14343.4923.281180.856.761038.4715.383548.784.5369.2621.65124.364.312004.767.971041.877.67533.7429.9531951.640.49431.9437.0144968.430.3523659100055.1484.13867.393161.847160.79810.736118.486521.518585.36871.96181.790.26667.87223.9521.591322.656624.704722.14928.323723.935424.88722.267621.774627.82623.504222.44070.720.720.340.720.720.341914885.57325.55581151.785.614545.452276.595633527100052879000688610001057400001.65738.01597.9871385800001898800002763900003504500001268100054502000069334000025199000104190000010246000004997700010571000001094600000994410001065300000112050000019450000027339000028303000028592000057803552356012210015834396993130378181633593826992099404111198794918250170537110803858157092204784691795332196463719628727.74647516235.37860002127.69663137169.95560916529552230.7562.81092437610500600000796491000013.073607.1628509.13915.6892257.280831.057470.9275112.4779162.993749.02591602.52214.978771.9003110.975916.0672495.6033163.228248.983772.6879109.9037109.219173.186724.4607325.5075239.517633.36741262755.4264144.10534699313.0596608.71632.11813553011.5511817051.335643.823810.6291083.054583.381560.8434921.912742.514410.64771.127234169531951641.43608811854996840699231481848.0320.61232245741.04567187351637.3734107599441.27918171351631.99837.595847.41710.48759.92107.0474.71107.2474.58793.910.0620.05398.132557.663.12341.3623.421175.586.791039.3715.373533.644.52372.2621.47123.4164.772006.097.961031.67.74538.0129.7232008.030.49432.236.9845097.990.3523717570055.1724.14367.811160.322161.32411.016118.481528.533586.74872.01183.2990.63669.09224.1221.584722.774524.819322.190128.479323.879624.961922.255921.830527.9123.709122.42150.720.720.340.720.720.341914895.575535.56539151.6815.622785.477256.600853523600052854000686780001048000001.64837.9597.529138460000190750000277410000357810000122560005430500006820700002272700010471000001033400000495560001056200000109930000010017000010657000001118200000194670000274070000281730000286530000546448460829654022606801039501107420822375892872706102564110828803868481070922113895857635617984878052036660204918019709215.98949194819.96151934815.03269371140.32936342857250020.4423357.936.445871583116781.59145900000676127000025.7096613.05461003.517115.9249507.63431.4896136.9614116.5064327.019248.88123324.81824.8005144.1181110.810832.0042495.2237326.927148.8889145.4778109.7922216.238473.880247.1323337.9568488.064832.752616965109.0251146.11917548625.7916613.16391.14749955790.7785431861360410.7344612003273.722470.4264261.78171.733810.4400061.0411.742030.8927012.0988658163480987.83747576138538872181666564.12313320.30246371260.7045523794986.9585119110418061250.92628324286993.564568.747568.64919.82804.58197.9480.77196.0781.581481.7110.7837.82421.84803.653.32643.824.842252.77.091963.916.276653.864.8710.6922.5234.1868.273780.88.462180.837.331034.3530.9259615.880.53810.7139.4468931.350.3535480800033.9455.07599.099230.026223.41612.23151.437580.163726.88938.499.5449.1352.4119.342.893245.475748.118843.505654.388846.378548.906843.844643.572154.220345.923445.191.371.380.651.371.380.6536392610.798810.7832192.64510.852410.591312.54383714500051443000724680001052800001.28426.10668.57314582000020093000029262000037833000012909000585850000741420000256480001172400000135290000052890000205750000019168000001044400002068300000199550000020759000039168000051202000051970000064516152631612330465872010040232372983056604967341110526311340479808851217361111178284310925926595238426439235294132457515.59023785919.61963726515.19244731839.83358978857045820.3023365.437.5052638491171999269890000676807000025.6331612.9837999.461915.9895508.360431.4411136.684116.8908327.731448.79013327.18914.8015144.0005110.983331.9586494.4953326.775748.8928145.6474109.7363216.216373.940447.3497336.7673486.706832.835416817109.4649145.83257525425.704613.64021.15012942610.798541848356500.7350941981373.659070.4275121.788761.734990.4403681.043121.644781.94191.243083485985.74347501180179138173864711656566.178305220.301535361410.64425223541991.0735034610462061650.93179323773987.355564.454563.33419.83804.75193.882.49197.6680.881488.0410.7437.57425.684848.423.29642.924.872264.937.051964.6116.276646.914.81709.7322.53233.6568.423777.758.462224.087.181036.1130.8759654.980.53815.1339.2368945.320.3535015120033.8695.07998.573227.865227.20512.259149.051591.319728.53838.5799.3548.69351.66119.4243.268145.400648.190843.573654.506346.363348.71843.767443.811654.187746.087245.71061.371.380.651.371.380.6536392210.74310.7904192.410.882910.584812.49873714100055841000724910001097000001.28826.39368.23614592000019864000029243000037725000013363000583870000735800000259100001172900000139410000052129000205260000018997000001047800002069900000199700000020972000039320000051231000052014000058139555555625367858997102249382995908275903272464102041110436823728631172010114692852158695658474581930502250626631810915.84389531719.72558666315.09991232639.86742377756906620.363371.637.808113034117263.39242080000683536000025.727614.23031002.811315.9389507.364531.5024136.5656116.8464327.185548.82913324.69524.8016144.0515110.903832.0639494.0076326.90448.8934145.3726109.8573216.50873.77747.3339336.675487.229832.808516972109.4068145.90157425225.769612.52091.155780.768314356550.7316181977383.680870.430271.786911.735010.4401561.043331.754530.8800161.94941640663479991.12347414180581137618874121660569.795306380.309278366440.7149723543988.589503041058020.93600124366994.612563.575566.38619.84805.38194.8282.09196.2681.51483.2510.7737.48425.884892.273.26648.2724.672254.727.081964.7116.276638.244.81709.7522.53233.8868.353783.658.452194.367.281012.9831.5759505.780.53812.7639.3468895.50.3534901980033.9345.1699.345224.406228.76512.175149.451584.12726.50338.5199.2948.82351.38119.0443.314145.322748.095643.416154.515746.432748.711543.904343.933554.147346.023245.39561.371.380.651.371.380.6536392210.693210.7664192.70110.874310.57412.51043712000055715000724000001113700001.24525.55568.5191457400002001200002924900003816600001289900058567000073712000026378000116990000013698000004978100020565000001922300000104220000207100000019995000002094900003932700005110700005194400006172843703701251564593479560228910189206580057315397371113572834728855072754110693871617518806849321941748558036316485OpenBenchmarking.org

SPECFEM3D

simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and using a variety of their built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Tomographic Modeljihgfedcba71421283515.8415.5915.9927.7526.9727.4627.3312.0412.1012.311. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Homogeneous Halfspacejihgfedcba81624324019.7319.6219.9635.3835.5435.0335.5714.8114.4615.111. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Mount St. Helensjihgfedcba71421283515.1015.1915.0327.7026.8726.8026.7411.3311.3211.021. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Layered Halfspacejihgfedcba163248648039.8739.8340.3369.9670.5470.1971.6127.4928.6526.891. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgVGR Performance Metric, More Is BetterBRL-CAD 7.36VGR Performance Metricjihgfedcba170K340K510K680K850K5690665704585725002955222956032961252980647625297685177721621. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

Remhos

Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRemhos 1.0Test: Sample Remap Examplejihgfedcba71421283520.3620.3020.4430.7530.7330.8530.7616.2416.7916.351. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.32Configuration: Single-Threadedjih70014002100280035003371.63365.43357.91. (CXX) g++ options: -O3 -march=native -fPIE -pie

SPECFEM3D

simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and using a variety of their built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Water-layered Halfspacejihgfedcba142842567037.8137.5136.4562.8161.2862.3362.4427.0629.4626.991. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.32Configuration: Multi-Threadedjih30K60K90K120K150K117263.3117199.0116781.51. (CXX) g++ options: -O3 -march=native -fPIE -pie

nekRS

nekRS is an open-source Navier Stokes solver based on the spectral element method. NekRS supports both CPU and GPU/accelerator support though this test profile is currently configured for CPU execution. NekRS is part of Nek5000 of the Mathematics and Computer Science MCS at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming on smaller systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgflops/rank, More Is BetternekRS 23.0Input: Kershawjihgfedcba2000M4000M6000M8000M10000M92420800009269890000914590000010500600000997645000010264000000103189000001082670000011240300000111069000001. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

OpenBenchmarking.orgflops/rank, More Is BetternekRS 23.0Input: TurboPipe Periodicjihgfedcba2000M4000M6000M8000M10000M68353600006768070000676127000079649100007955790000793101000079345700006754170000675736000067677100001. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis OpenRadioss is based on Altair Radioss and open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

Model: Bumper Beam

a: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

b: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

c: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

d: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

e: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

f: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

g: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

h: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

i: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

Model: Chrysler Neon 1M

a: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

b: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

c: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

d: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

e: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

f: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

g: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

h: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

i: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

Model: Cell Phone Drop Test

a: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

b: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

c: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

d: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

e: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

f: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

g: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

h: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

i: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

Model: Bird Strike on Windshield

a: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

b: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

c: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

d: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

e: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

f: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

g: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

h: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

i: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

Model: Rubber O-Ring Seal Installation

a: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

b: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

c: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

d: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

e: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

f: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

g: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

h: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

i: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

Model: INIVOL and Fluid Structure Interaction Drop Container

a: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

b: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

c: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

d: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

e: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

f: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

g: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

h: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

i: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

j: The test run did not produce a result. E: ./engine_linux64_gf_ompi: error while loading shared libraries: libmpi.so.40: cannot open shared object file: No such file or directory

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamjihgfedcba91827364525.7325.6325.7113.0713.0912.9413.0739.4539.4739.50

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamjihgfedcba130260390520650614.23612.98613.05607.16607.82607.91606.10605.92605.73605.04

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamjihgfedcba300600900120015001002.81999.461003.52509.14508.21511.41508.091418.901403.071417.07

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamjihgfedcba4812162015.9415.9915.9215.6915.7215.6215.7216.8917.0716.91

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Streamjihgfedcba150300450600750507.36508.36507.63257.28257.50257.89257.27671.26672.37672.46

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Streamjihgfedcba81624324031.5031.4431.4931.0631.0330.9931.0535.6835.6435.63

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Streamjihgfedcba4080120160200136.57136.68136.9670.9371.0471.2771.14201.54201.25201.39

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Streamjihgfedcba306090120150116.85116.89116.51112.48112.41112.06112.25118.78118.95118.75

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Streamjihgfedcba110220330440550327.19327.73327.02162.99162.93163.14162.85489.11488.13485.67

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Streamjihgfedcba112233445548.8348.7948.8849.0349.0749.0149.0949.0249.1149.37

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Streamjihgfedcba110022003300440055003324.703327.193324.821602.521600.531599.151599.215153.665138.835137.01

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Streamjihgfedcba1.12412.24823.37234.49645.62054.80164.80154.80054.97874.98774.98594.99604.63484.64764.6508

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamjihgfedcba50100150200250144.05144.00144.1271.9071.9471.9171.92215.65215.93215.64

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamjihgfedcba20406080100110.90110.98110.81110.98110.89111.09111.11111.03110.92111.01

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Streamjihgfedcba112233445532.0631.9632.0016.0716.1316.1416.1647.1549.1749.33

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Streamjihgfedcba110220330440550494.01494.50495.22495.60494.22494.26493.60507.48487.36485.72

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streamjihgfedcba110220330440550326.90326.78326.93163.23162.90162.93163.56487.05489.45489.12

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streamjihgfedcba112233445548.8948.8948.8948.9849.0749.0648.8749.2148.9749.01

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Streamjihgfedcba50100150200250145.37145.65145.4872.6972.5772.6672.46218.52219.53218.15

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Streamjihgfedcba20406080100109.86109.74109.79109.90110.00109.97110.11109.58109.23109.80

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streamjihgfedcba70140210280350216.51216.22216.24109.22109.09109.09108.91321.51321.18322.25

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streamjihgfedcba2040608010073.7873.9473.8873.1973.2273.2673.3174.5074.5674.32

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Streamjihgfedcba153045607547.3347.3547.1324.4624.5224.4724.4868.6368.6668.60

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Streamjihgfedcba80160240320400336.68336.77337.96325.51324.96325.74325.88347.37347.22347.66

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Streamjihgfedcba160320480640800487.23486.71488.06239.52240.16240.23240.55716.14717.97718.92

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Streamjihgfedcba81624324032.8132.8432.7533.3733.2833.2633.2233.4633.3833.34

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 16jihgfedca4K8K12K16K20K169721681716965126271269212567126221268112558

Test: oltp_update_index - Threads: 16

b: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Streamjihgfedcba4080120160200109.41109.46109.0355.4355.5455.4655.61164.61159.06158.92

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Streamjihgfedcba306090120150145.90145.83146.12144.11143.69144.10143.76145.26150.61150.59

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 32jihgfedcba16K32K48K64K80K74252752547548646993471414673746977596306152058974

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streamjihgfedcba91827364525.7725.7025.7913.0613.0913.1213.1339.4239.4539.44

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streamjihgfedcba130260390520650612.52613.64613.16608.72606.79606.76606.58605.88606.67605.76

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPUjihgfed0.480.961.441.922.41.155781.150121.147492.118132.130622.125702.13332MIN: 1.03MIN: 1MIN: 1.01MIN: 1.99MIN: 1.97MIN: 2.01MIN: 21. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 64ihgfedcba20K40K60K80K100K942619557955301549565389355334784698018379090

Test: oltp_read_write - Threads: 64

j: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUjihgfed0.35390.70781.06171.41561.76950.7683140.7985400.7785431.5511801.5728201.5491101.558240MIN: 0.71MIN: 0.7MIN: 0.71MIN: 1.52MIN: 1.53MIN: 1.51MIN: 1.511. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 1ihgfedcba400800120016002000184818611705169717081693138113121328

Test: oltp_update_non_index - Threads: 1

j: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 32jihfedba8K16K24K32K40K3565535650360412669526285262732891428735

Test: oltp_update_non_index - Threads: 32

c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPUjihgfed0.30190.60380.90571.20761.50950.7316180.7350940.7344611.3356401.3418301.3386101.337890MIN: 0.66MIN: 0.66MIN: 0.66MIN: 1.31MIN: 1.31MIN: 1.31MIN: 1.311. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 128jihfedcba40K80K120K160K200K197738198137200327130389129904129492149962159728159242

Test: oltp_point_select - Threads: 128

g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUjihgfed0.86491.72982.59473.45964.32453.680873.659073.722473.823813.818233.844213.81576MIN: 2.85MIN: 2.81MIN: 2.83MIN: 3.29MIN: 3.25MIN: 3.27MIN: 3.261. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUjihgfed0.14260.28520.42780.57040.7130.4302700.4275120.4264260.6291080.6303250.6339750.628236MIN: 0.38MIN: 0.39MIN: 0.38MIN: 0.6MIN: 0.6MIN: 0.6MIN: 0.61. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPUjihgfed0.68931.37862.06792.75723.44651.786911.788761.781703.054583.056743.063703.05991MIN: 1.66MIN: 1.65MIN: 1.64MIN: 2.97MIN: 2.97MIN: 2.97MIN: 2.961. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUjihgfed0.76151.5232.28453.0463.80751.735011.734991.733813.381563.379563.384363.37782MIN: 1.64MIN: 1.65MIN: 1.64MIN: 3.33MIN: 3.33MIN: 3.33MIN: 3.331. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUjihgfed0.19140.38280.57420.76560.9570.4401560.4403680.4400060.8434920.8506910.8444340.847805MIN: 0.41MIN: 0.41MIN: 0.41MIN: 0.83MIN: 0.83MIN: 0.83MIN: 0.831. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPUjihgfed0.43150.8631.29451.7262.15751.043331.043121.041001.912741.914221.917811.91374MIN: 0.94MIN: 0.94MIN: 0.94MIN: 1.88MIN: 1.88MIN: 1.88MIN: 1.881. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUjihgfed0.57721.15441.73162.30882.8861.754531.644781.742032.514412.497142.565222.49408MIN: 1.52MIN: 1.42MIN: 1.51MIN: 2.3MIN: 2.26MIN: 2.32MIN: 2.31. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUjihgfed0.43690.87381.31071.74762.18450.8800161.9419000.8927010.6477000.6531820.6576100.652259MIN: 0.78MIN: 0.87MIN: 0.79MIN: 0.57MIN: 0.57MIN: 0.57MIN: 0.571. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPUjihgfed0.47220.94441.41661.88882.3611.949411.243082.098801.127231.001361.144321.03749MIN: 1.26MIN: 1.04MIN: 1.29MIN: 0.93MIN: 0.92MIN: 1.07MIN: 0.921. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 128jhgfeca14K28K42K56K70K64066658164169541424421385286551105

Test: oltp_update_non_index - Threads: 128

b: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

i: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 1jihgfecba7001400210028003500347934853480319532183209248525102540

Test: oltp_read_write - Threads: 1

d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUjihgfed400800120016002000991.12985.74987.841641.401642.351643.971643.99MIN: 954.92MIN: 949.93MIN: 952.67MIN: 1589.91MIN: 1586.17MIN: 1590.89MIN: 1588.031. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 16jihgfedcba10K20K30K40K50K47414475014757636088361253678436480373683695038331

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 64jigfedba40K80K120K160K200K180581180179118549119092118657115675130802127567

Test: oltp_point_select - Threads: 64

c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

h: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 32jihgfedba30K60K90K120K150K13761813817313853896840973689690798149106180104627

Test: oltp_point_select - Threads: 32

c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 16jihgfecb20K40K60K80K100K8741286471872186992370105702506540667515

Test: oltp_point_select - Threads: 16

a: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 1jihgfedca400800120016002000166016561666148114831490147911891212

Test: oltp_update_index - Threads: 1

b: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUjihgfed2004006008001000569.80566.18564.12848.03851.49849.71838.52MIN: 548.08MIN: 544.56MIN: 545.13MIN: 807.34MIN: 807.97MIN: 805.98MIN: 796.31. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 64jihfedcb7K14K21K28K35K3063830522313322106721271211082332424371

Test: oltp_update_index - Threads: 64

a: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPUjihgfed0.13780.27560.41340.55120.6890.3092780.3015350.3024600.6123200.6008340.5757940.603950MIN: 0.28MIN: 0.28MIN: 0.27MIN: 0.53MIN: 0.53MIN: 0.52MIN: 0.531. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 128jihgfecba8K16K24K32K40K366443614137126245742483024611265462746427087

Test: oltp_update_index - Threads: 128

d: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPUjihgfed0.23880.47760.71640.95521.1940.7149700.6442520.7045501.0456701.0614401.0542501.028750MIN: 0.67MIN: 0.61MIN: 0.66MIN: 0.98MIN: 0.98MIN: 0.97MIN: 0.961. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 16jihgedba5K10K15K20K25K2354323541237941873518557185631806818095

Test: oltp_update_non_index - Threads: 16

c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

f: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUjihgfed400800120016002000988.59991.07986.961637.371636.761641.001641.92MIN: 950.96MIN: 953.96MIN: 949.02MIN: 1584.58MIN: 1585.98MIN: 1595.55MIN: 1584.811. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 64jihgfedcba11K22K33K44K55K50304503465119134107344703388134224391063975941281

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 128jihgfedba20K40K60K80K100K105802104620104180599446031060145597278909985757

Test: oltp_read_write - Threads: 128

c: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 1ihfedcba1300260039005200650061656125595459765898447144054331

Test: oltp_point_select - Threads: 1

g: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

j: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 3D - Data Type: f32 - Engine: CPUjihgfed0.28810.57620.86431.15241.44050.9360010.9317930.9262831.2791801.2065301.2804301.257580MIN: 0.86MIN: 0.86MIN: 0.85MIN: 1.24MIN: 1.18MIN: 1.24MIN: 1.211. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

This is a PingCAP TiDB Community Server benchmark facilitated using the sysbench OLTP database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 32jihgedcba5K10K15K20K25K243662377324286171351711717612175651781718361

Test: oltp_update_index - Threads: 32

f: The test quit with a non-zero exit status. E: FATAL: Thread initialization failed!

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUjihgfed400800120016002000994.61987.36993.561631.991636.441639.361642.51MIN: 960.2MIN: 952.16MIN: 955.42MIN: 1581.62MIN: 1585.81MIN: 1581.93MIN: 1593.161. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUjihgfed2004006008001000563.58564.45568.75837.60849.34851.66849.16MIN: 543.65MIN: 545.57MIN: 546.79MIN: 796.61MIN: 805.8MIN: 809.45MIN: 806.441. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUjihgfed2004006008001000566.39563.33568.65847.42845.31841.08847.38MIN: 542.7MIN: 544.04MIN: 547.26MIN: 806.72MIN: 803.78MIN: 798.46MIN: 806.331. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection FP16 - Device: CPUjihgfedcba71421283519.8419.8319.8210.4810.4810.4710.4730.4330.4430.411. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection FP16 - Device: CPUjihgfedcba2004006008001000805.38804.75804.58759.92760.57761.16761.59393.37393.23393.60MIN: 783.22 / MAX: 819.23MIN: 776.93 / MAX: 819.19MIN: 772.52 / MAX: 820.63MIN: 737.63 / MAX: 771.07MIN: 741.4 / MAX: 770.88MIN: 741.99 / MAX: 776.56MIN: 738.34 / MAX: 772.36MIN: 362.57 / MAX: 433.51MIN: 360.87 / MAX: 433.13MIN: 363.29 / MAX: 431.611. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Detection FP16 - Device: CPUjihgfedcba60120180240300194.82193.80197.94107.04107.39107.27107.02282.67284.22282.551. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Detection FP16 - Device: CPUjihgfedcba2040608010082.0982.4980.7774.7174.4374.5074.7142.4342.2042.44MIN: 68.73 / MAX: 91.87MIN: 70.77 / MAX: 94.62MIN: 69.54 / MAX: 95.42MIN: 66.29 / MAX: 79.68MIN: 65.68 / MAX: 83.49MIN: 66.5 / MAX: 80.32MIN: 66.12 / MAX: 81.09MIN: 36.31 / MAX: 62.36MIN: 36.84 / MAX: 61.97MIN: 36.14 / MAX: 61.981. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Detection FP32 - Device: CPUjihgfedcba60120180240300196.26197.66196.07107.24106.76107.24106.90284.31284.99283.971. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Detection FP32 - Device: CPUjihgfedcba2040608010081.5080.8881.5874.5874.8774.5474.8142.1942.0942.24MIN: 68.9 / MAX: 92.66MIN: 39.72 / MAX: 92.54MIN: 68.74 / MAX: 95.81MIN: 67.63 / MAX: 78.73MIN: 66.72 / MAX: 80.96MIN: 65.97 / MAX: 82.9MIN: 66.88 / MAX: 80.7MIN: 36.21 / MAX: 65.64MIN: 37.13 / MAX: 58.71MIN: 36.59 / MAX: 61.561. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16 - Device: CPUjihgfedcba4008001200160020001483.251488.041481.71793.90791.74793.75797.642029.792028.012033.171. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16 - Device: CPUjihgfedcba369121510.7710.7410.7810.0610.0910.0610.015.905.915.89MIN: 6 / MAX: 18.16MIN: 5.92 / MAX: 24.44MIN: 5.59 / MAX: 21.13MIN: 5.2 / MAX: 19.38MIN: 5.4 / MAX: 19.17MIN: 5.29 / MAX: 19.07MIN: 5.7 / MAX: 19.52MIN: 4.83 / MAX: 13.4MIN: 4.84 / MAX: 12.9MIN: 4.67 / MAX: 18.41. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection FP16-INT8 - Device: CPUjihgfedcba132639526537.4837.5737.8220.0520.0120.0020.0356.0256.0656.011. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection FP16-INT8 - Device: CPUjihgfedcba90180270360450425.88425.68421.80398.13399.24398.91398.52213.79213.62213.94MIN: 404.76 / MAX: 434.06MIN: 402.91 / MAX: 432.03MIN: 269.94 / MAX: 598.22MIN: 379.09 / MAX: 404.71MIN: 387.9 / MAX: 408.93MIN: 386.2 / MAX: 407.29MIN: 382.1 / MAX: 404.98MIN: 197.29 / MAX: 236.32MIN: 197.2 / MAX: 235.23MIN: 201.64 / MAX: 242.711. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16 - Device: CPUjihgfedcba130026003900520065004892.274848.424803.652557.662539.972562.542564.785840.535836.275882.911. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16 - Device: CPUjihgfedcba0.7471.4942.2412.9883.7353.263.293.323.123.143.113.112.052.052.03MIN: 2.1 / MAX: 14.3MIN: 1.89 / MAX: 12.75MIN: 2.11 / MAX: 12.58MIN: 1.88 / MAX: 11.92MIN: 1.93 / MAX: 11.65MIN: 1.93 / MAX: 9.72MIN: 1.94 / MAX: 11.57MIN: 1.62 / MAX: 6.96MIN: 1.6 / MAX: 7MIN: 1.66 / MAX: 7.511. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16 - Device: CPUjihgfedcba160320480640800648.27642.90643.80341.36343.49342.81344.67757.38750.49748.441. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16 - Device: CPUjihgfedcba61218243024.6724.8724.8423.4223.2823.3223.2015.8315.9816.02MIN: 20.15 / MAX: 37.89MIN: 17 / MAX: 33.34MIN: 16.93 / MAX: 33.96MIN: 20.46 / MAX: 32.43MIN: 15.73 / MAX: 30.77MIN: 19.49 / MAX: 30.99MIN: 15.1 / MAX: 31.6MIN: 12.38 / MAX: 32.97MIN: 12.74 / MAX: 33.34MIN: 12.5 / MAX: 33.941. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16-INT8 - Device: CPUjihgfedcba60012001800240030002254.722264.932252.701175.581180.851174.601175.672881.142880.582873.241. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16-INT8 - Device: CPUjihgfedcba2468107.087.057.096.796.766.806.794.164.164.17MIN: 4.35 / MAX: 16.86MIN: 4.44 / MAX: 16.57MIN: 4.43 / MAX: 16.88MIN: 3.79 / MAX: 15.41MIN: 4.04 / MAX: 15.47MIN: 4.04 / MAX: 15.37MIN: 3.8 / MAX: 15.48MIN: 3.43 / MAX: 10.26MIN: 3.42 / MAX: 11.2MIN: 3.39 / MAX: 10.071. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16 - Device: CPUjihgfedcba60012001800240030001964.711964.611963.901039.371038.471039.821039.612987.332986.462945.261. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16 - Device: CPUjihgfedcba4812162016.2716.2716.2715.3715.3815.3615.3616.0216.0216.26MIN: 8.44 / MAX: 25.48MIN: 8.5 / MAX: 25.86MIN: 8.92 / MAX: 25.52MIN: 7.99 / MAX: 23.98MIN: 7.99 / MAX: 24MIN: 8.02 / MAX: 23.81MIN: 8.08 / MAX: 24.34MIN: 14.63 / MAX: 33.79MIN: 14.41 / MAX: 30.55MIN: 14.71 / MAX: 28.141. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16-INT8 - Device: CPUjihgfedcba2K4K6K8K10K6638.246646.916653.863533.643548.783544.183540.889845.279849.079837.581. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16-INT8 - Device: CPUjihgfedcba1.09352.1873.28054.3745.46754.814.814.804.524.504.514.514.864.854.86MIN: 3.23 / MAX: 14.45MIN: 3.23 / MAX: 15.04MIN: 3.23 / MAX: 14.95MIN: 2.77 / MAX: 13.57MIN: 2.98 / MAX: 13.86MIN: 2.96 / MAX: 16.06MIN: 2.98 / MAX: 13.05MIN: 4.34 / MAX: 12.27MIN: 4.25 / MAX: 12.86MIN: 4.23 / MAX: 12.811. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16-INT8 - Device: CPUjihgfedcba2004006008001000709.75709.73710.69372.26369.26373.64370.57849.30854.51842.911. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16-INT8 - Device: CPUjihgfedcba51015202522.5322.5322.5021.4721.6521.4021.5714.1214.0314.23MIN: 18.74 / MAX: 31.08MIN: 19.09 / MAX: 30.15MIN: 13.76 / MAX: 30.22MIN: 17.62 / MAX: 28.13MIN: 19.48 / MAX: 24.27MIN: 19.07 / MAX: 25.3MIN: 19.5 / MAX: 24.76MIN: 11.51 / MAX: 26.04MIN: 11.59 / MAX: 26.04MIN: 11.51 / MAX: 25.861. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Machine Translation EN To DE FP16 - Device: CPUjihgfedcba70140210280350233.88233.65234.18123.41124.30123.61124.12317.33317.28317.221. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Machine Translation EN To DE FP16 - Device: CPUjihgfedcba153045607568.3568.4268.2764.7764.3164.6864.4137.7937.7937.80MIN: 55.82 / MAX: 74.84MIN: 56.13 / MAX: 75.77MIN: 56.41 / MAX: 79.96MIN: 55.8 / MAX: 69.46MIN: 50.85 / MAX: 70.77MIN: 38.02 / MAX: 72.52MIN: 37.44 / MAX: 73.04MIN: 33.29 / MAX: 54.88MIN: 32.97 / MAX: 53.7MIN: 33.35 / MAX: 56.451. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16-INT8 - Device: CPUjihgfedcba120024003600480060003783.653777.753780.802006.092004.762007.532013.775802.655780.445776.941. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16-INT8 - Device: CPUjihgfedcba2468108.458.468.467.967.977.967.938.248.278.28MIN: 4.46 / MAX: 17.31MIN: 4.49 / MAX: 17.8MIN: 4.67 / MAX: 18MIN: 4.19 / MAX: 14.2MIN: 4.37 / MAX: 16.86MIN: 4.19 / MAX: 16.59MIN: 4.2 / MAX: 16.92MIN: 7.62 / MAX: 23.32MIN: 7.37 / MAX: 25.18MIN: 7.44 / MAX: 23.351. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Vehicle Bike Detection FP16 - Device: CPUjihgfedcba50010001500200025002194.362224.082180.831031.601041.871028.641036.992455.512450.262454.091. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Vehicle Bike Detection FP16 - Device: CPUjihgfedcba2468107.287.187.337.747.677.777.704.884.894.88MIN: 5.53 / MAX: 15.78MIN: 4.98 / MAX: 16.11MIN: 5.45 / MAX: 15.94MIN: 6.06 / MAX: 12.66MIN: 5.32 / MAX: 16.6MIN: 5.42 / MAX: 16.35MIN: 5.51 / MAX: 16.06MIN: 3.9 / MAX: 14.94MIN: 3.93 / MAX: 13.44MIN: 3.95 / MAX: 16.051. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16 - Device: CPUjihgfedcba300600900120015001012.981036.111034.35538.01533.74530.99532.591551.631546.021560.031. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16 - Device: CPUjihgfedcba71421283531.5730.8730.9229.7229.9530.1030.0230.8931.0030.72MIN: 20.39 / MAX: 39.22MIN: 20.13 / MAX: 42.34MIN: 25.94 / MAX: 41.77MIN: 19.46 / MAX: 38.99MIN: 19.01 / MAX: 38.08MIN: 22.61 / MAX: 39.15MIN: 18.78 / MAX: 38.72MIN: 29.48 / MAX: 36.29MIN: 29.59 / MAX: 36.33MIN: 29.51 / MAX: 35.071. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUjihgfedcba20K40K60K80K100K59505.7859654.9859615.8832008.0331951.6432032.0632002.6286789.8087359.2386884.641. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUjihgfedcba0.12150.2430.36450.4860.60750.530.530.530.490.490.490.490.540.540.54MIN: 0.31 / MAX: 7.18MIN: 0.31 / MAX: 10.06MIN: 0.31 / MAX: 10.06MIN: 0.3 / MAX: 8.84MIN: 0.3 / MAX: 8.2MIN: 0.3 / MAX: 9.07MIN: 0.3 / MAX: 9.28MIN: 0.45 / MAX: 5.03MIN: 0.45 / MAX: 7.81MIN: 0.45 / MAX: 7.641. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16-INT8 - Device: CPUjihgfedcba30060090012001500812.76815.13810.71432.20431.94432.32395.661237.291239.671244.691. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16-INT8 - Device: CPUjihgfedcba91827364539.3439.2339.4436.9837.0136.9840.4038.7538.6638.50MIN: 25.19 / MAX: 46.89MIN: 34.71 / MAX: 47.63MIN: 33.14 / MAX: 45.28MIN: 32.61 / MAX: 41.91MIN: 32.25 / MAX: 43.6MIN: 32.02 / MAX: 44.78MIN: 26.93 / MAX: 74.83MIN: 37.46 / MAX: 43.52MIN: 37.22 / MAX: 43.52MIN: 36.77 / MAX: 44.231. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUjihgfedcba30K60K90K120K150K68895.5068945.3268931.3545097.9944968.4344933.2744958.07123484.28120728.22120606.381. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUjihgfedcba0.07880.15760.23640.31520.3940.350.350.350.350.350.350.350.340.340.34MIN: 0.22 / MAX: 8.35MIN: 0.22 / MAX: 8.62MIN: 0.21 / MAX: 8.91MIN: 0.23 / MAX: 8.63MIN: 0.23 / MAX: 9.15MIN: 0.23 / MAX: 8.84MIN: 0.23 / MAX: 9.09MIN: 0.29 / MAX: 7.09MIN: 0.29 / MAX: 10.87MIN: 0.29 / MAX: 7.331. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures effect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgThroughput FoM, More Is BetterKripke 1.2.6jihgfed80M160M240M320M400M3490198003501512003548080002371757002365910002362439002409945001. (CXX) g++ options: -O3 -fopenmp -ldl

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

c: The test quit with a non-zero exit status.

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: defconfigjihgfedcba122436486033.9333.8733.9555.1755.1555.0955.1727.4127.2427.35

Build: allmodconfig

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

c: The test quit with a non-zero exit status.

d: The test quit with a non-zero exit status.

e: The test quit with a non-zero exit status.

f: The test quit with a non-zero exit status.

g: The test quit with a non-zero exit status.

h: The test quit with a non-zero exit status.

i: The test quit with a non-zero exit status.

j: The test quit with a non-zero exit status.

SVT-AV1

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 4 - Input: Bosphorus 4Kjihgfedcba1.17072.34143.51214.68285.85355.1605.0795.0754.1434.1384.1144.1075.0495.1495.2031. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 8 - Input: Bosphorus 4Kjihgfedcba2040608010099.3598.5799.1067.8167.3967.7266.9990.4291.3290.811. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 12 - Input: Bosphorus 4Kjihgfedcba50100150200250224.41227.87230.03160.32161.85162.61163.19163.06166.38163.461. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 13 - Input: Bosphorus 4Kjihgfedcba50100150200250228.77227.21223.42161.32160.80162.05161.85161.50166.69163.011. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 4 - Input: Bosphorus 1080pjihgfedcba369121512.1812.2612.2311.0210.7410.9810.9112.6212.5912.481. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 8 - Input: Bosphorus 1080pjihgfedcba306090120150149.45149.05151.44118.48118.49119.31118.95143.55138.34141.221. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 12 - Input: Bosphorus 1080pjihgfedcba130260390520650584.12591.32580.16528.53521.52525.17526.22431.90427.69422.991. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 13 - Input: Bosphorus 1080pjihgfedcba160320480640800726.50728.54726.89586.75585.37597.01604.99516.91542.61510.361. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Blender

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: BMW27 - Compute: CPU-Onlyjihgfedcba163248648038.5138.5738.4072.0171.9671.4472.0026.1226.2426.20

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Classroom - Compute: CPU-Onlyjihgfedcba408012016020099.2999.3599.54183.29181.70182.56182.9966.7266.6466.42

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Fishy Cat - Compute: CPU-Onlyjihgfedcba2040608010048.8248.6949.1090.6390.2690.3190.0333.0333.1733.22

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Barbershop - Compute: CPU-Onlyjihgfedcba140280420560700351.38351.66352.40669.09667.87670.64670.87254.72255.30254.88

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Pabellon Barcelona - Compute: CPU-Onlyjihgfedcba50100150200250119.04119.42119.30224.12223.95224.10224.1580.4180.7680.54

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer - Model: Crownjihgfedcba122436486043.3143.2742.8921.5821.5921.4421.4855.4055.3954.90MIN: 42.83 / MAX: 44.44MIN: 42.82 / MAX: 44.23MIN: 42.47 / MAX: 43.88MIN: 21.43 / MAX: 21.89MIN: 21.45 / MAX: 21.84MIN: 21.3 / MAX: 21.78MIN: 21.32 / MAX: 21.8MIN: 53.71 / MAX: 58.99MIN: 54.02 / MAX: 57.64MIN: 53.27 / MAX: 57.28

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer ISPC - Model: Crownjihgfedcba132639526545.3245.4045.4822.7722.6622.5722.5956.8156.4656.09MIN: 44.74 / MAX: 46.66MIN: 44.87 / MAX: 46.45MIN: 44.92 / MAX: 46.64MIN: 22.57 / MAX: 23.16MIN: 22.45 / MAX: 22.99MIN: 22.39 / MAX: 22.93MIN: 22.39 / MAX: 22.98MIN: 55.27 / MAX: 59.91MIN: 54.53 / MAX: 59.89MIN: 54.05 / MAX: 59.82

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer - Model: Asian Dragonjihgfedcba132639526548.1048.1948.1224.8224.7024.7324.6959.7959.9160.14MIN: 47.86 / MAX: 48.8MIN: 47.96 / MAX: 48.63MIN: 47.91 / MAX: 48.91MIN: 24.74 / MAX: 25MIN: 24.63 / MAX: 24.84MIN: 24.67 / MAX: 24.86MIN: 24.62 / MAX: 24.84MIN: 58.46 / MAX: 62.03MIN: 58.66 / MAX: 61.96MIN: 58.97 / MAX: 62

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer - Model: Asian Dragon Objjihgfedcba122436486043.4243.5743.5122.1922.1522.1622.2653.6953.8153.57MIN: 43.14 / MAX: 43.89MIN: 43.34 / MAX: 44.03MIN: 43.26 / MAX: 44.02MIN: 22.12 / MAX: 22.33MIN: 22.07 / MAX: 22.32MIN: 22.08 / MAX: 22.35MIN: 22.18 / MAX: 22.42MIN: 52.63 / MAX: 55.24MIN: 52.72 / MAX: 55.86MIN: 52.17 / MAX: 55.38

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer ISPC - Model: Asian Dragonjihgfedcba153045607554.5254.5154.3928.4828.3228.3128.3667.5067.2067.34MIN: 54.24 / MAX: 55.1MIN: 54.22 / MAX: 55.08MIN: 54.12 / MAX: 55.15MIN: 28.37 / MAX: 28.69MIN: 28.23 / MAX: 28.55MIN: 28.21 / MAX: 28.56MIN: 28.26 / MAX: 28.59MIN: 65.64 / MAX: 71.17MIN: 65.48 / MAX: 70.41MIN: 65.61 / MAX: 70.54

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer ISPC - Model: Asian Dragon Objjihgfedcba132639526546.4346.3646.3823.8823.9423.9423.8756.9356.6956.49MIN: 46.17 / MAX: 47.23MIN: 46.13 / MAX: 46.98MIN: 46.09 / MAX: 47.08MIN: 23.79 / MAX: 24.08MIN: 23.84 / MAX: 24.16MIN: 23.84 / MAX: 24.18MIN: 23.78 / MAX: 24.08MIN: 55.56 / MAX: 59.67MIN: 55.42 / MAX: 58.97MIN: 55.29 / MAX: 58.38

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Asian Dragonjihgfed112233445548.7148.7248.9124.9624.8924.8324.85MIN: 48.45 / MAX: 49.47MIN: 48.48 / MAX: 49.3MIN: 48.64 / MAX: 49.47MIN: 24.9 / MAX: 25.13MIN: 24.81 / MAX: 25.06MIN: 24.76 / MAX: 24.96MIN: 24.78 / MAX: 25

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Asian Dragon Objjihgfed102030405043.9043.7743.8422.2622.2722.2922.35MIN: 43.66 / MAX: 44.27MIN: 43.51 / MAX: 44.16MIN: 43.64 / MAX: 44.38MIN: 22.18 / MAX: 22.43MIN: 22.2 / MAX: 22.44MIN: 22.22 / MAX: 22.46MIN: 22.28 / MAX: 22.5

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Crownjihgfed102030405043.9343.8143.5721.8321.7721.9921.89MIN: 43.46 / MAX: 45.01MIN: 43.36 / MAX: 45.05MIN: 43.11 / MAX: 44.65MIN: 21.69 / MAX: 22.17MIN: 21.63 / MAX: 22.18MIN: 21.84 / MAX: 22.32MIN: 21.74 / MAX: 22.23

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Asian Dragonjihgfed122436486054.1554.1954.2227.9127.8327.8327.74MIN: 53.87 / MAX: 54.79MIN: 53.91 / MAX: 54.97MIN: 53.93 / MAX: 54.77MIN: 27.81 / MAX: 28.17MIN: 27.73 / MAX: 28.13MIN: 27.72 / MAX: 28.1MIN: 27.64 / MAX: 27.98

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Asian Dragon Objjihgfed102030405046.0246.0945.9223.7123.5023.5323.35MIN: 45.75 / MAX: 46.57MIN: 45.82 / MAX: 46.6MIN: 45.64 / MAX: 46.53MIN: 23.61 / MAX: 23.93MIN: 23.4 / MAX: 23.74MIN: 23.43 / MAX: 23.73MIN: 23.26 / MAX: 23.57

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Crownjihgfed102030405045.4045.7145.1922.4222.4422.3422.39MIN: 44.88 / MAX: 46.68MIN: 45.11 / MAX: 47.49MIN: 44.65 / MAX: 46.39MIN: 22.22 / MAX: 22.85MIN: 22.25 / MAX: 22.78MIN: 22.15 / MAX: 22.75MIN: 22.2 / MAX: 22.85

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.0Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Onlyjihgfedcba0.41180.82361.23541.64722.0591.371.371.370.720.720.720.721.831.831.83

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.0Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Onlyjihgfedcba0.4140.8281.2421.6562.071.381.381.380.720.720.720.721.821.841.84

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.0Run: RTLightmap.hdr.4096x4096 - Device: CPU-Onlyjihgfedcba0.19580.39160.58740.78320.9790.650.650.650.340.340.340.340.870.860.86

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Onlyjihgfed0.30830.61660.92491.23321.54151.371.371.370.720.720.720.72

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Onlyjihgfed0.31050.6210.93151.2421.55251.381.381.380.720.720.720.72

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RTLightmap.hdr.4096x4096 - Device: CPU-Onlyjihgfed0.14630.29260.43890.58520.73150.650.650.650.340.340.340.34

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 2.0.0Benchmark: vklBenchmarkCPU Scalarjihgfed80160240320400363363363191191190191MIN: 24 / MAX: 6613MIN: 24 / MAX: 6577MIN: 24 / MAX: 6610MIN: 13 / MAX: 3483MIN: 13 / MAX: 3484MIN: 13 / MAX: 3484MIN: 13 / MAX: 3471

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 2.0.0Benchmark: vklBenchmarkCPU ISPCjihgfed2004006008001000922922926489488487487MIN: 67 / MAX: 12356MIN: 67 / MAX: 12374MIN: 67 / MAX: 12416MIN: 36 / MAX: 6969MIN: 36 / MAX: 6952MIN: 36 / MAX: 6956MIN: 36 / MAX: 6949

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: particle_volume/ao/real_timejihgfedcba4812162010.6932010.7430010.798805.575535.573205.541075.5746915.9872015.9785015.98600

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: particle_volume/scivis/real_timejihgfedcba4812162010.7664010.7904010.783205.565395.555815.563535.5700115.9778015.9888015.95280

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: particle_volume/pathtracer/real_timejihgfedcba50100150200250192.70192.40192.65151.68151.78151.51151.91214.14214.07215.10

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: gravity_spheres_volume/dim_512/ao/real_timejihgfedcba4812162010.8743010.8829010.852405.622785.614545.620405.6074714.1399014.1783014.23690

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: gravity_spheres_volume/dim_512/scivis/real_timejihgfedcba4812162010.5740010.5848010.591305.477255.452275.461535.4532913.8317013.7666013.87390

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_timejihgfedcba4812162012.5104012.4987012.543806.600856.595636.582706.5874516.5350016.4365016.34680

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 1 - Buffer Length: 256 - Filter Length: 32jihgfedcba8M16M24M32M40M371200003714100037145000352360003527100035315000352280003945300039486000394990001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 1 - Buffer Length: 256 - Filter Length: 57jihgfedcba13M26M39M52M65M557150005584100051443000528540005287900052827000526650005751900059296000594010001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 2 - Buffer Length: 256 - Filter Length: 32jihgfedcba17M34M51M68M85M724000007249100072468000686780006886100068846000670540007692400077019000771810001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 2 - Buffer Length: 256 - Filter Length: 57jihgfedcba30M60M90M120M150M1113700001097000001052800001048000001057400001054800001056500001185500001140100001174900001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

easyWave

The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. EasyWave supports making use of OpenMP for CPU multi-threading and there are also GPU ports available but not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240jihgfed0.37280.74561.11841.49121.8641.2451.2881.2841.6481.6571.6541.6571. (CXX) g++ options: -O3 -fopenmp

OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200jihgfed91827364525.5626.3926.1137.9538.0238.0738.111. (CXX) g++ options: -O3 -fopenmp

OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400jihgfed2040608010068.5268.2468.5797.5397.9999.4298.981. (CXX) g++ options: -O3 -fopenmp

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 4 - Buffer Length: 256 - Filter Length: 32jihgfedcba30M60M90M120M150M1457400001459200001458200001384600001385800001386200001386000001536700001536900001538500001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 4 - Buffer Length: 256 - Filter Length: 57jihgfedcba40M80M120M160M200M2001200001986400002009300001907500001898800001912300001889300001945100001965900001962200001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 8 - Buffer Length: 256 - Filter Length: 32jihgfedcba70M140M210M280M350M2924900002924300002926200002774100002763900002777800002780300003067600003051100003075400001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 8 - Buffer Length: 256 - Filter Length: 57jihgfedcba80M160M240M320M400M3816600003772500003783300003578100003504500003579900003633100003669900003669300003694300001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 1 - Buffer Length: 256 - Filter Length: 512jihgfedcba3M6M9M12M15M128990001336300012909000122560001268100012366000126830001422500014021000139090001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 16 - Buffer Length: 256 - Filter Length: 32jihgfedcba130M260M390M520M650M5856700005838700005858500005430500005450200005451400005453600006036500006024700005942300001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 16 - Buffer Length: 256 - Filter Length: 57jihgfedcba160M320M480M640M800M7371200007358000007414200006820700006933400006929200006891500006749300006927600006997400001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 2 - Buffer Length: 256 - Filter Length: 512jihgfedcba6M12M18M24M30M263780002591000025648000227270002519900025207000246270002822700027736000279010001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 32 - Buffer Length: 256 - Filter Length: 32jihgfedcba300M600M900M1200M1500M11699000001172900000117240000010471000001041900000104660000010471000001184800000119030000011835000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 32 - Buffer Length: 256 - Filter Length: 57jihgfedcba300M600M900M1200M1500M13698000001394100000135290000010334000001024600000103200000010350000001254800000121420000011921000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 4 - Buffer Length: 256 - Filter Length: 512jihgfedcba12M24M36M48M60M497810005212900052890000495560004997700050380000502580005516500055588000529110001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 64 - Buffer Length: 256 - Filter Length: 32jihgfedcba500M1000M1500M2000M2500M20565000002052600000205750000010562000001057100000105750000010595000002206800000221210000022077000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 64 - Buffer Length: 256 - Filter Length: 57jihgfedcba400M800M1200M1600M2000M19223000001899700000191680000010993000001094600000109540000010933000002010300000200190000019944000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 8 - Buffer Length: 256 - Filter Length: 512jihgfedcba20M40M60M80M100M1042200001047800001044400001001700009944100097005000995940001091400001080800001098700001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 96 - Buffer Length: 256 - Filter Length: 32jihgfedcba600M1200M1800M2400M3000M20710000002069900000206830000010657000001065300000106510000010652000002999800000299540000030058000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 96 - Buffer Length: 256 - Filter Length: 57jihgfedcba600M1200M1800M2400M3000M19995000001997000000199550000011182000001120500000111780000011208000002564900000257110000025598000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 16 - Buffer Length: 256 - Filter Length: 512jihgfedcba50M100M150M200M250M2094900002097200002075900001946700001945000001960400001938500002149100002161500002160800001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 32 - Buffer Length: 256 - Filter Length: 512jihgfedcba90M180M270M360M450M3932700003932000003916800002740700002733900002734800002737600004244000004296200004258100001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 64 - Buffer Length: 256 - Filter Length: 512jihgfedcba130M260M390M520M650M5110700005123100005120200002817300002830300002818300002829200006226300006109500006225600001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 96 - Buffer Length: 256 - Filter Length: 512jihgfedcba150M300M450M600M750M5194400005201400005197000002865300002859200002858800002862500007150300007181400007116400001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Apache Hadoop

This is a benchmark of the Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 50 - Files: 100000jihgfedcba140K280K420K560K700K617284581395645161546448578035552486578035401606469484460829

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 100 - Files: 100000jihgfedcba120K240K360K480K600K370370555556526316460829523560294985529101403226404858420168

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 50 - Files: 1000000jihgfedcba300K600K900K1200K1500K12515642536781233046654022122100125100427831968399510204081126126

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 50 - Files: 100000jihgfedcba13K26K39K52K65K59347589975872060680583435861758617439374128843649

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Delete - Threads: 50 - Files: 100000jihgfedcba20K40K60K80K100K9560210224910040210395096993100604101010905807380191075

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 100 - Files: 1000000jihgfedcba300K600K900K1200K1500K2891013829953237291107420130378112048191248439185874173822215332

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Rename - Threads: 50 - Files: 100000jihgfedcba20K40K60K80K100K89206908278305682237816338223782372771017304670522

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 100 - Files: 100000jihgfedcba13K26K39K52K65K58005590326049658928593825882457971350753742540733

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 50 - Files: 1000000jihgfedcba16K32K48K64K80K73153724647341172706699207089772134522605211953665

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Delete - Threads: 100 - Files: 100000jihgfedcba20K40K60K80K100K973711020411052631025649940498039105708734759082787566

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Delete - Threads: 50 - Files: 1000000jihgfedcba20K40K60K80K100K113572110436113404110828111198113327111012901479731498932

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Rename - Threads: 100 - Files: 100000jihgfedcba20K40K60K80K100K83472823727980880386794918382282102671596934875529

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Rename - Threads: 50 - Files: 1000000jihgfedcba20K40K60K80K100K88550863118512184810825018404183921746387167973239

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 100 - Files: 1000000jihgfedcba16K32K48K64K80K72754720107361170922705377005771296440014443746145

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Delete - Threads: 100 - Files: 1000000jihgfedcba20K40K60K80K100K110693114692111782113895110803113225112613970318671590114

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Rename - Threads: 100 - Files: 1000000jihgfedcba20K40K60K80K100K87161852158431085763858158436081208668277212973078

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 50 - Files: 100000jihgfedcba200K400K600K800K1000K751880869565925926561798709220389105632911657895862069529101

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 100 - Files: 100000jihgfedcba200K400K600K800K1000K684932847458595238487805478469613497591716729927458716515464

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 50 - Files: 1000000jihgfedcba500K1000K1500K2000K2500K1941748193050242643920366601795332320924181818228425219417482173913

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 100 - Files: 1000000jihgfedcba500K1000K1500K2000K2500K558036250626623529412049180196463723562760060118939391619701886792

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterApache Cassandra 4.1.3Test: Writesjihgfedcba70K140K210K280K350K316485318109324575197092196287195798197866270480256661248095

207 Results Shown

SPECFEM3D:
  Tomographic Model
  Homogeneous Halfspace
  Mount St. Helens
  Layered Halfspace
BRL-CAD
Remhos
QuantLib
SPECFEM3D
QuantLib
nekRS:
  Kershaw
  TurboPipe Periodic
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
TiDB Community Server
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
TiDB Community Server
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
oneDNN
TiDB Community Server
oneDNN
TiDB Community Server:
  oltp_update_non_index - 1
  oltp_update_non_index - 32
oneDNN
TiDB Community Server
oneDNN:
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU
  IP Shapes 1D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
  IP Shapes 1D - bf16bf16bf16 - CPU
TiDB Community Server:
  oltp_update_non_index - 128
  oltp_read_write - 1
oneDNN
TiDB Community Server:
  oltp_read_write - 16
  oltp_point_select - 64
  oltp_point_select - 32
  oltp_point_select - 16
  oltp_update_index - 1
oneDNN
TiDB Community Server
oneDNN
TiDB Community Server
oneDNN
TiDB Community Server
oneDNN
TiDB Community Server:
  oltp_update_non_index - 64
  oltp_read_write - 128
  oltp_point_select - 1
oneDNN
TiDB Community Server
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
OpenVINO:
  Face Detection FP16 - CPU:
    FPS
    ms
  Person Detection FP16 - CPU:
    FPS
    ms
  Person Detection FP32 - CPU:
    FPS
    ms
  Vehicle Detection FP16 - CPU:
    FPS
    ms
  Face Detection FP16-INT8 - CPU:
    FPS
    ms
  Face Detection Retail FP16 - CPU:
    FPS
    ms
  Road Segmentation ADAS FP16 - CPU:
    FPS
    ms
  Vehicle Detection FP16-INT8 - CPU:
    FPS
    ms
  Weld Porosity Detection FP16 - CPU:
    FPS
    ms
  Face Detection Retail FP16-INT8 - CPU:
    FPS
    ms
  Road Segmentation ADAS FP16-INT8 - CPU:
    FPS
    ms
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
  Weld Porosity Detection FP16-INT8 - CPU:
    FPS
    ms
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
  Handwritten English Recognition FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16 - CPU:
    FPS
    ms
  Handwritten English Recognition FP16-INT8 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    FPS
    ms
Kripke
Timed Linux Kernel Compilation
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
  Preset 4 - Bosphorus 1080p
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
  Fishy Cat - CPU-Only
  Barbershop - CPU-Only
  Pabellon Barcelona - CPU-Only
Embree:
  Pathtracer - Crown
  Pathtracer ISPC - Crown
  Pathtracer - Asian Dragon
  Pathtracer - Asian Dragon Obj
  Pathtracer ISPC - Asian Dragon
  Pathtracer ISPC - Asian Dragon Obj
Embree:
  Pathtracer - Asian Dragon
  Pathtracer - Asian Dragon Obj
  Pathtracer - Crown
  Pathtracer ISPC - Asian Dragon
  Pathtracer ISPC - Asian Dragon Obj
  Pathtracer ISPC - Crown
Intel Open Image Denoise:
  RT.hdr_alb_nrm.3840x2160 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
  RTLightmap.hdr.4096x4096 - CPU-Only
Intel Open Image Denoise:
  RT.hdr_alb_nrm.3840x2160 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
  RTLightmap.hdr.4096x4096 - CPU-Only
OpenVKL:
  vklBenchmarkCPU Scalar
  vklBenchmarkCPU ISPC
OSPRay:
  particle_volume/ao/real_time
  particle_volume/scivis/real_time
  particle_volume/pathtracer/real_time
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/pathtracer/real_time
Liquid-DSP:
  1 - 256 - 32
  1 - 256 - 57
  2 - 256 - 32
  2 - 256 - 57
easyWave:
  e2Asean Grid + BengkuluSept2007 Source - 240
  e2Asean Grid + BengkuluSept2007 Source - 1200
  e2Asean Grid + BengkuluSept2007 Source - 2400
Liquid-DSP:
  4 - 256 - 32
  4 - 256 - 57
  8 - 256 - 32
  8 - 256 - 57
  1 - 256 - 512
  16 - 256 - 32
  16 - 256 - 57
  2 - 256 - 512
  32 - 256 - 32
  32 - 256 - 57
  4 - 256 - 512
  64 - 256 - 32
  64 - 256 - 57
  8 - 256 - 512
  96 - 256 - 32
  96 - 256 - 57
  16 - 256 - 512
  32 - 256 - 512
  64 - 256 - 512
  96 - 256 - 512
Apache Hadoop:
  Open - 50 - 100000
  Open - 100 - 100000
  Open - 50 - 1000000
  Create - 50 - 100000
  Delete - 50 - 100000
  Open - 100 - 1000000
  Rename - 50 - 100000
  Create - 100 - 100000
  Create - 50 - 1000000
  Delete - 100 - 100000
  Delete - 50 - 1000000
  Rename - 100 - 100000
  Rename - 50 - 1000000
  Create - 100 - 1000000
  Delete - 100 - 1000000
  Rename - 100 - 1000000
  File Status - 50 - 100000
  File Status - 100 - 100000
  File Status - 50 - 1000000
  File Status - 100 - 1000000
Apache Cassandra