extra tests2

Tests for a future article. AMD EPYC 9124 16-Core testing with a Supermicro H13SSW (1.1 BIOS) and astdrmfb on AlmaLinux 9.2 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2310228-NE-EXTRATEST37&export=txt&grs&rdt&rro.

extra tests2ProcessorMotherboardMemoryDiskGraphicsOSKernelCompilerFile-SystemScreen Resolutionabcdefg2 x AMD EPYC 9254 24-Core @ 2.90GHz (48 Cores / 96 Threads)Supermicro H13DSH (1.5 BIOS)24 x 32 GB DDR5-4800MT/s Samsung M321R4GA3BB6-CQKET2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07astdrmfbAlmaLinux 9.25.14.0-284.25.1.el9_2.x86_64 (x86_64)GCC 11.3.1 20221121ext41024x768AMD EPYC 9124 16-Core @ 3.00GHz (16 Cores / 32 Threads)Supermicro H13SSW (1.1 BIOS)12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123NOpenBenchmarking.orgKernel Details- Transparent Huge Pages: alwaysCompiler Details- --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl Processor Details- a: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e- b: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e- c: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e- d: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111- e: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111- f: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111- g: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111Java Details- OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)Python Details- Python 3.9.16Security Details- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

extra tests2hadoop: File Status - 50 - 1000000hadoop: Open - 100 - 1000000hadoop: Open - 50 - 1000000deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Streamopenvino: Handwritten English Recognition FP16-INT8 - CPUdeepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Streamdeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamdeepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamdeepsparse: ResNet-50, Baseline - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streamopenvino: Handwritten English Recognition FP16 - CPUopenvino: Face Detection FP16 - CPUopenvino: Weld Porosity Detection FP16-INT8 - CPUospray: particle_volume/ao/real_timeospray: particle_volume/scivis/real_timeopenvino: Weld Porosity Detection FP16 - CPUdeepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Streamliquid-dsp: 96 - 256 - 32deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streamopenvino: Face Detection FP16-INT8 - CPUdeepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Streamblender: Pabellon Barcelona - CPU-Onlyopenvino: Face Detection Retail FP16-INT8 - CPUblender: Classroom - CPU-Onlyblender: BMW27 - CPU-Onlyopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUblender: Fishy Cat - CPU-Onlyopenvino: Age Gender Recognition Retail 0013 FP16 - CPUopenvino: Person Detection FP32 - CPUspecfem3d: Layered Halfspaceopenvino: Person Detection FP16 - CPUblender: Barbershop - CPU-Onlydeepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Streambrl-cad: VGR Performance Metricembree: Pathtracer - Crownopenvino: Machine Translation EN To DE FP16 - CPUopenvino: Vehicle Detection FP16 - CPUoidn: RTLightmap.hdr.4096x4096 - CPU-Onlyoidn: RT.ldr_alb_nrm.3840x2160 - CPU-Onlyospray: gravity_spheres_volume/dim_512/scivis/real_timeoidn: RT.hdr_alb_nrm.3840x2160 - CPU-Onlyospray: gravity_spheres_volume/dim_512/ao/real_timeembree: Pathtracer ISPC - Crownspecfem3d: Mount St. Helensliquid-dsp: 96 - 256 - 512ospray: gravity_spheres_volume/dim_512/pathtracer/real_timespecfem3d: Homogeneous Halfspaceopenvino: Vehicle Detection FP16-INT8 - CPUembree: Pathtracer - Asian Dragonembree: Pathtracer - Asian Dragon Objopenvino: Person Vehicle Bike Detection FP16 - CPUembree: Pathtracer ISPC - Asian Dragon Objembree: Pathtracer ISPC - Asian Dragonspecfem3d: Water-layered Halfspaceopenvino: Face Detection Retail FP16 - CPUopenvino: Road Segmentation ADAS FP16-INT8 - CPUspecfem3d: Tomographic Modelliquid-dsp: 96 - 256 - 57openvino: Road Segmentation ADAS FP16 - CPUhadoop: File Status - 50 - 100000liquid-dsp: 64 - 256 - 512liquid-dsp: 64 - 256 - 32build-linux-kernel: defconfighadoop: File Status - 100 - 1000000openvino: Face Detection FP16 - CPUremhos: Sample Remap Exampleopenvino: Face Detection FP16-INT8 - CPUliquid-dsp: 64 - 256 - 57hadoop: Open - 100 - 100000openvino: Person Detection FP32 - CPUopenvino: Person Detection FP16 - CPUopenvino: Machine Translation EN To DE FP16 - CPUopenvino: Vehicle Detection FP16 - CPUhadoop: Create - 100 - 100000openvino: Vehicle Detection FP16-INT8 - CPUhadoop: Create - 100 - 1000000openvino: Person Vehicle Bike Detection FP16 - CPUhadoop: File Status - 100 - 100000liquid-dsp: 32 - 256 - 512openvino: Face Detection Retail FP16 - CPUopenvino: Road Segmentation ADAS FP16-INT8 - CPUtidb: oltp_read_write - 128tidb: oltp_read_write - 64openvino: Road Segmentation ADAS FP16 - CPUhadoop: Create - 50 - 100000hadoop: Open - 50 - 100000hadoop: Delete - 100 - 100000ospray: particle_volume/pathtracer/real_timehadoop: Delete - 50 - 100000hadoop: Create - 50 - 1000000cassandra: Writestidb: oltp_point_select - 1svt-av1: Preset 8 - Bosphorus 4Ktidb: oltp_read_write - 32hadoop: Delete - 100 - 1000000tidb: oltp_update_non_index - 1tidb: oltp_read_write - 1hadoop: Rename - 100 - 1000000tidb: oltp_update_non_index - 128svt-av1: Preset 4 - Bosphorus 4Khadoop: Delete - 50 - 1000000tidb: oltp_update_index - 1svt-av1: Preset 12 - Bosphorus 1080phadoop: Rename - 100 - 100000liquid-dsp: 2 - 256 - 512tidb: oltp_point_select - 128liquid-dsp: 32 - 256 - 57tidb: oltp_update_non_index - 64svt-av1: Preset 8 - Bosphorus 1080psvt-av1: Preset 13 - Bosphorus 1080phadoop: Rename - 50 - 1000000nekrs: TurboPipe Periodicsvt-av1: Preset 4 - Bosphorus 1080phadoop: Rename - 50 - 100000liquid-dsp: 1 - 256 - 512tidb: oltp_update_index - 64deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Streamliquid-dsp: 2 - 256 - 32onednn: IP Shapes 1D - bf16bf16bf16 - CPUliquid-dsp: 32 - 256 - 32liquid-dsp: 8 - 256 - 512liquid-dsp: 2 - 256 - 57tidb: oltp_point_select - 64liquid-dsp: 1 - 256 - 57nekrs: Kershawliquid-dsp: 4 - 256 - 512liquid-dsp: 1 - 256 - 32tidb: oltp_update_index - 128liquid-dsp: 16 - 256 - 512liquid-dsp: 8 - 256 - 32liquid-dsp: 16 - 256 - 32liquid-dsp: 4 - 256 - 32openvino: Age Gender Recognition Retail 0013 FP16 - CPUtidb: oltp_update_non_index - 32tidb: oltp_point_select - 32openvino: Handwritten English Recognition FP16-INT8 - CPUdeepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Streamopenvino: Face Detection Retail FP16-INT8 - CPUdeepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Streamtidb: oltp_point_select - 16tidb: oltp_update_index - 32deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streamonednn: IP Shapes 3D - u8s8f32 - CPUtidb: oltp_read_write - 16deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Streamonednn: IP Shapes 3D - f32 - CPUopenvino: Weld Porosity Detection FP16 - CPUliquid-dsp: 8 - 256 - 57deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Streamopenvino: Weld Porosity Detection FP16-INT8 - CPUopenvino: Handwritten English Recognition FP16 - CPUliquid-dsp: 4 - 256 - 57svt-av1: Preset 12 - Bosphorus 4Ktidb: oltp_update_non_index - 16liquid-dsp: 16 - 256 - 57svt-av1: Preset 13 - Bosphorus 4Konednn: IP Shapes 3D - bf16bf16bf16 - CPUopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUonednn: IP Shapes 1D - f32 - CPUkripke: easywave: e2Asean Grid + BengkuluSept2007 Source - 2400deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streamonednn: Recurrent Neural Network Inference - u8s8f32 - CPUonednn: Recurrent Neural Network Inference - f32 - CPUonednn: Convolution Batch Shapes Auto - u8s8f32 - CPUonednn: IP Shapes 1D - u8s8f32 - CPUembree: Pathtracer ISPC - Asian Dragon Objtidb: oltp_update_index - 16embree: Pathtracer - Crownonednn: Deconvolution Batch shapes_1d - u8s8f32 - CPUonednn: Deconvolution Batch shapes_3d - u8s8f32 - CPUdeepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Streamonednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPUonednn: Deconvolution Batch shapes_1d - f32 - CPUdeepsparse: ResNet-50, Baseline - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Streamonednn: Convolution Batch Shapes Auto - f32 - CPUdeepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamonednn: Recurrent Neural Network Training - u8s8f32 - CPUembree: Pathtracer ISPC - Asian Dragoneasywave: e2Asean Grid + BengkuluSept2007 Source - 240embree: Pathtracer - Asian Dragonopenvkl: vklBenchmarkCPU Scalardeepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamdeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamonednn: Convolution Batch Shapes Auto - bf16bf16bf16 - CPUembree: Pathtracer ISPC - Crownembree: Pathtracer - Asian Dragon Objopenvkl: vklBenchmarkCPU ISPCeasywave: e2Asean Grid + BengkuluSept2007 Source - 1200onednn: Recurrent Neural Network Training - f32 - CPUonednn: Deconvolution Batch shapes_1d - bf16bf16bf16 - CPUonednn: Deconvolution Batch shapes_3d - bf16bf16bf16 - CPUdeepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streamonednn: Deconvolution Batch shapes_3d - f32 - CPUonednn: Recurrent Neural Network Training - bf16bf16bf16 - CPUoidn: RTLightmap.hdr.4096x4096 - CPU-Onlyoidn: RT.ldr_alb_nrm.3840x2160 - CPU-Onlyoidn: RT.hdr_alb_nrm.3840x2160 - CPU-Onlyopenradioss: Bumper Beamabcdefg217391321533211261265137.01141244.6949.325839.4951218.146439.4391489.1203485.6725215.6383718.9189158.924322.25051560.0330.415776.9415.98615.95282945.26201.3925300580000068.598856.011417.070680.549837.5866.4226.2120606.3833.2286884.64283.9726.885983804282.55254.88672.463577216254.9017317.222033.170.861.8413.87391.8314.236956.087111.02476577571164000016.346815.105117732873.2460.144953.57332454.0956.485367.337826.9850209085882.91842.9112.3129466522559800000748.44529101622560000220770000027.3541886792393.616.346213.94199440000042016842.2442.4437.85.89407334.17461454.885154644258100002.0314.23857577909016.024364946082987566215.0969107553665248095433190.81158974901141328254073078511055.203989321212422.9947552927901000159242119210000041281141.219510.36173239676771000012.477705221390900035.62877181000118350000010987000011749000012756759401000111069000005291100039499000270872160800003075400005942300001538500000.542873510462738.516.90744.864.650818361347.661238331118.750916.26369430000150.5897485.71758.2830.72196220000163.45918095699740000163.0130.3474.321112558109.802749.365333.341249.0092605.758605.0388111.0131194174817382210204085138.83411239.6749.166639.4681219.526339.4539489.4464488.1264215.9254717.9693159.0596321.18291546.0230.445780.4415.978515.98882986.46201.2528299540000068.661756.061403.06580.769849.0766.6426.24120728.2233.1787359.23284.9928.65210863284.22255.3672.373476851755.3925317.282028.010.861.8413.76661.8314.178356.455111.31849570971814000016.436514.460586982880.5859.912453.81352450.2656.690267.195129.4607611975836.27854.5112.1005959472571100000750.49862069610950000221210000027.241161970393.2316.791213.62200190000040485842.0942.237.795.91374254.16444374.894587164296200002.0514.03890998018315.984128846948490827214.0747380152119256661440591.322615208671513122510721295.14997314427.6866934827736000159728121420000039759138.338542.61171679675736000012.59173046140210002437135.642277019000119030000010808000011401000013080259296000112403000005558800039486000274642161500003051100006024700001536900000.542891410618038.6617.06694.854.64766751517817347.218936950118.948216.02366930000150.6055487.35998.2731196590000166.37818068692760000166.6920.3474.5553109.233249.108233.379848.9703606.6693605.7307110.91522842521858746839955153.66441237.2947.153539.4464218.516239.4183487.0522489.1106215.647716.1404164.6131321.50821551.6330.435802.6515.987215.97782987.33201.5402299980000068.628756.021418.904180.419845.2766.7226.12123484.2833.0386789.8284.3127.490850157282.67254.72671.255476252955.4037317.332029.790.871.8213.83171.8314.139956.807811.3273597771503000016.53514.8082733652881.1459.792953.69272455.5156.932767.503827.0602350795840.53849.312.0409178772564900000757.38657895622630000220680000027.4081893939393.3716.243213.79201030000040322642.1942.4337.795.9350754.16440014.887299274244000002.0514.127846915.834393740160673475214.1369058052260270480447190.41759630970311381248566827528655.049901471189431.8956715928227000149962125480000039106143.545516.90674638675417000012.61777101142250002332435.682576924000118480000010914000011855000057519000108267000005516500039453000265462149100003067600006036500001536700000.5438.7516.88634.864.63486540617565347.372837368118.780116.02366990000145.2562507.47868.2430.89194510000163.055674930000161.4950.3474.503112681109.584749.017333.463749.2055605.8765605.9183111.025181818212484392783191599.2079395.6616.164813.069472.45513.1312163.5559162.850271.9189240.552955.6132108.909532.5910.472013.775.574695.570011039.6171.137106520000024.483320.03508.087224.153540.88182.997244958.0790.0332002.62106.971.614294327107.02670.87257.272829806421.4811124.12797.640.340.725.453290.725.6074722.58526.7364141182862500006.5874535.5716849081175.6724.687222.26141036.9923.873328.364362.4417495852564.78370.5727.3309855881120800000344.67632911282920000105950000055.173600601761.5930.761398.52109330000052910174.8174.7164.4110.01579716.79712967.75917162737600003.1121.57597275533423.258617578035105708151.90510101072134197866589866.988469771126131693812084.1071110121479526.2168210224627000129492103500000034224118.946604.98683921793457000010.9182372126830002110831.0525670540001.03749104710000099594000105650000115675526650001031890000050258000352280001938500002780300005453600001386000000.49262739814940.415.72464.514.99617612325.87630.6039536480112.24991.2575815.36363310000143.7643493.5967.9330.02188930000163.18918563689150000161.8541.028750.352.4940824099450098.9873.3058849.163838.5161.558240.65225923.35311262221.89430.6282360.847805110.1114847.383.8157649.086333.22422.1333248.87291642.5127.74381.65724.845191606.5773606.1011.3378922.392222.351748738.1051641.923.059911.91374111.10813.377821643.990.340.720.7232092412048192510041599.1543432.3216.139212.943372.657113.1187162.9298163.136171.9146240.234955.4634109.0938530.9910.472007.535.541075.563531039.8271.2727106510000024.472520511.4098224.13544.18182.5671.4444933.2790.3132032.06107.2470.189028506107.27670.64257.89429612521.4357123.61793.750.340.725.461530.725.620422.569426.7991434462858800006.582735.0301348891174.624.734322.15771028.6423.939328.314162.3251468282562.54373.6427.4598213081117800000342.81389105281830000105750000055.093235627761.1630.845398.91109540000029498574.5474.564.6810.06588246.8700577.776134972734800003.1121.4601455389323.325861755248698039151.50610060470897195798597667.721467371132251708320984360421384.1141133271490525.1738382225207000129904103200000033881119.307597.01184041793101000010.98482237123660002127130.9867688460001.1443210466000009700500010548000011865752827000102640000005038000035315000246111960400002777800005451400001386200000.49262859690736.9815.62454.514.98597025017117325.74160.57579436784112.05741.2804315.36357990000144.1013494.25757.9630.1191230000162.60818557692920000162.0511.054250.352.5652223624390099.41573.2609851.659849.7121.549110.6576123.52831256721.99090.6339750.844434109.9654841.0783.8442149.009433.26252.125749.05981639.3627.82931.65424.8282190606.755607.9131.3386122.342222.291148738.06716413.06371.91781111.08883.384361643.970.340.720.721795332130378112210011600.5275431.9416.130713.085372.573613.0934162.8976162.929471.9401240.164255.5428109.09533.7410.482004.765.57325.555811038.4771.044106530000024.523820.01508.2088223.953548.78181.771.9644968.4390.2631951.64106.7670.542255905107.39667.87257.504629560321.5913124.3791.740.340.725.452270.725.6145422.656626.8731684552859200006.5956335.5350730011180.8524.704722.1491041.8723.935428.323761.2817691242539.97369.2626.9737573951120500000343.49709220283030000105710000055.1481964637760.5730.725399.24109460000052356074.8774.4364.3110.09593826.76705377.674784692733900003.1421.65603105495623.285834357803599404151.789699369920196287595467.393471411108031697321885815414244.1381111981483521.5187949125199000130389102460000034470118.486585.36882501795579000010.73681633126810002106731.0292688610001.001361041900000994410001057400001190925287900099764500004997700035271000248301945000002763900005450200001385800000.49266959736837.0115.7214.54.987770105324.95680.60083436125112.4131.2065315.38350450000143.6922494.22117.9729.95189880000161.847693340000160.7981.061440.352.4971423659100097.98773.2163849.344851.4941.572820.65318223.50421269221.77460.6303250.850691110.0001845.3083.8182349.071233.27512.1306249.07141636.4427.8261.65724.887191606.7852607.81711.3418322.440722.267648838.0151636.763.056741.91422110.88523.379561642.350.340.720.72203666011074206540221602.5221432.216.067213.07372.687913.0596163.2282162.993771.9003239.517655.4264109.2191538.0110.482006.095.575535.565391039.3770.9275106570000024.460720.05509.139224.123533.64183.2972.0145097.9990.6332008.03107.2469.955609165107.04669.09257.280829552221.5847123.41793.90.340.725.477250.725.6227822.774527.6966313712865300006.6008535.3786000211175.5824.819322.19011031.623.879628.479362.8109243762557.66372.2627.7464751621118200000341.36561798281730000105620000055.1722049180759.9230.75398.13109930000046082974.5874.7164.7710.06589286.79709227.744878052740700003.1221.47599445530123.4260680546448102564151.6811039507270619709267.811469931138951705319585763416954.1431108281481528.5338038622727000103340000034107118.481586.74884810796491000011.016822371225600031.0574686780001.12723104710000010017000010480000011854952854000105006000004955600035236000245741946700002774100005430500001384600000.499684036.9815.68924.524.97876992317135325.50750.6123236088112.47791.2791815.37357810000144.1053495.60337.9629.72190750000160.32218735682070000161.3241.045670.352.5144123717570097.52973.1867837.595848.0321.551180.647723.70911262721.83050.6291080.843492109.9037847.4173.8238149.025933.36742.1181348.98371631.9927.911.64824.9619191608.7163607.16281.3356422.421522.255948937.951637.373.054581.91274110.97593.381561641.40.340.720.72OpenBenchmarking.org

Apache Hadoop

Operation: File Status - Threads: 50 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 50 - Files: 1000000gfedcba500K1000K1500K2000K2500K20366601795332320924181818228425219417482173913

Apache Hadoop

Operation: Open - Threads: 100 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 100 - Files: 1000000gfedcba300K600K900K1200K1500K1107420130378112048191248439185874173822215332

Apache Hadoop

Operation: Open - Threads: 50 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 50 - Files: 1000000gfedcba300K600K900K1200K1500K654022122100125100427831968399510204081126126

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Streamgfedcba110022003300440055001602.521600.531599.151599.215153.665138.835137.01

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16-INT8 - Device: CPUgfedcba30060090012001500432.20431.94432.32395.661237.291239.671244.691. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Streamgfedcba112233445516.0716.1316.1416.1647.1549.1749.33

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamgfedcba91827364513.0713.0912.9413.0739.4539.4739.50

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Streamgfedcba5010015020025072.6972.5772.6672.46218.52219.53218.15

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streamgfedcba91827364513.0613.0913.1213.1339.4239.4539.44

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streamgfedcba110220330440550163.23162.90162.93163.56487.05489.45489.12

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Streamgfedcba110220330440550162.99162.93163.14162.85489.11488.13485.67

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamgfedcba5010015020025071.9071.9471.9171.92215.65215.93215.64

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Streamgfedcba160320480640800239.52240.16240.23240.55716.14717.97718.92

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Streamgfedcba408012016020055.4355.5455.4655.61164.61159.06158.92

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streamgfedcba70140210280350109.22109.09109.09108.91321.51321.18322.25

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16 - Device: CPUgfedcba30060090012001500538.01533.74530.99532.591551.631546.021560.031. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection FP16 - Device: CPUgfedcba71421283510.4810.4810.4710.4730.4330.4430.411. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16-INT8 - Device: CPUgfedcba120024003600480060002006.092004.762007.532013.775802.655780.445776.941. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OSPRay

Benchmark: particle_volume/ao/real_time

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: particle_volume/ao/real_timegfedcba481216205.575535.573205.541075.5746915.9872015.9785015.98600

OSPRay

Benchmark: particle_volume/scivis/real_time

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: particle_volume/scivis/real_timegfedcba481216205.565395.555815.563535.5700115.9778015.9888015.95280

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16 - Device: CPUgfedcba60012001800240030001039.371038.471039.821039.612987.332986.462945.261. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Streamgfedcba408012016020070.9371.0471.2771.14201.54201.25201.39

Liquid-DSP

Threads: 96 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 96 - Buffer Length: 256 - Filter Length: 32gfedcba600M1200M1800M2400M3000M10657000001065300000106510000010652000002999800000299540000030058000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Streamgfedcba153045607524.4624.5224.4724.4868.6368.6668.60

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection FP16-INT8 - Device: CPUgfedcba132639526520.0520.0120.0020.0356.0256.0656.011. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamgfedcba30060090012001500509.14508.21511.41508.091418.901403.071417.07

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Pabellon Barcelona - Compute: CPU-Onlygfedcba50100150200250224.12223.95224.10224.1580.4180.7680.54

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16-INT8 - Device: CPUgfedcba2K4K6K8K10K3533.643548.783544.183540.889845.279849.079837.581. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Blender

Blend File: Classroom - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Classroom - Compute: CPU-Onlygfedcba4080120160200183.29181.70182.56182.9966.7266.6466.42

Blender

Blend File: BMW27 - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: BMW27 - Compute: CPU-Onlygfedcba163248648072.0171.9671.4472.0026.1226.2426.20

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUgfedcba30K60K90K120K150K45097.9944968.4344933.2744958.07123484.28120728.22120606.381. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Blender

Blend File: Fishy Cat - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Fishy Cat - Compute: CPU-Onlygfedcba2040608010090.6390.2690.3190.0333.0333.1733.22

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUgfedcba20K40K60K80K100K32008.0331951.6432032.0632002.6286789.8087359.2386884.641. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Detection FP32 - Device: CPUgfedcba60120180240300107.24106.76107.24106.90284.31284.99283.971. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

SPECFEM3D

Model: Layered Halfspace

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Layered Halfspacegfedcba163248648069.9670.5470.1971.6127.4928.6526.891. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Detection FP16 - Device: CPUgfedcba60120180240300107.04107.39107.27107.02282.67284.22282.551. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Blender

Blend File: Barbershop - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Barbershop - Compute: CPU-Onlygfedcba140280420560700669.09667.87670.64670.87254.72255.30254.88

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Streamgfedcba150300450600750257.28257.50257.89257.27671.26672.37672.46

BRL-CAD

VGR Performance Metric

OpenBenchmarking.orgVGR Performance Metric, More Is BetterBRL-CAD 7.36VGR Performance Metricgfedcba170K340K510K680K850K2955222956032961252980647625297685177721621. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

Embree

Binary: Pathtracer - Model: Crown

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer - Model: Crowngfedcba122436486021.5821.5921.4421.4855.4055.3954.90MIN: 21.43 / MAX: 21.89MIN: 21.45 / MAX: 21.84MIN: 21.3 / MAX: 21.78MIN: 21.32 / MAX: 21.8MIN: 53.71 / MAX: 58.99MIN: 54.02 / MAX: 57.64MIN: 53.27 / MAX: 57.28

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Machine Translation EN To DE FP16 - Device: CPUgfedcba70140210280350123.41124.30123.61124.12317.33317.28317.221. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16 - Device: CPUgfedcba400800120016002000793.90791.74793.75797.642029.792028.012033.171. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Intel Open Image Denoise

Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.0Run: RTLightmap.hdr.4096x4096 - Device: CPU-Onlygfedcba0.19580.39160.58740.78320.9790.340.340.340.340.870.860.86

Intel Open Image Denoise

Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.0Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Onlygfedcba0.4140.8281.2421.6562.070.720.720.720.721.821.841.84

OSPRay

Benchmark: gravity_spheres_volume/dim_512/scivis/real_time

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: gravity_spheres_volume/dim_512/scivis/real_timegfedcba481216205.477255.452275.461535.4532913.8317013.7666013.87390

Intel Open Image Denoise

Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.0Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Onlygfedcba0.41180.82361.23541.64722.0590.720.720.720.721.831.831.83

OSPRay

Benchmark: gravity_spheres_volume/dim_512/ao/real_time

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: gravity_spheres_volume/dim_512/ao/real_timegfedcba481216205.622785.614545.620405.6074714.1399014.1783014.23690

Embree

Binary: Pathtracer ISPC - Model: Crown

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer ISPC - Model: Crowngfedcba132639526522.7722.6622.5722.5956.8156.4656.09MIN: 22.57 / MAX: 23.16MIN: 22.45 / MAX: 22.99MIN: 22.39 / MAX: 22.93MIN: 22.39 / MAX: 22.98MIN: 55.27 / MAX: 59.91MIN: 54.53 / MAX: 59.89MIN: 54.05 / MAX: 59.82

SPECFEM3D

Model: Mount St. Helens

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Mount St. Helensgfedcba71421283527.7026.8726.8026.7411.3311.3211.021. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

Liquid-DSP

Threads: 96 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 96 - Buffer Length: 256 - Filter Length: 512gfedcba150M300M450M600M750M2865300002859200002858800002862500007150300007181400007116400001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OSPRay

Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_timegfedcba481216206.600856.595636.582706.5874516.5350016.4365016.34680

SPECFEM3D

Model: Homogeneous Halfspace

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Homogeneous Halfspacegfedcba81624324035.3835.5435.0335.5714.8114.4615.111. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16-INT8 - Device: CPUgfedcba60012001800240030001175.581180.851174.601175.672881.142880.582873.241. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Embree

Binary: Pathtracer - Model: Asian Dragon

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer - Model: Asian Dragongfedcba132639526524.8224.7024.7324.6959.7959.9160.14MIN: 24.74 / MAX: 25MIN: 24.63 / MAX: 24.84MIN: 24.67 / MAX: 24.86MIN: 24.62 / MAX: 24.84MIN: 58.46 / MAX: 62.03MIN: 58.66 / MAX: 61.96MIN: 58.97 / MAX: 62

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer - Model: Asian Dragon Objgfedcba122436486022.1922.1522.1622.2653.6953.8153.57MIN: 22.12 / MAX: 22.33MIN: 22.07 / MAX: 22.32MIN: 22.08 / MAX: 22.35MIN: 22.18 / MAX: 22.42MIN: 52.63 / MAX: 55.24MIN: 52.72 / MAX: 55.86MIN: 52.17 / MAX: 55.38

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Vehicle Bike Detection FP16 - Device: CPUgfedcba50010001500200025001031.601041.871028.641036.992455.512450.262454.091. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer ISPC - Model: Asian Dragon Objgfedcba132639526523.8823.9423.9423.8756.9356.6956.49MIN: 23.79 / MAX: 24.08MIN: 23.84 / MAX: 24.16MIN: 23.84 / MAX: 24.18MIN: 23.78 / MAX: 24.08MIN: 55.56 / MAX: 59.67MIN: 55.42 / MAX: 58.97MIN: 55.29 / MAX: 58.38

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer ISPC - Model: Asian Dragongfedcba153045607528.4828.3228.3128.3667.5067.2067.34MIN: 28.37 / MAX: 28.69MIN: 28.23 / MAX: 28.55MIN: 28.21 / MAX: 28.56MIN: 28.26 / MAX: 28.59MIN: 65.64 / MAX: 71.17MIN: 65.48 / MAX: 70.41MIN: 65.61 / MAX: 70.54

SPECFEM3D

Model: Water-layered Halfspace

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Water-layered Halfspacegfedcba142842567062.8161.2862.3362.4427.0629.4626.991. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16 - Device: CPUgfedcba130026003900520065002557.662539.972562.542564.785840.535836.275882.911. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16-INT8 - Device: CPUgfedcba2004006008001000372.26369.26373.64370.57849.30854.51842.911. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

SPECFEM3D

Model: Tomographic Model

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Tomographic Modelgfedcba71421283527.7526.9727.4627.3312.0412.1012.311. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

Liquid-DSP

Threads: 96 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 96 - Buffer Length: 256 - Filter Length: 57gfedcba600M1200M1800M2400M3000M11182000001120500000111780000011208000002564900000257110000025598000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16 - Device: CPUgfedcba160320480640800341.36343.49342.81344.67757.38750.49748.441. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: File Status - Threads: 50 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 50 - Files: 100000gfedcba200K400K600K800K1000K561798709220389105632911657895862069529101

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 64 - Buffer Length: 256 - Filter Length: 512gfedcba130M260M390M520M650M2817300002830300002818300002829200006226300006109500006225600001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 64 - Buffer Length: 256 - Filter Length: 32gfedcba500M1000M1500M2000M2500M10562000001057100000105750000010595000002206800000221210000022077000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Timed Linux Kernel Compilation

Build: defconfig

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: defconfiggfedcba122436486055.1755.1555.0955.1727.4127.2427.35

Apache Hadoop

Operation: File Status - Threads: 100 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 100 - Files: 1000000gfedcba400K800K1200K1600K2000K2049180196463723562760060118939391619701886792

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection FP16 - Device: CPUgfedcba160320480640800759.92760.57761.16761.59393.37393.23393.60MIN: 737.63 / MAX: 771.07MIN: 741.4 / MAX: 770.88MIN: 741.99 / MAX: 776.56MIN: 738.34 / MAX: 772.36MIN: 362.57 / MAX: 433.51MIN: 360.87 / MAX: 433.13MIN: 363.29 / MAX: 431.611. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Remhos

Test: Sample Remap Example

OpenBenchmarking.orgSeconds, Fewer Is BetterRemhos 1.0Test: Sample Remap Examplegfedcba71421283530.7530.7330.8530.7616.2416.7916.351. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection FP16-INT8 - Device: CPUgfedcba90180270360450398.13399.24398.91398.52213.79213.62213.94MIN: 379.09 / MAX: 404.71MIN: 387.9 / MAX: 408.93MIN: 386.2 / MAX: 407.29MIN: 382.1 / MAX: 404.98MIN: 197.29 / MAX: 236.32MIN: 197.2 / MAX: 235.23MIN: 201.64 / MAX: 242.711. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 64 - Buffer Length: 256 - Filter Length: 57gfedcba400M800M1200M1600M2000M10993000001094600000109540000010933000002010300000200190000019944000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Apache Hadoop

Operation: Open - Threads: 100 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 100 - Files: 100000gfedcba110K220K330K440K550K460829523560294985529101403226404858420168

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Detection FP32 - Device: CPUgfedcba2040608010074.5874.8774.5474.8142.1942.0942.24MIN: 67.63 / MAX: 78.73MIN: 66.72 / MAX: 80.96MIN: 65.97 / MAX: 82.9MIN: 66.88 / MAX: 80.7MIN: 36.21 / MAX: 65.64MIN: 37.13 / MAX: 58.71MIN: 36.59 / MAX: 61.561. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Detection FP16 - Device: CPUgfedcba2040608010074.7174.4374.5074.7142.4342.2042.44MIN: 66.29 / MAX: 79.68MIN: 65.68 / MAX: 83.49MIN: 66.5 / MAX: 80.32MIN: 66.12 / MAX: 81.09MIN: 36.31 / MAX: 62.36MIN: 36.84 / MAX: 61.97MIN: 36.14 / MAX: 61.981. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Machine Translation EN To DE FP16 - Device: CPUgfedcba142842567064.7764.3164.6864.4137.7937.7937.80MIN: 55.8 / MAX: 69.46MIN: 50.85 / MAX: 70.77MIN: 38.02 / MAX: 72.52MIN: 37.44 / MAX: 73.04MIN: 33.29 / MAX: 54.88MIN: 32.97 / MAX: 53.7MIN: 33.35 / MAX: 56.451. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16 - Device: CPUgfedcba369121510.0610.0910.0610.015.905.915.89MIN: 5.2 / MAX: 19.38MIN: 5.4 / MAX: 19.17MIN: 5.29 / MAX: 19.07MIN: 5.7 / MAX: 19.52MIN: 4.83 / MAX: 13.4MIN: 4.84 / MAX: 12.9MIN: 4.67 / MAX: 18.41. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: Create - Threads: 100 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 100 - Files: 100000gfedcba13K26K39K52K65K58928593825882457971350753742540733

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16-INT8 - Device: CPUgfedcba2468106.796.766.806.794.164.164.17MIN: 3.79 / MAX: 15.41MIN: 4.04 / MAX: 15.47MIN: 4.04 / MAX: 15.37MIN: 3.8 / MAX: 15.48MIN: 3.43 / MAX: 10.26MIN: 3.42 / MAX: 11.2MIN: 3.39 / MAX: 10.071. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: Create - Threads: 100 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 100 - Files: 1000000gfedcba15K30K45K60K75K70922705377005771296440014443746145

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Vehicle Bike Detection FP16 - Device: CPUgfedcba2468107.747.677.777.704.884.894.88MIN: 6.06 / MAX: 12.66MIN: 5.32 / MAX: 16.6MIN: 5.42 / MAX: 16.35MIN: 5.51 / MAX: 16.06MIN: 3.9 / MAX: 14.94MIN: 3.93 / MAX: 13.44MIN: 3.95 / MAX: 16.051. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: File Status - Threads: 100 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 100 - Files: 100000gfedcba160K320K480K640K800K487805478469613497591716729927458716515464

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 32 - Buffer Length: 256 - Filter Length: 512gfedcba90M180M270M360M450M2740700002733900002734800002737600004244000004296200004258100001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16 - Device: CPUgfedcba0.70651.4132.11952.8263.53253.123.143.113.112.052.052.03MIN: 1.88 / MAX: 11.92MIN: 1.93 / MAX: 11.65MIN: 1.93 / MAX: 9.72MIN: 1.94 / MAX: 11.57MIN: 1.62 / MAX: 6.96MIN: 1.6 / MAX: 7MIN: 1.66 / MAX: 7.511. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16-INT8 - Device: CPUgfedcba51015202521.4721.6521.4021.5714.1214.0314.23MIN: 17.62 / MAX: 28.13MIN: 19.48 / MAX: 24.27MIN: 19.07 / MAX: 25.3MIN: 19.5 / MAX: 24.76MIN: 11.51 / MAX: 26.04MIN: 11.59 / MAX: 26.04MIN: 11.51 / MAX: 25.861. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

TiDB Community Server

Test: oltp_read_write - Threads: 128

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 128gfedba20K40K60K80K100K599446031060145597278909985757

TiDB Community Server

Test: oltp_read_write - Threads: 64

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 64gfedcba20K40K60K80K100K55301549565389355334784698018379090

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16 - Device: CPUgfedcba61218243023.4223.2823.3223.2015.8315.9816.02MIN: 20.46 / MAX: 32.43MIN: 15.73 / MAX: 30.77MIN: 19.49 / MAX: 30.99MIN: 15.1 / MAX: 31.6MIN: 12.38 / MAX: 32.97MIN: 12.74 / MAX: 33.34MIN: 12.5 / MAX: 33.941. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: Create - Threads: 50 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 50 - Files: 100000gfedcba13K26K39K52K65K60680583435861758617439374128843649

Apache Hadoop

Operation: Open - Threads: 50 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 50 - Files: 100000gfedcba120K240K360K480K600K546448578035552486578035401606469484460829

Apache Hadoop

Operation: Delete - Threads: 100 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Delete - Threads: 100 - Files: 100000gfedcba20K40K60K80K100K1025649940498039105708734759082787566

OSPRay

Benchmark: particle_volume/pathtracer/real_time

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: particle_volume/pathtracer/real_timegfedcba50100150200250151.68151.78151.51151.91214.14214.07215.10

Apache Hadoop

Operation: Delete - Threads: 50 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Delete - Threads: 50 - Files: 100000gfedcba20K40K60K80K100K10395096993100604101010905807380191075

Apache Hadoop

Operation: Create - Threads: 50 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 50 - Files: 1000000gfedcba16K32K48K64K80K72706699207089772134522605211953665

Apache Cassandra

Test: Writes

OpenBenchmarking.orgOp/s, More Is BetterApache Cassandra 4.1.3Test: Writesgfedcba60K120K180K240K300K197092196287195798197866270480256661248095

TiDB Community Server

Test: oltp_point_select - Threads: 1

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 1fedcba13002600390052006500595459765898447144054331

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 8 - Input: Bosphorus 4Kgfedcba2040608010067.8167.3967.7266.9990.4291.3290.811. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

TiDB Community Server

Test: oltp_read_write - Threads: 32

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 32gfedcba13K26K39K52K65K46993471414673746977596306152058974

Apache Hadoop

Operation: Delete - Threads: 100 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Delete - Threads: 100 - Files: 1000000gfedcba20K40K60K80K100K113895110803113225112613970318671590114

TiDB Community Server

Test: oltp_update_non_index - Threads: 1

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 1gfedcba4008001200160020001705169717081693138113121328

TiDB Community Server

Test: oltp_read_write - Threads: 1

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 1gfecba7001400210028003500319532183209248525102540

Apache Hadoop

Operation: Rename - Threads: 100 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Rename - Threads: 100 - Files: 1000000gfedcba20K40K60K80K100K85763858158436081208668277212973078

TiDB Community Server

Test: oltp_update_non_index - Threads: 128

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 128gfeca11K22K33K44K55K4169541424421385286551105

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 4 - Input: Bosphorus 4Kgfedcba1.17072.34143.51214.68285.85354.1434.1384.1144.1075.0495.1495.2031. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Hadoop

Operation: Delete - Threads: 50 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Delete - Threads: 50 - Files: 1000000gfedcba20K40K60K80K100K110828111198113327111012901479731498932

TiDB Community Server

Test: oltp_update_index - Threads: 1

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 1gfedca30060090012001500148114831490147911891212

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 12 - Input: Bosphorus 1080pgfedcba110220330440550528.53521.52525.17526.22431.90427.69422.991. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Hadoop

Operation: Rename - Threads: 100 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Rename - Threads: 100 - Files: 100000gfedcba20K40K60K80K100K80386794918382282102671596934875529

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 2 - Buffer Length: 256 - Filter Length: 512gfedcba6M12M18M24M30M227270002519900025207000246270002822700027736000279010001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_point_select - Threads: 128

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 128fedcba30K60K90K120K150K130389129904129492149962159728159242

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 32 - Buffer Length: 256 - Filter Length: 57gfedcba300M600M900M1200M1500M10334000001024600000103200000010350000001254800000121420000011921000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_update_non_index - Threads: 64

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 64gfedcba9K18K27K36K45K34107344703388134224391063975941281

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 8 - Input: Bosphorus 1080pgfedcba306090120150118.48118.49119.31118.95143.55138.34141.221. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 13 - Input: Bosphorus 1080pgfedcba130260390520650586.75585.37597.01604.99516.91542.61510.361. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Hadoop

Operation: Rename - Threads: 50 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Rename - Threads: 50 - Files: 1000000gfedcba20K40K60K80K100K84810825018404183921746387167973239

nekRS

Input: TurboPipe Periodic

OpenBenchmarking.orgflops/rank, More Is BetternekRS 23.0Input: TurboPipe Periodicgfedcba2000M4000M6000M8000M10000M79649100007955790000793101000079345700006754170000675736000067677100001. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 4 - Input: Bosphorus 1080pgfedcba369121511.0210.7410.9810.9112.6212.5912.481. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Hadoop

Operation: Rename - Threads: 50 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Rename - Threads: 50 - Files: 100000gfedcba20K40K60K80K100K82237816338223782372771017304670522

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 1 - Buffer Length: 256 - Filter Length: 512gfedcba3M6M9M12M15M122560001268100012366000126830001422500014021000139090001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_update_index - Threads: 64

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 64fedcb5K10K15K20K25K2106721271211082332424371

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Streamgfedcba81624324031.0631.0330.9931.0535.6835.6435.63

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 2 - Buffer Length: 256 - Filter Length: 32gfedcba17M34M51M68M85M686780006886100068846000670540007692400077019000771810001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPUgfed0.25750.5150.77251.031.28751.127231.001361.144321.03749MIN: 0.93MIN: 0.92MIN: 1.07MIN: 0.921. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 32 - Buffer Length: 256 - Filter Length: 32gfedcba300M600M900M1200M1500M10471000001041900000104660000010471000001184800000119030000011835000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 8 - Buffer Length: 256 - Filter Length: 512gfedcba20M40M60M80M100M1001700009944100097005000995940001091400001080800001098700001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 2 - Buffer Length: 256 - Filter Length: 57gfedcba30M60M90M120M150M1048000001057400001054800001056500001185500001140100001174900001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_point_select - Threads: 64

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 64gfedba30K60K90K120K150K118549119092118657115675130802127567

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 1 - Buffer Length: 256 - Filter Length: 57gfedcba13M26M39M52M65M528540005287900052827000526650005751900059296000594010001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

nekRS

Input: Kershaw

OpenBenchmarking.orgflops/rank, More Is BetternekRS 23.0Input: Kershawgfedcba2000M4000M6000M8000M10000M10500600000997645000010264000000103189000001082670000011240300000111069000001. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 4 - Buffer Length: 256 - Filter Length: 512gfedcba12M24M36M48M60M495560004997700050380000502580005516500055588000529110001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 1 - Buffer Length: 256 - Filter Length: 32gfedcba8M16M24M32M40M352360003527100035315000352280003945300039486000394990001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_update_index - Threads: 128

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 128gfecba6K12K18K24K30K245742483024611265462746427087

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 16 - Buffer Length: 256 - Filter Length: 512gfedcba50M100M150M200M250M1946700001945000001960400001938500002149100002161500002160800001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 8 - Buffer Length: 256 - Filter Length: 32gfedcba70M140M210M280M350M2774100002763900002777800002780300003067600003051100003075400001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 16 - Buffer Length: 256 - Filter Length: 32gfedcba130M260M390M520M650M5430500005450200005451400005453600006036500006024700005942300001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 4 - Buffer Length: 256 - Filter Length: 32gfedcba30M60M90M120M150M1384600001385800001386200001386000001536700001536900001538500001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUgfedcba0.12150.2430.36450.4860.60750.490.490.490.490.540.540.54MIN: 0.3 / MAX: 8.84MIN: 0.3 / MAX: 8.2MIN: 0.3 / MAX: 9.07MIN: 0.3 / MAX: 9.28MIN: 0.45 / MAX: 5.03MIN: 0.45 / MAX: 7.81MIN: 0.45 / MAX: 7.641. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

TiDB Community Server

Test: oltp_update_non_index - Threads: 32

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 32fedba6K12K18K24K30K2669526285262732891428735

TiDB Community Server

Test: oltp_point_select - Threads: 32

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 32gfedba20K40K60K80K100K96840973689690798149106180104627

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16-INT8 - Device: CPUgfedcba91827364536.9837.0136.9840.4038.7538.6638.50MIN: 32.61 / MAX: 41.91MIN: 32.25 / MAX: 43.6MIN: 32.02 / MAX: 44.78MIN: 26.93 / MAX: 74.83MIN: 37.46 / MAX: 43.52MIN: 37.22 / MAX: 43.52MIN: 36.77 / MAX: 44.231. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamgfedcba4812162015.6915.7215.6215.7216.8917.0716.91

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16-INT8 - Device: CPUgfedcba1.09352.1873.28054.3745.46754.524.504.514.514.864.854.86MIN: 2.77 / MAX: 13.57MIN: 2.98 / MAX: 13.86MIN: 2.96 / MAX: 16.06MIN: 2.98 / MAX: 13.05MIN: 4.34 / MAX: 12.27MIN: 4.25 / MAX: 12.86MIN: 4.23 / MAX: 12.811. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Streamgfedcba1.12412.24823.37234.49645.62054.97874.98774.98594.99604.63484.64764.6508

TiDB Community Server

Test: oltp_point_select - Threads: 16

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 16gfecb15K30K45K60K75K6992370105702506540667515

TiDB Community Server

Test: oltp_update_index - Threads: 32

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 32gedcba4K8K12K16K20K171351711717612175651781718361

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Streamgfedcba80160240320400325.51324.96325.74325.88347.37347.22347.66

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPUgfed0.13780.27560.41340.55120.6890.6123200.6008340.5757940.603950MIN: 0.53MIN: 0.53MIN: 0.52MIN: 0.531. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

Test: oltp_read_write - Threads: 16

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 16gfedcba8K16K24K32K40K36088361253678436480373683695038331

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Streamgfedcba306090120150112.48112.41112.06112.25118.78118.95118.75

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 3D - Data Type: f32 - Engine: CPUgfed0.28810.57620.86431.15241.44051.279181.206531.280431.25758MIN: 1.24MIN: 1.18MIN: 1.24MIN: 1.211. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16 - Device: CPUgfedcba4812162015.3715.3815.3615.3616.0216.0216.26MIN: 7.99 / MAX: 23.98MIN: 7.99 / MAX: 24MIN: 8.02 / MAX: 23.81MIN: 8.08 / MAX: 24.34MIN: 14.63 / MAX: 33.79MIN: 14.41 / MAX: 30.55MIN: 14.71 / MAX: 28.141. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 8 - Buffer Length: 256 - Filter Length: 57gfedcba80M160M240M320M400M3578100003504500003579900003633100003669900003669300003694300001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Streamgfedcba306090120150144.11143.69144.10143.76145.26150.61150.59

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Streamgfedcba110220330440550495.60494.22494.26493.60507.48487.36485.72

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16-INT8 - Device: CPUgfedcba2468107.967.977.967.938.248.278.28MIN: 4.19 / MAX: 14.2MIN: 4.37 / MAX: 16.86MIN: 4.19 / MAX: 16.59MIN: 4.2 / MAX: 16.92MIN: 7.62 / MAX: 23.32MIN: 7.37 / MAX: 25.18MIN: 7.44 / MAX: 23.351. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16 - Device: CPUgfedcba71421283529.7229.9530.1030.0230.8931.0030.72MIN: 19.46 / MAX: 38.99MIN: 19.01 / MAX: 38.08MIN: 22.61 / MAX: 39.15MIN: 18.78 / MAX: 38.72MIN: 29.48 / MAX: 36.29MIN: 29.59 / MAX: 36.33MIN: 29.51 / MAX: 35.071. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 4 - Buffer Length: 256 - Filter Length: 57gfedcba40M80M120M160M200M1907500001898800001912300001889300001945100001965900001962200001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 12 - Input: Bosphorus 4Kgfedcba4080120160200160.32161.85162.61163.19163.06166.38163.461. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

TiDB Community Server

Test: oltp_update_non_index - Threads: 16

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 16gedba4K8K12K16K20K1873518557185631806818095

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 16 - Buffer Length: 256 - Filter Length: 57gfedcba150M300M450M600M750M6820700006933400006929200006891500006749300006927600006997400001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 13 - Input: Bosphorus 4Kgfedcba4080120160200161.32160.80162.05161.85161.50166.69163.011. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPUgfed0.23880.47760.71640.95521.1941.045671.061441.054251.02875MIN: 0.98MIN: 0.98MIN: 0.97MIN: 0.961. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUgfedcba0.07880.15760.23640.31520.3940.350.350.350.350.340.340.34MIN: 0.23 / MAX: 8.63MIN: 0.23 / MAX: 9.15MIN: 0.23 / MAX: 8.84MIN: 0.23 / MAX: 9.09MIN: 0.29 / MAX: 7.09MIN: 0.29 / MAX: 10.87MIN: 0.29 / MAX: 7.331. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUgfed0.57721.15441.73162.30882.8862.514412.497142.565222.49408MIN: 2.3MIN: 2.26MIN: 2.32MIN: 2.31. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Kripke

OpenBenchmarking.orgThroughput FoM, More Is BetterKripke 1.2.6gfed50M100M150M200M250M2371757002365910002362439002409945001. (CXX) g++ options: -O3 -fopenmp -ldl

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400

OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400gfed2040608010097.5397.9999.4298.981. (CXX) g++ options: -O3 -fopenmp

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streamgfedcba2040608010073.1973.2273.2673.3174.5074.5674.32

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUgfed2004006008001000837.60849.34851.66849.16MIN: 796.61MIN: 805.8MIN: 809.45MIN: 806.441. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUgfed2004006008001000848.03851.49849.71838.52MIN: 807.34MIN: 807.97MIN: 805.98MIN: 796.31. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUgfed0.35390.70781.06171.41561.76951.551181.572821.549111.55824MIN: 1.52MIN: 1.53MIN: 1.51MIN: 1.511. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUgfed0.1480.2960.4440.5920.740.6477000.6531820.6576100.652259MIN: 0.57MIN: 0.57MIN: 0.57MIN: 0.571. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Asian Dragon Objgfed61218243023.7123.5023.5323.35MIN: 23.61 / MAX: 23.93MIN: 23.4 / MAX: 23.74MIN: 23.43 / MAX: 23.73MIN: 23.26 / MAX: 23.57

TiDB Community Server

Test: oltp_update_index - Threads: 16

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 16gfedca3K6K9K12K15K126271269212567126221268112558

Embree

Binary: Pathtracer - Model: Crown

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Crowngfed51015202521.8321.7721.9921.89MIN: 21.69 / MAX: 22.17MIN: 21.63 / MAX: 22.18MIN: 21.84 / MAX: 22.32MIN: 21.74 / MAX: 22.23

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUgfed0.14260.28520.42780.57040.7130.6291080.6303250.6339750.628236MIN: 0.6MIN: 0.6MIN: 0.6MIN: 0.61. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUgfed0.19140.38280.57420.76560.9570.8434920.8506910.8444340.847805MIN: 0.83MIN: 0.83MIN: 0.83MIN: 0.831. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Streamgfedcba20406080100109.90110.00109.97110.11109.58109.23109.80

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUgfed2004006008001000847.42845.31841.08847.38MIN: 806.72MIN: 803.78MIN: 798.46MIN: 806.331. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUgfed0.86491.72982.59473.45964.32453.823813.818233.844213.81576MIN: 3.29MIN: 3.25MIN: 3.27MIN: 3.261. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Streamgfedcba112233445549.0349.0749.0149.0949.0249.1149.37

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Streamgfedcba81624324033.3733.2833.2633.2233.4633.3833.34

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPUgfed0.480.961.441.922.42.118132.130622.125702.13332MIN: 1.99MIN: 1.97MIN: 2.01MIN: 21. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streamgfedcba112233445548.9849.0749.0648.8749.2148.9749.01

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUgfed4008001200160020001631.991636.441639.361642.51MIN: 1581.62MIN: 1585.81MIN: 1581.93MIN: 1593.161. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Asian Dragongfed71421283527.9127.8327.8327.74MIN: 27.81 / MAX: 28.17MIN: 27.73 / MAX: 28.13MIN: 27.72 / MAX: 28.1MIN: 27.64 / MAX: 27.98

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240

OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240gfed0.37280.74561.11841.49121.8641.6481.6571.6541.6571. (CXX) g++ options: -O3 -fopenmp

Embree

Binary: Pathtracer - Model: Asian Dragon

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Asian Dragongfed61218243024.9624.8924.8324.85MIN: 24.9 / MAX: 25.13MIN: 24.81 / MAX: 25.06MIN: 24.76 / MAX: 24.96MIN: 24.78 / MAX: 25

OpenVKL

Benchmark: vklBenchmarkCPU Scalar

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 2.0.0Benchmark: vklBenchmarkCPU Scalargfed4080120160200191191190191MIN: 13 / MAX: 3483MIN: 13 / MAX: 3484MIN: 13 / MAX: 3484MIN: 13 / MAX: 3471

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streamgfedcba130260390520650608.72606.79606.76606.58605.88606.67605.76

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamgfedcba130260390520650607.16607.82607.91606.10605.92605.73605.04

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPUgfed0.30190.60380.90571.20761.50951.335641.341831.338611.33789MIN: 1.31MIN: 1.31MIN: 1.31MIN: 1.311. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Crown

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Crowngfed51015202522.4222.4422.3422.39MIN: 22.22 / MAX: 22.85MIN: 22.25 / MAX: 22.78MIN: 22.15 / MAX: 22.75MIN: 22.2 / MAX: 22.85

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Asian Dragon Objgfed51015202522.2622.2722.2922.35MIN: 22.18 / MAX: 22.43MIN: 22.2 / MAX: 22.44MIN: 22.22 / MAX: 22.46MIN: 22.28 / MAX: 22.5

OpenVKL

Benchmark: vklBenchmarkCPU ISPC

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 2.0.0Benchmark: vklBenchmarkCPU ISPCgfed110220330440550489488487487MIN: 36 / MAX: 6969MIN: 36 / MAX: 6952MIN: 36 / MAX: 6956MIN: 36 / MAX: 6949

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200

OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200gfed91827364537.9538.0238.0738.111. (CXX) g++ options: -O3 -fopenmp

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUgfed4008001200160020001637.371636.761641.001641.92MIN: 1584.58MIN: 1585.98MIN: 1595.55MIN: 1584.811. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPUgfed0.68931.37862.06792.75723.44653.054583.056743.063703.05991MIN: 2.97MIN: 2.97MIN: 2.97MIN: 2.961. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPUgfed0.43150.8631.29451.7262.15751.912741.914221.917811.91374MIN: 1.88MIN: 1.88MIN: 1.88MIN: 1.881. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamgfedcba20406080100110.98110.89111.09111.11111.03110.92111.01

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUgfed0.76151.5232.28453.0463.80753.381563.379563.384363.37782MIN: 3.33MIN: 3.33MIN: 3.33MIN: 3.331. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUgfed4008001200160020001641.401642.351643.971643.99MIN: 1589.91MIN: 1586.17MIN: 1590.89MIN: 1588.031. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Intel Open Image Denoise

Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RTLightmap.hdr.4096x4096 - Device: CPU-Onlygfed0.07650.1530.22950.3060.38250.340.340.340.34

Intel Open Image Denoise

Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Onlygfed0.1620.3240.4860.6480.810.720.720.720.72

Intel Open Image Denoise

Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Onlygfed0.1620.3240.4860.6480.810.720.720.720.72


Phoronix Test Suite v10.8.5