extra tests2

Tests for a future article. AMD EPYC 9124 16-Core testing with a Supermicro H13SSW (1.1 BIOS) and astdrmfb on AlmaLinux 9.2 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2310228-NE-EXTRATEST37&export=txt&sor&grs.

extra tests2ProcessorMotherboardMemoryDiskGraphicsOSKernelCompilerFile-SystemScreen Resolutionabcdefg2 x AMD EPYC 9254 24-Core @ 2.90GHz (48 Cores / 96 Threads)Supermicro H13DSH (1.5 BIOS)24 x 32 GB DDR5-4800MT/s Samsung M321R4GA3BB6-CQKET2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07astdrmfbAlmaLinux 9.25.14.0-284.25.1.el9_2.x86_64 (x86_64)GCC 11.3.1 20221121ext41024x768AMD EPYC 9124 16-Core @ 3.00GHz (16 Cores / 32 Threads)Supermicro H13SSW (1.1 BIOS)12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123NOpenBenchmarking.orgKernel Details- Transparent Huge Pages: alwaysCompiler Details- --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl Processor Details- a: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e- b: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e- c: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e- d: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111- e: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111- f: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111- g: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111Java Details- OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)Python Details- Python 3.9.16Security Details- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

extra tests2hadoop: File Status - 50 - 1000000hadoop: Open - 100 - 1000000hadoop: Open - 50 - 1000000deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Streamopenvino: Handwritten English Recognition FP16-INT8 - CPUdeepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Streamdeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamdeepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamdeepsparse: ResNet-50, Baseline - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streamopenvino: Handwritten English Recognition FP16 - CPUopenvino: Face Detection FP16 - CPUopenvino: Weld Porosity Detection FP16-INT8 - CPUospray: particle_volume/ao/real_timeospray: particle_volume/scivis/real_timeopenvino: Weld Porosity Detection FP16 - CPUdeepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Streamliquid-dsp: 96 - 256 - 32deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streamopenvino: Face Detection FP16-INT8 - CPUdeepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Streamblender: Pabellon Barcelona - CPU-Onlyopenvino: Face Detection Retail FP16-INT8 - CPUblender: Classroom - CPU-Onlyblender: BMW27 - CPU-Onlyopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUblender: Fishy Cat - CPU-Onlyopenvino: Age Gender Recognition Retail 0013 FP16 - CPUopenvino: Person Detection FP32 - CPUspecfem3d: Layered Halfspaceopenvino: Person Detection FP16 - CPUblender: Barbershop - CPU-Onlydeepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Streambrl-cad: VGR Performance Metricembree: Pathtracer - Crownopenvino: Machine Translation EN To DE FP16 - CPUopenvino: Vehicle Detection FP16 - CPUoidn: RTLightmap.hdr.4096x4096 - CPU-Onlyoidn: RT.ldr_alb_nrm.3840x2160 - CPU-Onlyospray: gravity_spheres_volume/dim_512/scivis/real_timeoidn: RT.hdr_alb_nrm.3840x2160 - CPU-Onlyospray: gravity_spheres_volume/dim_512/ao/real_timeembree: Pathtracer ISPC - Crownspecfem3d: Mount St. Helensliquid-dsp: 96 - 256 - 512ospray: gravity_spheres_volume/dim_512/pathtracer/real_timespecfem3d: Homogeneous Halfspaceopenvino: Vehicle Detection FP16-INT8 - CPUembree: Pathtracer - Asian Dragonembree: Pathtracer - Asian Dragon Objopenvino: Person Vehicle Bike Detection FP16 - CPUembree: Pathtracer ISPC - Asian Dragon Objembree: Pathtracer ISPC - Asian Dragonspecfem3d: Water-layered Halfspaceopenvino: Face Detection Retail FP16 - CPUopenvino: Road Segmentation ADAS FP16-INT8 - CPUspecfem3d: Tomographic Modelliquid-dsp: 96 - 256 - 57openvino: Road Segmentation ADAS FP16 - CPUhadoop: File Status - 50 - 100000liquid-dsp: 64 - 256 - 512liquid-dsp: 64 - 256 - 32build-linux-kernel: defconfighadoop: File Status - 100 - 1000000openvino: Face Detection FP16 - CPUremhos: Sample Remap Exampleopenvino: Face Detection FP16-INT8 - CPUliquid-dsp: 64 - 256 - 57hadoop: Open - 100 - 100000openvino: Person Detection FP32 - CPUopenvino: Person Detection FP16 - CPUopenvino: Machine Translation EN To DE FP16 - CPUopenvino: Vehicle Detection FP16 - CPUhadoop: Create - 100 - 100000openvino: Vehicle Detection FP16-INT8 - CPUhadoop: Create - 100 - 1000000openvino: Person Vehicle Bike Detection FP16 - CPUhadoop: File Status - 100 - 100000liquid-dsp: 32 - 256 - 512openvino: Face Detection Retail FP16 - CPUopenvino: Road Segmentation ADAS FP16-INT8 - CPUtidb: oltp_read_write - 128tidb: oltp_read_write - 64openvino: Road Segmentation ADAS FP16 - CPUhadoop: Create - 50 - 100000hadoop: Open - 50 - 100000hadoop: Delete - 100 - 100000ospray: particle_volume/pathtracer/real_timehadoop: Delete - 50 - 100000hadoop: Create - 50 - 1000000cassandra: Writestidb: oltp_point_select - 1svt-av1: Preset 8 - Bosphorus 4Ktidb: oltp_read_write - 32hadoop: Delete - 100 - 1000000tidb: oltp_update_non_index - 1tidb: oltp_read_write - 1hadoop: Rename - 100 - 1000000tidb: oltp_update_non_index - 128svt-av1: Preset 4 - Bosphorus 4Khadoop: Delete - 50 - 1000000tidb: oltp_update_index - 1svt-av1: Preset 12 - Bosphorus 1080phadoop: Rename - 100 - 100000liquid-dsp: 2 - 256 - 512tidb: oltp_point_select - 128liquid-dsp: 32 - 256 - 57tidb: oltp_update_non_index - 64svt-av1: Preset 8 - Bosphorus 1080psvt-av1: Preset 13 - Bosphorus 1080phadoop: Rename - 50 - 1000000nekrs: TurboPipe Periodicsvt-av1: Preset 4 - Bosphorus 1080phadoop: Rename - 50 - 100000liquid-dsp: 1 - 256 - 512tidb: oltp_update_index - 64deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Streamliquid-dsp: 2 - 256 - 32onednn: IP Shapes 1D - bf16bf16bf16 - CPUliquid-dsp: 32 - 256 - 32liquid-dsp: 8 - 256 - 512liquid-dsp: 2 - 256 - 57tidb: oltp_point_select - 64liquid-dsp: 1 - 256 - 57nekrs: Kershawliquid-dsp: 4 - 256 - 512liquid-dsp: 1 - 256 - 32tidb: oltp_update_index - 128liquid-dsp: 16 - 256 - 512liquid-dsp: 8 - 256 - 32liquid-dsp: 16 - 256 - 32liquid-dsp: 4 - 256 - 32openvino: Age Gender Recognition Retail 0013 FP16 - CPUtidb: oltp_update_non_index - 32tidb: oltp_point_select - 32openvino: Handwritten English Recognition FP16-INT8 - CPUdeepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Streamopenvino: Face Detection Retail FP16-INT8 - CPUdeepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Streamtidb: oltp_point_select - 16tidb: oltp_update_index - 32deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streamonednn: IP Shapes 3D - u8s8f32 - CPUtidb: oltp_read_write - 16deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Streamonednn: IP Shapes 3D - f32 - CPUopenvino: Weld Porosity Detection FP16 - CPUliquid-dsp: 8 - 256 - 57deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Streamopenvino: Weld Porosity Detection FP16-INT8 - CPUopenvino: Handwritten English Recognition FP16 - CPUliquid-dsp: 4 - 256 - 57svt-av1: Preset 12 - Bosphorus 4Ktidb: oltp_update_non_index - 16liquid-dsp: 16 - 256 - 57svt-av1: Preset 13 - Bosphorus 4Konednn: IP Shapes 3D - bf16bf16bf16 - CPUopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUonednn: IP Shapes 1D - f32 - CPUkripke: easywave: e2Asean Grid + BengkuluSept2007 Source - 2400deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streamonednn: Recurrent Neural Network Inference - u8s8f32 - CPUonednn: Recurrent Neural Network Inference - f32 - CPUonednn: Convolution Batch Shapes Auto - u8s8f32 - CPUonednn: IP Shapes 1D - u8s8f32 - CPUembree: Pathtracer ISPC - Asian Dragon Objtidb: oltp_update_index - 16embree: Pathtracer - Crownonednn: Deconvolution Batch shapes_1d - u8s8f32 - CPUonednn: Deconvolution Batch shapes_3d - u8s8f32 - CPUdeepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Streamonednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPUonednn: Deconvolution Batch shapes_1d - f32 - CPUdeepsparse: ResNet-50, Baseline - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Streamonednn: Convolution Batch Shapes Auto - f32 - CPUdeepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamonednn: Recurrent Neural Network Training - u8s8f32 - CPUembree: Pathtracer ISPC - Asian Dragoneasywave: e2Asean Grid + BengkuluSept2007 Source - 240embree: Pathtracer - Asian Dragonopenvkl: vklBenchmarkCPU Scalardeepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamdeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamonednn: Convolution Batch Shapes Auto - bf16bf16bf16 - CPUembree: Pathtracer ISPC - Crownembree: Pathtracer - Asian Dragon Objopenvkl: vklBenchmarkCPU ISPCeasywave: e2Asean Grid + BengkuluSept2007 Source - 1200onednn: Recurrent Neural Network Training - f32 - CPUonednn: Deconvolution Batch shapes_1d - bf16bf16bf16 - CPUonednn: Deconvolution Batch shapes_3d - bf16bf16bf16 - CPUdeepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streamonednn: Deconvolution Batch shapes_3d - f32 - CPUonednn: Recurrent Neural Network Training - bf16bf16bf16 - CPUoidn: RTLightmap.hdr.4096x4096 - CPU-Onlyoidn: RT.ldr_alb_nrm.3840x2160 - CPU-Onlyoidn: RT.hdr_alb_nrm.3840x2160 - CPU-Onlyopenradioss: Bumper Beamabcdefg217391321533211261265137.01141244.6949.325839.4951218.146439.4391489.1203485.6725215.6383718.9189158.924322.25051560.0330.415776.9415.98615.95282945.26201.3925300580000068.598856.011417.070680.549837.5866.4226.2120606.3833.2286884.64283.9726.885983804282.55254.88672.463577216254.9017317.222033.170.861.8413.87391.8314.236956.087111.02476577571164000016.346815.105117732873.2460.144953.57332454.0956.485367.337826.9850209085882.91842.9112.3129466522559800000748.44529101622560000220770000027.3541886792393.616.346213.94199440000042016842.2442.4437.85.89407334.17461454.885154644258100002.0314.23857577909016.024364946082987566215.0969107553665248095433190.81158974901141328254073078511055.203989321212422.9947552927901000159242119210000041281141.219510.36173239676771000012.477705221390900035.62877181000118350000010987000011749000012756759401000111069000005291100039499000270872160800003075400005942300001538500000.542873510462738.516.90744.864.650818361347.661238331118.750916.26369430000150.5897485.71758.2830.72196220000163.45918095699740000163.0130.3474.321112558109.802749.365333.341249.0092605.758605.0388111.0131194174817382210204085138.83411239.6749.166639.4681219.526339.4539489.4464488.1264215.9254717.9693159.0596321.18291546.0230.445780.4415.978515.98882986.46201.2528299540000068.661756.061403.06580.769849.0766.6426.24120728.2233.1787359.23284.9928.65210863284.22255.3672.373476851755.3925317.282028.010.861.8413.76661.8314.178356.455111.31849570971814000016.436514.460586982880.5859.912453.81352450.2656.690267.195129.4607611975836.27854.5112.1005959472571100000750.49862069610950000221210000027.241161970393.2316.791213.62200190000040485842.0942.237.795.91374254.16444374.894587164296200002.0514.03890998018315.984128846948490827214.0747380152119256661440591.322615208671513122510721295.14997314427.6866934827736000159728121420000039759138.338542.61171679675736000012.59173046140210002437135.642277019000119030000010808000011401000013080259296000112403000005558800039486000274642161500003051100006024700001536900000.542891410618038.6617.06694.854.64766751517817347.218936950118.948216.02366930000150.6055487.35998.2731196590000166.37818068692760000166.6920.3474.5553109.233249.108233.379848.9703606.6693605.7307110.91522842521858746839955153.66441237.2947.153539.4464218.516239.4183487.0522489.1106215.647716.1404164.6131321.50821551.6330.435802.6515.987215.97782987.33201.5402299980000068.628756.021418.904180.419845.2766.7226.12123484.2833.0386789.8284.3127.490850157282.67254.72671.255476252955.4037317.332029.790.871.8213.83171.8314.139956.807811.3273597771503000016.53514.8082733652881.1459.792953.69272455.5156.932767.503827.0602350795840.53849.312.0409178772564900000757.38657895622630000220680000027.4081893939393.3716.243213.79201030000040322642.1942.4337.795.9350754.16440014.887299274244000002.0514.127846915.834393740160673475214.1369058052260270480447190.41759630970311381248566827528655.049901471189431.8956715928227000149962125480000039106143.545516.90674638675417000012.61777101142250002332435.682576924000118480000010914000011855000057519000108267000005516500039453000265462149100003067600006036500001536700000.5438.7516.88634.864.63486540617565347.372837368118.780116.02366990000145.2562507.47868.2430.89194510000163.055674930000161.4950.3474.503112681109.584749.017333.463749.2055605.8765605.9183111.025181818212484392783191599.2079395.6616.164813.069472.45513.1312163.5559162.850271.9189240.552955.6132108.909532.5910.472013.775.574695.570011039.6171.137106520000024.483320.03508.087224.153540.88182.997244958.0790.0332002.62106.971.614294327107.02670.87257.272829806421.4811124.12797.640.340.725.453290.725.6074722.58526.7364141182862500006.5874535.5716849081175.6724.687222.26141036.9923.873328.364362.4417495852564.78370.5727.3309855881120800000344.67632911282920000105950000055.173600601761.5930.761398.52109330000052910174.8174.7164.4110.01579716.79712967.75917162737600003.1121.57597275533423.258617578035105708151.90510101072134197866589866.988469771126131693812084.1071110121479526.2168210224627000129492103500000034224118.946604.98683921793457000010.9182372126830002110831.0525670540001.03749104710000099594000105650000115675526650001031890000050258000352280001938500002780300005453600001386000000.49262739814940.415.72464.514.99617612325.87630.6039536480112.24991.2575815.36363310000143.7643493.5967.9330.02188930000163.18918563689150000161.8541.028750.352.4940824099450098.9873.3058849.163838.5161.558240.65225923.35311262221.89430.6282360.847805110.1114847.383.8157649.086333.22422.1333248.87291642.5127.74381.65724.845191606.5773606.1011.3378922.392222.351748738.1051641.923.059911.91374111.10813.377821643.990.340.720.7232092412048192510041599.1543432.3216.139212.943372.657113.1187162.9298163.136171.9146240.234955.4634109.0938530.9910.472007.535.541075.563531039.8271.2727106510000024.472520511.4098224.13544.18182.5671.4444933.2790.3132032.06107.2470.189028506107.27670.64257.89429612521.4357123.61793.750.340.725.461530.725.620422.569426.7991434462858800006.582735.0301348891174.624.734322.15771028.6423.939328.314162.3251468282562.54373.6427.4598213081117800000342.81389105281830000105750000055.093235627761.1630.845398.91109540000029498574.5474.564.6810.06588246.8700577.776134972734800003.1121.4601455389323.325861755248698039151.50610060470897195798597667.721467371132251708320984360421384.1141133271490525.1738382225207000129904103200000033881119.307597.01184041793101000010.98482237123660002127130.9867688460001.1443210466000009700500010548000011865752827000102640000005038000035315000246111960400002777800005451400001386200000.49262859690736.9815.62454.514.98597025017117325.74160.57579436784112.05741.2804315.36357990000144.1013494.25757.9630.1191230000162.60818557692920000162.0511.054250.352.5652223624390099.41573.2609851.659849.7121.549110.6576123.52831256721.99090.6339750.844434109.9654841.0783.8442149.009433.26252.125749.05981639.3627.82931.65424.8282190606.755607.9131.3386122.342222.291148738.06716413.06371.91781111.08883.384361643.970.340.720.721795332130378112210011600.5275431.9416.130713.085372.573613.0934162.8976162.929471.9401240.164255.5428109.09533.7410.482004.765.57325.555811038.4771.044106530000024.523820.01508.2088223.953548.78181.771.9644968.4390.2631951.64106.7670.542255905107.39667.87257.504629560321.5913124.3791.740.340.725.452270.725.6145422.656626.8731684552859200006.5956335.5350730011180.8524.704722.1491041.8723.935428.323761.2817691242539.97369.2626.9737573951120500000343.49709220283030000105710000055.1481964637760.5730.725399.24109460000052356074.8774.4364.3110.09593826.76705377.674784692733900003.1421.65603105495623.285834357803599404151.789699369920196287595467.393471411108031697321885815414244.1381111981483521.5187949125199000130389102460000034470118.486585.36882501795579000010.73681633126810002106731.0292688610001.001361041900000994410001057400001190925287900099764500004997700035271000248301945000002763900005450200001385800000.49266959736837.0115.7214.54.987770105324.95680.60083436125112.4131.2065315.38350450000143.6922494.22117.9729.95189880000161.847693340000160.7981.061440.352.4971423659100097.98773.2163849.344851.4941.572820.65318223.50421269221.77460.6303250.850691110.0001845.3083.8182349.071233.27512.1306249.07141636.4427.8261.65724.887191606.7852607.81711.3418322.440722.267648838.0151636.763.056741.91422110.88523.379561642.350.340.720.72203666011074206540221602.5221432.216.067213.07372.687913.0596163.2282162.993771.9003239.517655.4264109.2191538.0110.482006.095.575535.565391039.3770.9275106570000024.460720.05509.139224.123533.64183.2972.0145097.9990.6332008.03107.2469.955609165107.04669.09257.280829552221.5847123.41793.90.340.725.477250.725.6227822.774527.6966313712865300006.6008535.3786000211175.5824.819322.19011031.623.879628.479362.8109243762557.66372.2627.7464751621118200000341.36561798281730000105620000055.1722049180759.9230.75398.13109930000046082974.5874.7164.7710.06589286.79709227.744878052740700003.1221.47599445530123.4260680546448102564151.6811039507270619709267.811469931138951705319585763416954.1431108281481528.5338038622727000103340000034107118.481586.74884810796491000011.016822371225600031.0574686780001.12723104710000010017000010480000011854952854000105006000004955600035236000245741946700002774100005430500001384600000.499684036.9815.68924.524.97876992317135325.50750.6123236088112.47791.2791815.37357810000144.1053495.60337.9629.72190750000160.32218735682070000161.3241.045670.352.5144123717570097.52973.1867837.595848.0321.551180.647723.70911262721.83050.6291080.843492109.9037847.4173.8238149.025933.36742.1181348.98371631.9927.911.64824.9619191608.7163607.16281.3356422.421522.255948937.951637.373.054581.91274110.97593.381561641.40.340.720.72OpenBenchmarking.org

Apache Hadoop

Operation: File Status - Threads: 50 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 50 - Files: 1000000agbdfec500K1000K1500K2000K2500K21739132036660194174818181821795332320924284252

Apache Hadoop

Operation: Open - Threads: 100 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 100 - Files: 1000000fdegacb300K600K900K1200K1500K1303781124843912048191107420215332185874173822

Apache Hadoop

Operation: Open - Threads: 50 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 50 - Files: 1000000fabcgde300K600K900K1200K1500K122100111261261020408683995654022278319251004

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Streamcbagfde110022003300440055005153.665138.835137.011602.521600.531599.211599.15

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16-INT8 - Device: CPUabcegfd300600900120015001244.691239.671237.29432.32432.20431.94395.661. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Streamabcdefg112233445549.3349.1747.1516.1616.1416.1316.07

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamabcfgde91827364539.5039.4739.4513.0913.0713.0712.94

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Streambcagefd50100150200250219.53218.52218.1572.6972.6672.5772.46

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streambacdefg91827364539.4539.4439.4213.1313.1213.0913.06

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streambacdgef110220330440550489.45489.12487.05163.56163.23162.93162.90

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Streamcbaegfd110220330440550489.11488.13485.67163.14162.99162.93162.85

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streambcafdeg50100150200250215.93215.65215.6471.9471.9271.9171.90

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Streamabcdefg160320480640800718.92717.97716.14240.55240.23240.16239.52

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Streamcbadfeg4080120160200164.61159.06158.9255.6155.5455.4655.43

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streamacbgefd70140210280350322.25321.51321.18109.22109.09109.09108.91

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16 - Device: CPUacbgfde300600900120015001560.031551.631546.02538.01533.74532.59530.991. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection FP16 - Device: CPUbcagfed71421283530.4430.4330.4110.4810.4810.4710.471. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16-INT8 - Device: CPUcbadegf120024003600480060005802.655780.445776.942013.772007.532006.092004.761. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OSPRay

Benchmark: particle_volume/ao/real_time

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: particle_volume/ao/real_timecabgdfe4812162015.9872015.9860015.978505.575535.574695.573205.54107

OSPRay

Benchmark: particle_volume/scivis/real_time

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: particle_volume/scivis/real_timebcadgef4812162015.9888015.9778015.952805.570015.565395.563535.55581

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16 - Device: CPUcbaedgf60012001800240030002987.332986.462945.261039.821039.611039.371038.471. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Streamcabedfg4080120160200201.54201.39201.2571.2771.1471.0470.93

Liquid-DSP

Threads: 96 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 96 - Buffer Length: 256 - Filter Length: 32acbgfde600M1200M1800M2400M3000M30058000002999800000299540000010657000001065300000106520000010651000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Streambcafdeg153045607568.6668.6368.6024.5224.4824.4724.46

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection FP16-INT8 - Device: CPUbcagdfe132639526556.0656.0256.0120.0520.0320.0120.001. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamcabegfd300600900120015001418.901417.071403.07511.41509.14508.21508.09

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Pabellon Barcelona - Compute: CPU-Onlycabfegd5010015020025080.4180.5480.76223.95224.10224.12224.15

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16-INT8 - Device: CPUbcafedg2K4K6K8K10K9849.079845.279837.583548.783544.183540.883533.641. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Blender

Blend File: Classroom - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Classroom - Compute: CPU-Onlyabcfedg408012016020066.4266.6466.72181.70182.56182.99183.29

Blender

Blend File: BMW27 - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: BMW27 - Compute: CPU-Onlycabefdg163248648026.1226.2026.2471.4471.9672.0072.01

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUcbagfde30K60K90K120K150K123484.28120728.22120606.3845097.9944968.4344958.0744933.271. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Blender

Blend File: Fishy Cat - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Fishy Cat - Compute: CPU-Onlycbadfeg2040608010033.0333.1733.2290.0390.2690.3190.63

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUbacegdf20K40K60K80K100K87359.2386884.6486789.8032032.0632008.0332002.6231951.641. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Detection FP32 - Device: CPUbcagedf60120180240300284.99284.31283.97107.24107.24106.90106.761. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

SPECFEM3D

Model: Layered Halfspace

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Layered Halfspaceacbgefd163248648026.8927.4928.6569.9670.1970.5471.611. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Detection FP16 - Device: CPUbcafegd60120180240300284.22282.67282.55107.39107.27107.04107.021. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Blender

Blend File: Barbershop - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Barbershop - Compute: CPU-Onlycabfged140280420560700254.72254.88255.30667.87669.09670.64670.87

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Streamabcefgd150300450600750672.46672.37671.26257.89257.50257.28257.27

BRL-CAD

VGR Performance Metric

OpenBenchmarking.orgVGR Performance Metric, More Is BetterBRL-CAD 7.36VGR Performance Metricabcdefg170K340K510K680K850K7721627685177625292980642961252956032955221. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

Embree

Binary: Pathtracer - Model: Crown

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer - Model: Crowncbafgde122436486055.4055.3954.9021.5921.5821.4821.44MIN: 53.71 / MAX: 58.99MIN: 54.02 / MAX: 57.64MIN: 53.27 / MAX: 57.28MIN: 21.45 / MAX: 21.84MIN: 21.43 / MAX: 21.89MIN: 21.32 / MAX: 21.8MIN: 21.3 / MAX: 21.78

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Machine Translation EN To DE FP16 - Device: CPUcbafdeg70140210280350317.33317.28317.22124.30124.12123.61123.411. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16 - Device: CPUacbdgef4008001200160020002033.172029.792028.01797.64793.90793.75791.741. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Intel Open Image Denoise

Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.0Run: RTLightmap.hdr.4096x4096 - Device: CPU-Onlycbagfed0.19580.39160.58740.78320.9790.870.860.860.340.340.340.34

Intel Open Image Denoise

Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.0Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Onlybacgfed0.4140.8281.2421.6562.071.841.841.820.720.720.720.72

OSPRay

Benchmark: gravity_spheres_volume/dim_512/scivis/real_time

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: gravity_spheres_volume/dim_512/scivis/real_timeacbgedf4812162013.8739013.8317013.766605.477255.461535.453295.45227

Intel Open Image Denoise

Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.0Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Onlycbagfed0.41180.82361.23541.64722.0591.831.831.830.720.720.720.72

OSPRay

Benchmark: gravity_spheres_volume/dim_512/ao/real_time

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: gravity_spheres_volume/dim_512/ao/real_timeabcgefd4812162014.2369014.1783014.139905.622785.620405.614545.60747

Embree

Binary: Pathtracer ISPC - Model: Crown

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer ISPC - Model: Crowncbagfde132639526556.8156.4656.0922.7722.6622.5922.57MIN: 55.27 / MAX: 59.91MIN: 54.53 / MAX: 59.89MIN: 54.05 / MAX: 59.82MIN: 22.57 / MAX: 23.16MIN: 22.45 / MAX: 22.99MIN: 22.39 / MAX: 22.98MIN: 22.39 / MAX: 22.93

SPECFEM3D

Model: Mount St. Helens

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Mount St. Helensabcdefg71421283511.0211.3211.3326.7426.8026.8727.701. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

Liquid-DSP

Threads: 96 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 96 - Buffer Length: 256 - Filter Length: 512bcagdfe150M300M450M600M750M7181400007150300007116400002865300002862500002859200002858800001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OSPRay

Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_timecbagfde4812162016.5350016.4365016.346806.600856.595636.587456.58270

SPECFEM3D

Model: Homogeneous Halfspace

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Homogeneous Halfspacebcaegfd81624324014.4614.8115.1135.0335.3835.5435.571. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16-INT8 - Device: CPUcbafdge60012001800240030002881.142880.582873.241180.851175.671175.581174.601. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Embree

Binary: Pathtracer - Model: Asian Dragon

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer - Model: Asian Dragonabcgefd132639526560.1459.9159.7924.8224.7324.7024.69MIN: 58.97 / MAX: 62MIN: 58.66 / MAX: 61.96MIN: 58.46 / MAX: 62.03MIN: 24.74 / MAX: 25MIN: 24.67 / MAX: 24.86MIN: 24.63 / MAX: 24.84MIN: 24.62 / MAX: 24.84

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer - Model: Asian Dragon Objbcadgef122436486053.8153.6953.5722.2622.1922.1622.15MIN: 52.72 / MAX: 55.86MIN: 52.63 / MAX: 55.24MIN: 52.17 / MAX: 55.38MIN: 22.18 / MAX: 22.42MIN: 22.12 / MAX: 22.33MIN: 22.08 / MAX: 22.35MIN: 22.07 / MAX: 22.32

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Vehicle Bike Detection FP16 - Device: CPUcabfdge50010001500200025002455.512454.092450.261041.871036.991031.601028.641. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer ISPC - Model: Asian Dragon Objcbaefgd132639526556.9356.6956.4923.9423.9423.8823.87MIN: 55.56 / MAX: 59.67MIN: 55.42 / MAX: 58.97MIN: 55.29 / MAX: 58.38MIN: 23.84 / MAX: 24.18MIN: 23.84 / MAX: 24.16MIN: 23.79 / MAX: 24.08MIN: 23.78 / MAX: 24.08

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer ISPC - Model: Asian Dragoncabgdfe153045607567.5067.3467.2028.4828.3628.3228.31MIN: 65.64 / MAX: 71.17MIN: 65.61 / MAX: 70.54MIN: 65.48 / MAX: 70.41MIN: 28.37 / MAX: 28.69MIN: 28.26 / MAX: 28.59MIN: 28.23 / MAX: 28.55MIN: 28.21 / MAX: 28.56

SPECFEM3D

Model: Water-layered Halfspace

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Water-layered Halfspaceacbfedg142842567026.9927.0629.4661.2862.3362.4462.811. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16 - Device: CPUacbdegf130026003900520065005882.915840.535836.272564.782562.542557.662539.971. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16-INT8 - Device: CPUbcaegdf2004006008001000854.51849.30842.91373.64372.26370.57369.261. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

SPECFEM3D

Model: Tomographic Model

OpenBenchmarking.orgSeconds, Fewer Is BetterSPECFEM3D 4.0Model: Tomographic Modelcbafdeg71421283512.0412.1012.3126.9727.3327.4627.751. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

Liquid-DSP

Threads: 96 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 96 - Buffer Length: 256 - Filter Length: 57bcadfge600M1200M1800M2400M3000M25711000002564900000255980000011208000001120500000111820000011178000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16 - Device: CPUcbadfeg160320480640800757.38750.49748.44344.67343.49342.81341.361. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: File Status - Threads: 50 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 50 - Files: 100000bfcdgae200K400K600K800K1000K862069709220657895632911561798529101389105

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 64 - Buffer Length: 256 - Filter Length: 512cabfdeg130M260M390M520M650M6226300006225600006109500002830300002829200002818300002817300001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 64 - Buffer Length: 256 - Filter Length: 32bacdefg500M1000M1500M2000M2500M22121000002207700000220680000010595000001057500000105710000010562000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Timed Linux Kernel Compilation

Build: defconfig

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: defconfigbacefgd122436486027.2427.3527.4155.0955.1555.1755.17

Apache Hadoop

Operation: File Status - Threads: 100 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 100 - Files: 1000000gfcadeb400K800K1200K1600K2000K2049180196463718939391886792600601235627161970

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection FP16 - Device: CPUbcagfed160320480640800393.23393.37393.60759.92760.57761.16761.59MIN: 360.87 / MAX: 433.13MIN: 362.57 / MAX: 433.51MIN: 363.29 / MAX: 431.61MIN: 737.63 / MAX: 771.07MIN: 741.4 / MAX: 770.88MIN: 741.99 / MAX: 776.56MIN: 738.34 / MAX: 772.361. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Remhos

Test: Sample Remap Example

OpenBenchmarking.orgSeconds, Fewer Is BetterRemhos 1.0Test: Sample Remap Examplecabfgde71421283516.2416.3516.7930.7330.7530.7630.851. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection FP16-INT8 - Device: CPUbcagdef90180270360450213.62213.79213.94398.13398.52398.91399.24MIN: 197.2 / MAX: 235.23MIN: 197.29 / MAX: 236.32MIN: 201.64 / MAX: 242.71MIN: 379.09 / MAX: 404.71MIN: 382.1 / MAX: 404.98MIN: 386.2 / MAX: 407.29MIN: 387.9 / MAX: 408.931. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 64 - Buffer Length: 256 - Filter Length: 57cbagefd400M800M1200M1600M2000M20103000002001900000199440000010993000001095400000109460000010933000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Apache Hadoop

Operation: Open - Threads: 100 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 100 - Files: 100000dfgabce110K220K330K440K550K529101523560460829420168404858403226294985

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Detection FP32 - Device: CPUbcaegdf2040608010042.0942.1942.2474.5474.5874.8174.87MIN: 37.13 / MAX: 58.71MIN: 36.21 / MAX: 65.64MIN: 36.59 / MAX: 61.56MIN: 65.97 / MAX: 82.9MIN: 67.63 / MAX: 78.73MIN: 66.88 / MAX: 80.7MIN: 66.72 / MAX: 80.961. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Detection FP16 - Device: CPUbcafedg2040608010042.2042.4342.4474.4374.5074.7174.71MIN: 36.84 / MAX: 61.97MIN: 36.31 / MAX: 62.36MIN: 36.14 / MAX: 61.98MIN: 65.68 / MAX: 83.49MIN: 66.5 / MAX: 80.32MIN: 66.12 / MAX: 81.09MIN: 66.29 / MAX: 79.681. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Machine Translation EN To DE FP16 - Device: CPUbcafdeg142842567037.7937.7937.8064.3164.4164.6864.77MIN: 32.97 / MAX: 53.7MIN: 33.29 / MAX: 54.88MIN: 33.35 / MAX: 56.45MIN: 50.85 / MAX: 70.77MIN: 37.44 / MAX: 73.04MIN: 38.02 / MAX: 72.52MIN: 55.8 / MAX: 69.461. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16 - Device: CPUacbdegf36912155.895.905.9110.0110.0610.0610.09MIN: 4.67 / MAX: 18.4MIN: 4.83 / MAX: 13.4MIN: 4.84 / MAX: 12.9MIN: 5.7 / MAX: 19.52MIN: 5.29 / MAX: 19.07MIN: 5.2 / MAX: 19.38MIN: 5.4 / MAX: 19.171. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: Create - Threads: 100 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 100 - Files: 100000fgedabc13K26K39K52K65K59382589285882457971407333742535075

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16-INT8 - Device: CPUbcafdge2468104.164.164.176.766.796.796.80MIN: 3.42 / MAX: 11.2MIN: 3.43 / MAX: 10.26MIN: 3.39 / MAX: 10.07MIN: 4.04 / MAX: 15.47MIN: 3.8 / MAX: 15.48MIN: 3.79 / MAX: 15.41MIN: 4.04 / MAX: 15.371. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: Create - Threads: 100 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 100 - Files: 1000000dgfeabc15K30K45K60K75K71296709227053770057461454443744001

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Vehicle Bike Detection FP16 - Device: CPUacbfdge2468104.884.884.897.677.707.747.77MIN: 3.95 / MAX: 16.05MIN: 3.9 / MAX: 14.94MIN: 3.93 / MAX: 13.44MIN: 5.32 / MAX: 16.6MIN: 5.51 / MAX: 16.06MIN: 6.06 / MAX: 12.66MIN: 5.42 / MAX: 16.351. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: File Status - Threads: 100 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 100 - Files: 100000cedagfb160K320K480K640K800K729927613497591716515464487805478469458716

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 32 - Buffer Length: 256 - Filter Length: 512bacgdef90M180M270M360M450M4296200004258100004244000002740700002737600002734800002733900001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16 - Device: CPUabcdegf0.70651.4132.11952.8263.53252.032.052.053.113.113.123.14MIN: 1.66 / MAX: 7.51MIN: 1.6 / MAX: 7MIN: 1.62 / MAX: 6.96MIN: 1.94 / MAX: 11.57MIN: 1.93 / MAX: 9.72MIN: 1.88 / MAX: 11.92MIN: 1.93 / MAX: 11.651. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16-INT8 - Device: CPUbcaegdf51015202514.0314.1214.2321.4021.4721.5721.65MIN: 11.59 / MAX: 26.04MIN: 11.51 / MAX: 26.04MIN: 11.51 / MAX: 25.86MIN: 19.07 / MAX: 25.3MIN: 17.62 / MAX: 28.13MIN: 19.5 / MAX: 24.76MIN: 19.48 / MAX: 24.271. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

TiDB Community Server

Test: oltp_read_write - Threads: 128

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 128bafegd20K40K60K80K100K890998575760310601455994459727

TiDB Community Server

Test: oltp_read_write - Threads: 64

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 64bacdgfe20K40K60K80K100K80183790907846955334553015495653893

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16 - Device: CPUcbadfeg61218243015.8315.9816.0223.2023.2823.3223.42MIN: 12.38 / MAX: 32.97MIN: 12.74 / MAX: 33.34MIN: 12.5 / MAX: 33.94MIN: 15.1 / MAX: 31.6MIN: 15.73 / MAX: 30.77MIN: 19.49 / MAX: 30.99MIN: 20.46 / MAX: 32.431. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: Create - Threads: 50 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 50 - Files: 100000gedfcab13K26K39K52K65K60680586175861758343439374364941288

Apache Hadoop

Operation: Open - Threads: 50 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 50 - Files: 100000fdegbac120K240K360K480K600K578035578035552486546448469484460829401606

Apache Hadoop

Operation: Delete - Threads: 100 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Delete - Threads: 100 - Files: 100000dgfebac20K40K60K80K100K1057081025649940498039908278756673475

OSPRay

Benchmark: particle_volume/pathtracer/real_time

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: particle_volume/pathtracer/real_timeacbdfge50100150200250215.10214.14214.07151.91151.78151.68151.51

Apache Hadoop

Operation: Delete - Threads: 50 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Delete - Threads: 50 - Files: 100000gdefacb20K40K60K80K100K10395010101010060496993910759058073801

Apache Hadoop

Operation: Create - Threads: 50 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 50 - Files: 1000000gdefacb16K32K48K64K80K72706721347089769920536655226052119

Apache Cassandra

Test: Writes

OpenBenchmarking.orgOp/s, More Is BetterApache Cassandra 4.1.3Test: Writescbadgfe60K120K180K240K300K270480256661248095197866197092196287195798

TiDB Community Server

Test: oltp_point_select - Threads: 1

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 1efdcba13002600390052006500597659545898447144054331

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 8 - Input: Bosphorus 4Kbacgefd2040608010091.3290.8190.4267.8167.7267.3966.991. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

TiDB Community Server

Test: oltp_read_write - Threads: 32

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 32bcafgde13K26K39K52K65K61520596305897447141469934697746737

Apache Hadoop

Operation: Delete - Threads: 100 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Delete - Threads: 100 - Files: 1000000gedfcab20K40K60K80K100K113895113225112613110803970319011486715

TiDB Community Server

Test: oltp_update_non_index - Threads: 1

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 1egfdcab4008001200160020001708170516971693138113281312

TiDB Community Server

Test: oltp_read_write - Threads: 1

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 1fegabc7001400210028003500321832093195254025102485

Apache Hadoop

Operation: Rename - Threads: 100 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Rename - Threads: 100 - Files: 1000000fgedabc20K40K60K80K100K85815857638436081208730787212966827

TiDB Community Server

Test: oltp_update_non_index - Threads: 128

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 128caegf11K22K33K44K55K5286551105421384169541424

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 4 - Input: Bosphorus 4Kabcgfed1.17072.34143.51214.68285.85355.2035.1495.0494.1434.1384.1144.1071. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Hadoop

Operation: Delete - Threads: 50 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Delete - Threads: 50 - Files: 1000000efdgabc20K40K60K80K100K113327111198111012110828989329731490147

TiDB Community Server

Test: oltp_update_index - Threads: 1

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 1efgdac30060090012001500149014831481147912121189

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 12 - Input: Bosphorus 1080pgdefcba110220330440550528.53526.22525.17521.52431.90427.69422.991. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Hadoop

Operation: Rename - Threads: 100 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Rename - Threads: 100 - Files: 100000edgfabc20K40K60K80K100K83822821028038679491755296934867159

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 2 - Buffer Length: 256 - Filter Length: 512cabefdg6M12M18M24M30M282270002790100027736000252070002519900024627000227270001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_point_select - Threads: 128

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 128bacfed30K60K90K120K150K159728159242149962130389129904129492

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 32 - Buffer Length: 256 - Filter Length: 57cbadgef300M600M900M1200M1500M12548000001214200000119210000010350000001033400000103200000010246000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_update_non_index - Threads: 64

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 64abcfdge9K18K27K36K45K41281397593910634470342243410733881

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 8 - Input: Bosphorus 1080pcabedfg306090120150143.55141.22138.34119.31118.95118.49118.481. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 13 - Input: Bosphorus 1080pdegfbca130260390520650604.99597.01586.75585.37542.61516.91510.361. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Hadoop

Operation: Rename - Threads: 50 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Rename - Threads: 50 - Files: 1000000gedfcab20K40K60K80K100K84810840418392182501746387323971679

nekRS

Input: TurboPipe Periodic

OpenBenchmarking.orgflops/rank, More Is BetternekRS 23.0Input: TurboPipe Periodicgfdeabc2000M4000M6000M8000M10000M79649100007955790000793457000079310100006767710000675736000067541700001. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 4 - Input: Bosphorus 1080pcbagedf369121512.6212.5912.4811.0210.9810.9110.741. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Hadoop

Operation: Rename - Threads: 50 - Files: 100000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Rename - Threads: 50 - Files: 100000dgefcba20K40K60K80K100K82372822378223781633771017304670522

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 1 - Buffer Length: 256 - Filter Length: 512cbadfeg3M6M9M12M15M142250001402100013909000126830001268100012366000122560001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_update_index - Threads: 64

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 64bcedf5K10K15K20K25K2437123324212712110821067

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Streamefdgabc81624324030.9931.0331.0531.0635.6335.6435.68

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 2 - Buffer Length: 256 - Filter Length: 32abcfegd17M34M51M68M85M771810007701900076924000688610006884600068678000670540001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPUfdge0.25750.5150.77251.031.28751.001361.037491.127231.14432MIN: 0.92MIN: 0.92MIN: 0.93MIN: 1.071. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 32 - Buffer Length: 256 - Filter Length: 32bcagdef300M600M900M1200M1500M11903000001184800000118350000010471000001047100000104660000010419000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 8 - Buffer Length: 256 - Filter Length: 512acbgdfe20M40M60M80M100M1098700001091400001080800001001700009959400099441000970050001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 2 - Buffer Length: 256 - Filter Length: 57cabfdeg30M60M90M120M150M1185500001174900001140100001057400001056500001054800001048000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_point_select - Threads: 64

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 64bafegd30K60K90K120K150K130802127567119092118657118549115675

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 1 - Buffer Length: 256 - Filter Length: 57abcfged13M26M39M52M65M594010005929600057519000528790005285400052827000526650001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

nekRS

Input: Kershaw

OpenBenchmarking.orgflops/rank, More Is BetternekRS 23.0Input: Kershawbacgdef2000M4000M6000M8000M10000M11240300000111069000001082670000010500600000103189000001026400000099764500001. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 4 - Buffer Length: 256 - Filter Length: 512bcaedfg12M24M36M48M60M555880005516500052911000503800005025800049977000495560001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 1 - Buffer Length: 256 - Filter Length: 32abcefgd8M16M24M32M40M394990003948600039453000353150003527100035236000352280001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_update_index - Threads: 128

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 128bacfeg6K12K18K24K30K274642708726546248302461124574

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 16 - Buffer Length: 256 - Filter Length: 512bacegfd50M100M150M200M250M2161500002160800002149100001960400001946700001945000001938500001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 8 - Buffer Length: 256 - Filter Length: 32acbdegf70M140M210M280M350M3075400003067600003051100002780300002777800002774100002763900001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 16 - Buffer Length: 256 - Filter Length: 32cbadefg130M260M390M520M650M6036500006024700005942300005453600005451400005450200005430500001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 4 - Buffer Length: 256 - Filter Length: 32abcedfg30M60M90M120M150M1538500001536900001536700001386200001386000001385800001384600001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUdefgabc0.12150.2430.36450.4860.60750.490.490.490.490.540.540.54MIN: 0.3 / MAX: 9.28MIN: 0.3 / MAX: 9.07MIN: 0.3 / MAX: 8.2MIN: 0.3 / MAX: 8.84MIN: 0.45 / MAX: 7.64MIN: 0.45 / MAX: 7.81MIN: 0.45 / MAX: 5.031. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

TiDB Community Server

Test: oltp_update_non_index - Threads: 32

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 32bafed6K12K18K24K30K2891428735266952628526273

TiDB Community Server

Test: oltp_point_select - Threads: 32

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 32badfeg20K40K60K80K100K10618010462798149973689690796840

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16-INT8 - Device: CPUegfabcd91827364536.9836.9837.0138.5038.6638.7540.40MIN: 32.02 / MAX: 44.78MIN: 32.61 / MAX: 41.91MIN: 32.25 / MAX: 43.6MIN: 36.77 / MAX: 44.23MIN: 37.22 / MAX: 43.52MIN: 37.46 / MAX: 43.52MIN: 26.93 / MAX: 74.831. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamegfdcab4812162015.6215.6915.7215.7216.8916.9117.07

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16-INT8 - Device: CPUfdegbac1.09352.1873.28054.3745.46754.504.514.514.524.854.864.86MIN: 2.98 / MAX: 13.86MIN: 2.98 / MAX: 13.05MIN: 2.96 / MAX: 16.06MIN: 2.77 / MAX: 13.57MIN: 4.25 / MAX: 12.86MIN: 4.23 / MAX: 12.81MIN: 4.34 / MAX: 12.271. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Streamcbagefd1.12412.24823.37234.49645.62054.63484.64764.65084.97874.98594.98774.9960

TiDB Community Server

Test: oltp_point_select - Threads: 16

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_point_select - Threads: 16efgbc15K30K45K60K75K7025070105699236751565406

TiDB Community Server

Test: oltp_update_index - Threads: 32

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 32abdcge4K8K12K16K20K183611781717612175651713517117

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Streamfgedbca80160240320400324.96325.51325.74325.88347.22347.37347.66

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPUefdg0.13780.27560.41340.55120.6890.5757940.6008340.6039500.612320MIN: 0.52MIN: 0.53MIN: 0.53MIN: 0.531. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

Test: oltp_read_write - Threads: 16

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_read_write - Threads: 16acbedfg8K16K24K32K40K38331373683695036784364803612536088

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Streamedfgacb306090120150112.06112.25112.41112.48118.75118.78118.95

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 3D - Data Type: f32 - Engine: CPUfdge0.28810.57620.86431.15241.44051.206531.257581.279181.28043MIN: 1.18MIN: 1.21MIN: 1.24MIN: 1.241. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16 - Device: CPUdegfbca4812162015.3615.3615.3715.3816.0216.0216.26MIN: 8.08 / MAX: 24.34MIN: 8.02 / MAX: 23.81MIN: 7.99 / MAX: 23.98MIN: 7.99 / MAX: 24MIN: 14.41 / MAX: 30.55MIN: 14.63 / MAX: 33.79MIN: 14.71 / MAX: 28.141. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 8 - Buffer Length: 256 - Filter Length: 57acbdegf80M160M240M320M400M3694300003669900003669300003633100003579900003578100003504500001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Streamfdegcab306090120150143.69143.76144.10144.11145.26150.59150.61

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Streamabdfegc110220330440550485.72487.36493.60494.22494.26495.60507.48

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16-INT8 - Device: CPUdegfcba2468107.937.967.967.978.248.278.28MIN: 4.2 / MAX: 16.92MIN: 4.19 / MAX: 16.59MIN: 4.19 / MAX: 14.2MIN: 4.37 / MAX: 16.86MIN: 7.62 / MAX: 23.32MIN: 7.37 / MAX: 25.18MIN: 7.44 / MAX: 23.351. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16 - Device: CPUgfdeacb71421283529.7229.9530.0230.1030.7230.8931.00MIN: 19.46 / MAX: 38.99MIN: 19.01 / MAX: 38.08MIN: 18.78 / MAX: 38.72MIN: 22.61 / MAX: 39.15MIN: 29.51 / MAX: 35.07MIN: 29.48 / MAX: 36.29MIN: 29.59 / MAX: 36.331. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 4 - Buffer Length: 256 - Filter Length: 57bacegfd40M80M120M160M200M1965900001962200001945100001912300001907500001898800001889300001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 12 - Input: Bosphorus 4Kbadcefg4080120160200166.38163.46163.19163.06162.61161.85160.321. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

TiDB Community Server

Test: oltp_update_non_index - Threads: 16

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_non_index - Threads: 16gdeab4K8K12K16K20K1873518563185571809518068

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 16 - Buffer Length: 256 - Filter Length: 57afebdgc150M300M450M600M750M6997400006933400006929200006927600006891500006820700006749300001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 13 - Input: Bosphorus 4Kbaedcgf4080120160200166.69163.01162.05161.85161.50161.32160.801. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPUdgef0.23880.47760.71640.95521.1941.028751.045671.054251.06144MIN: 0.96MIN: 0.98MIN: 0.97MIN: 0.981. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUabcdefg0.07880.15760.23640.31520.3940.340.340.340.350.350.350.35MIN: 0.29 / MAX: 7.33MIN: 0.29 / MAX: 10.87MIN: 0.29 / MAX: 7.09MIN: 0.23 / MAX: 9.09MIN: 0.23 / MAX: 8.84MIN: 0.23 / MAX: 9.15MIN: 0.23 / MAX: 8.631. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUdfge0.57721.15441.73162.30882.8862.494082.497142.514412.56522MIN: 2.3MIN: 2.26MIN: 2.3MIN: 2.321. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Kripke

OpenBenchmarking.orgThroughput FoM, More Is BetterKripke 1.2.6dgfe50M100M150M200M250M2409945002371757002365910002362439001. (CXX) g++ options: -O3 -fopenmp -ldl

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400

OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400gfde2040608010097.5397.9998.9899.421. (CXX) g++ options: -O3 -fopenmp

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streamgfedacb2040608010073.1973.2273.2673.3174.3274.5074.56

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUgdfe2004006008001000837.60849.16849.34851.66MIN: 796.61MIN: 806.44MIN: 805.8MIN: 809.451. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUdgef2004006008001000838.52848.03849.71851.49MIN: 796.3MIN: 807.34MIN: 805.98MIN: 807.971. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUegdf0.35390.70781.06171.41561.76951.549111.551181.558241.57282MIN: 1.51MIN: 1.52MIN: 1.51MIN: 1.531. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUgdfe0.1480.2960.4440.5920.740.6477000.6522590.6531820.657610MIN: 0.57MIN: 0.57MIN: 0.57MIN: 0.571. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Asian Dragon Objgefd61218243023.7123.5323.5023.35MIN: 23.61 / MAX: 23.93MIN: 23.43 / MAX: 23.73MIN: 23.4 / MAX: 23.74MIN: 23.26 / MAX: 23.57

TiDB Community Server

Test: oltp_update_index - Threads: 16

OpenBenchmarking.orgQueries Per Second, More Is BetterTiDB Community Server 7.3Test: oltp_update_index - Threads: 16fcgdea3K6K9K12K15K126921268112627126221256712558

Embree

Binary: Pathtracer - Model: Crown

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Crownedgf51015202521.9921.8921.8321.77MIN: 21.84 / MAX: 22.32MIN: 21.74 / MAX: 22.23MIN: 21.69 / MAX: 22.17MIN: 21.63 / MAX: 22.18

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUdgfe0.14260.28520.42780.57040.7130.6282360.6291080.6303250.633975MIN: 0.6MIN: 0.6MIN: 0.6MIN: 0.61. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUgedf0.19140.38280.57420.76560.9570.8434920.8444340.8478050.850691MIN: 0.83MIN: 0.83MIN: 0.83MIN: 0.831. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Streambcagefd20406080100109.23109.58109.80109.90109.97110.00110.11

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUefdg2004006008001000841.08845.31847.38847.42MIN: 798.46MIN: 803.78MIN: 806.33MIN: 806.721. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUdfge0.86491.72982.59473.45964.32453.815763.818233.823813.84421MIN: 3.26MIN: 3.25MIN: 3.29MIN: 3.271. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Streamecgfdba112233445549.0149.0249.0349.0749.0949.1149.37

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Streamdefagbc81624324033.2233.2633.2833.3433.3733.3833.46

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPUgefd0.480.961.441.922.42.118132.125702.130622.13332MIN: 1.99MIN: 2.01MIN: 1.97MIN: 21. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streamdbgaefc112233445548.8748.9748.9849.0149.0649.0749.21

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUgfed4008001200160020001631.991636.441639.361642.51MIN: 1581.62MIN: 1585.81MIN: 1581.93MIN: 1593.161. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Asian Dragongefd71421283527.9127.8327.8327.74MIN: 27.81 / MAX: 28.17MIN: 27.72 / MAX: 28.1MIN: 27.73 / MAX: 28.13MIN: 27.64 / MAX: 27.98

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240

OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240gedf0.37280.74561.11841.49121.8641.6481.6541.6571.6571. (CXX) g++ options: -O3 -fopenmp

Embree

Binary: Pathtracer - Model: Asian Dragon

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Asian Dragongfde61218243024.9624.8924.8524.83MIN: 24.9 / MAX: 25.13MIN: 24.81 / MAX: 25.06MIN: 24.78 / MAX: 25MIN: 24.76 / MAX: 24.96

OpenVKL

Benchmark: vklBenchmarkCPU Scalar

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 2.0.0Benchmark: vklBenchmarkCPU Scalargfde4080120160200191191191190MIN: 13 / MAX: 3483MIN: 13 / MAX: 3484MIN: 13 / MAX: 3471MIN: 13 / MAX: 3484

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Streamacdbefg130260390520650605.76605.88606.58606.67606.76606.79608.72

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamabcdgfe130260390520650605.04605.73605.92606.10607.16607.82607.91

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPUgdef0.30190.60380.90571.20761.50951.335641.337891.338611.34183MIN: 1.31MIN: 1.31MIN: 1.31MIN: 1.311. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Crown

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Crownfgde51015202522.4422.4222.3922.34MIN: 22.25 / MAX: 22.78MIN: 22.22 / MAX: 22.85MIN: 22.2 / MAX: 22.85MIN: 22.15 / MAX: 22.75

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Asian Dragon Objdefg51015202522.3522.2922.2722.26MIN: 22.28 / MAX: 22.5MIN: 22.22 / MAX: 22.46MIN: 22.2 / MAX: 22.44MIN: 22.18 / MAX: 22.43

OpenVKL

Benchmark: vklBenchmarkCPU ISPC

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 2.0.0Benchmark: vklBenchmarkCPU ISPCgfed110220330440550489488487487MIN: 36 / MAX: 6969MIN: 36 / MAX: 6952MIN: 36 / MAX: 6956MIN: 36 / MAX: 6949

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200

OpenBenchmarking.orgSeconds, Fewer Is BettereasyWave r34Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200gfed91827364537.9538.0238.0738.111. (CXX) g++ options: -O3 -fopenmp

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUfged4008001200160020001636.761637.371641.001641.92MIN: 1585.98MIN: 1584.58MIN: 1595.55MIN: 1584.811. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPUgfde0.68931.37862.06792.75723.44653.054583.056743.059913.06370MIN: 2.97MIN: 2.97MIN: 2.96MIN: 2.971. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPUgdfe0.43150.8631.29451.7262.15751.912741.913741.914221.91781MIN: 1.88MIN: 1.88MIN: 1.88MIN: 1.881. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamfbgaced20406080100110.89110.92110.98111.01111.03111.09111.11

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUdfge0.76151.5232.28453.0463.80753.377823.379563.381563.38436MIN: 3.33MIN: 3.33MIN: 3.33MIN: 3.331. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUgfed4008001200160020001641.401642.351643.971643.99MIN: 1589.91MIN: 1586.17MIN: 1590.89MIN: 1588.031. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Intel Open Image Denoise

Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RTLightmap.hdr.4096x4096 - Device: CPU-Onlygfed0.07650.1530.22950.3060.38250.340.340.340.34

Intel Open Image Denoise

Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Onlygfed0.1620.3240.4860.6480.810.720.720.720.72

Intel Open Image Denoise

Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.1Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Onlygfed0.1620.3240.4860.6480.810.720.720.720.72


Phoronix Test Suite v10.8.5