extra tests2

Tests for a future article. Benchmarks comparing a dual AMD EPYC 9254 24-Core / Supermicro H13DSH system against an AMD EPYC 9124 16-Core / Supermicro H13SSW (1.1 BIOS) system, both with astdrmfb on AlmaLinux 9.2, run via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2310228-NE-EXTRATEST37&rdt&grs.

System details (runs a through g):

Runs a, b, c - Processor: 2 x AMD EPYC 9254 24-Core @ 2.90GHz (48 Cores / 96 Threads); Motherboard: Supermicro H13DSH (1.5 BIOS); Memory: 24 x 32 GB DDR5-4800MT/s Samsung M321R4GA3BB6-CQKET
Runs d, e, f, g - Processor: AMD EPYC 9124 16-Core @ 3.00GHz (16 Cores / 32 Threads); Motherboard: Supermicro H13SSW (1.1 BIOS); Memory: 12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123N
All runs - Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07; Graphics: astdrmfb; OS: AlmaLinux 9.2; Kernel: 5.14.0-284.25.1.el9_2.x86_64 (x86_64); Compiler: GCC 11.3.1 20221121; File-System: ext4; Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Processor Details: a, b, c: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e; d, e, f, g: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111
Java Details: OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)
Python Details: Python 3.9.16
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Condensed results table (runs a through g): the export's side-by-side overview covers Apache Hadoop, Neural Magic DeepSparse, OpenVINO, OSPRay, Liquid-DSP, Blender, SPECFEM3D, BRL-CAD, Embree, Intel Open Image Denoise, oneDNN, TiDB Community Server, Apache Cassandra, SVT-AV1, nekRS, Timed Linux Kernel Compilation, Remhos, Kripke, easyWave, OpenVKL and OpenRADIOSS; the per-test results are broken out individually below.
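
The condensed table and the per-test results below boil down to a two-way comparison: runs a-c on the dual EPYC 9254 system versus runs d-g on the EPYC 9124 system. A minimal Python sketch of one way to summarize that gap follows; the test selection, the a-c / d-g grouping and the geometric-mean summary are my own (values hand-copied from the results in this article), not something the OpenBenchmarking export provides.

# Summarize the dual EPYC 9254 (runs a-c) vs. EPYC 9124 (runs d-g) gap
# for a handful of the results shown below. Hand-copied values; the
# geometric mean is an added summary statistic, not a Phoronix figure.
from math import prod
from statistics import mean

# test name -> (higher_is_better, [runs a-c], [runs d-g])
RESULTS = {
    "DeepSparse ResNet-50 Baseline (items/sec)": (True,  [485.67, 488.13, 489.11], [162.85, 163.14, 162.93, 162.99]),
    "Blender 3.6 Classroom (seconds)":           (False, [66.42, 66.64, 66.72],    [182.99, 182.56, 181.70, 183.29]),
    "OpenVINO Weld Porosity FP16-INT8 (FPS)":    (True,  [5776.94, 5780.44, 5802.65], [2013.77, 2007.53, 2004.76, 2006.09]),
    "Liquid-DSP 96/256/32 (samples/s)":          (True,  [3.0058e9, 2.9954e9, 2.9998e9], [1.0652e9, 1.0651e9, 1.0653e9, 1.0657e9]),
}

def speedup(higher_is_better, epyc9254_runs, epyc9124_runs):
    """Ratio > 1.0 means the 2 x EPYC 9254 configuration is faster."""
    a, b = mean(epyc9254_runs), mean(epyc9124_runs)
    return a / b if higher_is_better else b / a

ratios = {name: speedup(*vals) for name, vals in RESULTS.items()}
for name, r in ratios.items():
    print(f"{name}: {r:.2f}x")
print(f"Geometric mean over these tests: {prod(ratios.values()) ** (1 / len(ratios)):.2f}x")

For these four tests the ratio comes out around 2.8-3x in favour of the dual EPYC 9254 configuration, in line with its 48-core vs. 16-core advantage.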

Apache Hadoop

Operation: File Status - Threads: 50 - Files: 1000000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - values for a-g (run together without separators in the export): 21739131941748284252181818232092417953322036660

Apache Hadoop

Operation: Open - Threads: 100 - Files: 1000000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - values for a-g (run together without separators in the export): 2153321738221858741248439120481913037811107420

Apache Hadoop

Operation: Open - Threads: 50 - Files: 1000000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - values for a-g (run together without separators in the export): 112612610204086839952783192510041221001654022

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 5137.01; b: 5138.83; c: 5153.66; d: 1599.21; e: 1599.15; f: 1600.53; g: 1602.52

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 1244.69; b: 1239.67; c: 1237.29; d: 395.66; e: 432.32; f: 431.94; g: 432.20. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 49.33; b: 49.17; c: 47.15; d: 16.16; e: 16.14; f: 16.13; g: 16.07

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 39.50; b: 39.47; c: 39.45; d: 13.07; e: 12.94; f: 13.09; g: 13.07

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 218.15; b: 219.53; c: 218.52; d: 72.46; e: 72.66; f: 72.57; g: 72.69

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 39.44; b: 39.45; c: 39.42; d: 13.13; e: 13.12; f: 13.09; g: 13.06

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 489.12; b: 489.45; c: 487.05; d: 163.56; e: 162.93; f: 162.90; g: 163.23

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 485.67; b: 488.13; c: 489.11; d: 162.85; e: 163.14; f: 162.93; g: 162.99

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 215.64; b: 215.93; c: 215.65; d: 71.92; e: 71.91; f: 71.94; g: 71.90

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 718.92; b: 717.97; c: 716.14; d: 240.55; e: 240.23; f: 240.16; g: 239.52

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 158.92; b: 159.06; c: 164.61; d: 55.61; e: 55.46; f: 55.54; g: 55.43

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 322.25; b: 321.18; c: 321.51; d: 108.91; e: 109.09; f: 109.09; g: 109.22

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 1560.03; b: 1546.02; c: 1551.63; d: 532.59; e: 530.99; f: 533.74; g: 538.01. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 30.41; b: 30.44; c: 30.43; d: 10.47; e: 10.47; f: 10.48; g: 10.48. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 5776.94; b: 5780.44; c: 5802.65; d: 2013.77; e: 2007.53; f: 2004.76; g: 2006.09. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OSPRay

Benchmark: particle_volume/ao/real_time

OpenBenchmarking.org - Items Per Second, More Is Better - OSPRay 2.12 - a: 15.98600; b: 15.97850; c: 15.98720; d: 5.57469; e: 5.54107; f: 5.57320; g: 5.57553

OSPRay

Benchmark: particle_volume/scivis/real_time

OpenBenchmarking.org - Items Per Second, More Is Better - OSPRay 2.12 - a: 15.95280; b: 15.98880; c: 15.97780; d: 5.57001; e: 5.56353; f: 5.55581; g: 5.56539

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 2945.26; b: 2986.46; c: 2987.33; d: 1039.61; e: 1039.82; f: 1038.47; g: 1039.37. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 201.39; b: 201.25; c: 201.54; d: 71.14; e: 71.27; f: 71.04; g: 70.93

Liquid-DSP

Threads: 96 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.org - samples/s, More Is Better - Liquid-DSP 1.6 - a: 3005800000; b: 2995400000; c: 2999800000; d: 1065200000; e: 1065100000; f: 1065300000; g: 1065700000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 68.60; b: 68.66; c: 68.63; d: 24.48; e: 24.47; f: 24.52; g: 24.46

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 56.01; b: 56.06; c: 56.02; d: 20.03; e: 20.00; f: 20.01; g: 20.05. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 1417.07; b: 1403.07; c: 1418.90; d: 508.09; e: 511.41; f: 508.21; g: 509.14

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

OpenBenchmarking.org - Seconds, Fewer Is Better - Blender 3.6 - a: 80.54; b: 80.76; c: 80.41; d: 224.15; e: 224.10; f: 223.95; g: 224.12

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 9837.58; b: 9849.07; c: 9845.27; d: 3540.88; e: 3544.18; f: 3548.78; g: 3533.64. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Blender

Blend File: Classroom - Compute: CPU-Only

OpenBenchmarking.org - Seconds, Fewer Is Better - Blender 3.6 - a: 66.42; b: 66.64; c: 66.72; d: 182.99; e: 182.56; f: 181.70; g: 183.29

Blender

Blend File: BMW27 - Compute: CPU-Only

OpenBenchmarking.org - Seconds, Fewer Is Better - Blender 3.6 - a: 26.20; b: 26.24; c: 26.12; d: 72.00; e: 71.44; f: 71.96; g: 72.01

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 120606.38; b: 120728.22; c: 123484.28; d: 44958.07; e: 44933.27; f: 44968.43; g: 45097.99. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Blender

Blend File: Fishy Cat - Compute: CPU-Only

OpenBenchmarking.org - Seconds, Fewer Is Better - Blender 3.6 - a: 33.22; b: 33.17; c: 33.03; d: 90.03; e: 90.31; f: 90.26; g: 90.63

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 86884.64; b: 87359.23; c: 86789.80; d: 32002.62; e: 32032.06; f: 31951.64; g: 32008.03. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 283.97; b: 284.99; c: 284.31; d: 106.90; e: 107.24; f: 106.76; g: 107.24. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

SPECFEM3D

Model: Layered Halfspace

OpenBenchmarking.org - Seconds, Fewer Is Better - SPECFEM3D 4.0 - a: 26.89; b: 28.65; c: 27.49; d: 71.61; e: 70.19; f: 70.54; g: 69.96. 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 282.55; b: 284.22; c: 282.67; d: 107.02; e: 107.27; f: 107.39; g: 107.04. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Blender

Blend File: Barbershop - Compute: CPU-Only

OpenBenchmarking.org - Seconds, Fewer Is Better - Blender 3.6 - a: 254.88; b: 255.30; c: 254.72; d: 670.87; e: 670.64; f: 667.87; g: 669.09

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.5 - a: 672.46; b: 672.37; c: 671.26; d: 257.27; e: 257.89; f: 257.50; g: 257.28

BRL-CAD

VGR Performance Metric

OpenBenchmarking.org - VGR Performance Metric, More Is Better - BRL-CAD 7.36 - a: 772162; b: 768517; c: 762529; d: 298064; e: 296125; f: 295603; g: 295522. 1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

Embree

Binary: Pathtracer - Model: Crown

OpenBenchmarking.org - Frames Per Second, More Is Better - Embree 4.1 - a: 54.90 (min 53.27 / max 57.28); b: 55.39 (54.02 / 57.64); c: 55.40 (53.71 / 58.99); d: 21.48 (21.32 / 21.8); e: 21.44 (21.3 / 21.78); f: 21.59 (21.45 / 21.84); g: 21.58 (21.43 / 21.89)

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 317.22; b: 317.28; c: 317.33; d: 124.12; e: 123.61; f: 124.30; g: 123.41. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 2033.17; b: 2028.01; c: 2029.79; d: 797.64; e: 793.75; f: 791.74; g: 793.90. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Intel Open Image Denoise

Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only

OpenBenchmarking.org - Images / Sec, More Is Better - Intel Open Image Denoise 2.0 - a: 0.86; b: 0.86; c: 0.87; d: 0.34; e: 0.34; f: 0.34; g: 0.34

Intel Open Image Denoise

Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only

OpenBenchmarking.org - Images / Sec, More Is Better - Intel Open Image Denoise 2.0 - a: 1.84; b: 1.84; c: 1.82; d: 0.72; e: 0.72; f: 0.72; g: 0.72

OSPRay

Benchmark: gravity_spheres_volume/dim_512/scivis/real_time

OpenBenchmarking.org - Items Per Second, More Is Better - OSPRay 2.12 - a: 13.87390; b: 13.76660; c: 13.83170; d: 5.45329; e: 5.46153; f: 5.45227; g: 5.47725

Intel Open Image Denoise

Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only

OpenBenchmarking.org - Images / Sec, More Is Better - Intel Open Image Denoise 2.0 - a: 1.83; b: 1.83; c: 1.83; d: 0.72; e: 0.72; f: 0.72; g: 0.72

OSPRay

Benchmark: gravity_spheres_volume/dim_512/ao/real_time

OpenBenchmarking.org - Items Per Second, More Is Better - OSPRay 2.12 - a: 14.23690; b: 14.17830; c: 14.13990; d: 5.60747; e: 5.62040; f: 5.61454; g: 5.62278

Embree

Binary: Pathtracer ISPC - Model: Crown

OpenBenchmarking.org - Frames Per Second, More Is Better - Embree 4.1 - a: 56.09 (min 54.05 / max 59.82); b: 56.46 (54.53 / 59.89); c: 56.81 (55.27 / 59.91); d: 22.59 (22.39 / 22.98); e: 22.57 (22.39 / 22.93); f: 22.66 (22.45 / 22.99); g: 22.77 (22.57 / 23.16)

SPECFEM3D

Model: Mount St. Helens

OpenBenchmarking.org - Seconds, Fewer Is Better - SPECFEM3D 4.0 - a: 11.02; b: 11.32; c: 11.33; d: 26.74; e: 26.80; f: 26.87; g: 27.70. 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

Liquid-DSP

Threads: 96 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.org - samples/s, More Is Better - Liquid-DSP 1.6 - a: 711640000; b: 718140000; c: 715030000; d: 286250000; e: 285880000; f: 285920000; g: 286530000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OSPRay

Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time

OpenBenchmarking.org - Items Per Second, More Is Better - OSPRay 2.12 - a: 16.34680; b: 16.43650; c: 16.53500; d: 6.58745; e: 6.58270; f: 6.59563; g: 6.60085

SPECFEM3D

Model: Homogeneous Halfspace

OpenBenchmarking.org - Seconds, Fewer Is Better - SPECFEM3D 4.0 - a: 15.11; b: 14.46; c: 14.81; d: 35.57; e: 35.03; f: 35.54; g: 35.38. 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 2873.24; b: 2880.58; c: 2881.14; d: 1175.67; e: 1174.60; f: 1180.85; g: 1175.58. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Embree

Binary: Pathtracer - Model: Asian Dragon

OpenBenchmarking.org - Frames Per Second, More Is Better - Embree 4.1 - a: 60.14 (min 58.97 / max 62); b: 59.91 (58.66 / 61.96); c: 59.79 (58.46 / 62.03); d: 24.69 (24.62 / 24.84); e: 24.73 (24.67 / 24.86); f: 24.70 (24.63 / 24.84); g: 24.82 (24.74 / 25)

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

OpenBenchmarking.org - Frames Per Second, More Is Better - Embree 4.1 - a: 53.57 (min 52.17 / max 55.38); b: 53.81 (52.72 / 55.86); c: 53.69 (52.63 / 55.24); d: 22.26 (22.18 / 22.42); e: 22.16 (22.08 / 22.35); f: 22.15 (22.07 / 22.32); g: 22.19 (22.12 / 22.33)

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 2454.09; b: 2450.26; c: 2455.51; d: 1036.99; e: 1028.64; f: 1041.87; g: 1031.60. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

OpenBenchmarking.org - Frames Per Second, More Is Better - Embree 4.1 - a: 56.49 (min 55.29 / max 58.38); b: 56.69 (55.42 / 58.97); c: 56.93 (55.56 / 59.67); d: 23.87 (23.78 / 24.08); e: 23.94 (23.84 / 24.18); f: 23.94 (23.84 / 24.16); g: 23.88 (23.79 / 24.08)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

OpenBenchmarking.org - Frames Per Second, More Is Better - Embree 4.1 - a: 67.34 (min 65.61 / max 70.54); b: 67.20 (65.48 / 70.41); c: 67.50 (65.64 / 71.17); d: 28.36 (28.26 / 28.59); e: 28.31 (28.21 / 28.56); f: 28.32 (28.23 / 28.55); g: 28.48 (28.37 / 28.69)

SPECFEM3D

Model: Water-layered Halfspace

OpenBenchmarking.org - Seconds, Fewer Is Better - SPECFEM3D 4.0 - a: 26.99; b: 29.46; c: 27.06; d: 62.44; e: 62.33; f: 61.28; g: 62.81. 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 5882.91; b: 5836.27; c: 5840.53; d: 2564.78; e: 2562.54; f: 2539.97; g: 2557.66. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 842.91; b: 854.51; c: 849.30; d: 370.57; e: 373.64; f: 369.26; g: 372.26. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

SPECFEM3D

Model: Tomographic Model

OpenBenchmarking.org - Seconds, Fewer Is Better - SPECFEM3D 4.0 - a: 12.31; b: 12.10; c: 12.04; d: 27.33; e: 27.46; f: 26.97; g: 27.75. 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

Liquid-DSP

Threads: 96 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.org - samples/s, More Is Better - Liquid-DSP 1.6 - a: 2559800000; b: 2571100000; c: 2564900000; d: 1120800000; e: 1117800000; f: 1120500000; g: 1118200000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2023.1 - a: 748.44; b: 750.49; c: 757.38; d: 344.67; e: 342.81; f: 343.49; g: 341.36. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: File Status - Threads: 50 - Files: 100000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - a: 529101; b: 862069; c: 657895; d: 632911; e: 389105; f: 709220; g: 561798

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.org - samples/s, More Is Better - Liquid-DSP 1.6 - a: 622560000; b: 610950000; c: 622630000; d: 282920000; e: 281830000; f: 283030000; g: 281730000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 32

OpenBenchmarking.org - samples/s, More Is Better - Liquid-DSP 1.6 - a: 2207700000; b: 2212100000; c: 2206800000; d: 1059500000; e: 1057500000; f: 1057100000; g: 1056200000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Timed Linux Kernel Compilation

Build: defconfig

OpenBenchmarking.org - Seconds, Fewer Is Better - Timed Linux Kernel Compilation 6.1 - a: 27.35; b: 27.24; c: 27.41; d: 55.17; e: 55.09; f: 55.15; g: 55.17

Apache Hadoop

Operation: File Status - Threads: 100 - Files: 1000000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - values for a-g (run together without separators in the export): 1886792161970189393960060123562719646372049180

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2023.1 - a: 393.60 (min 363.29 / max 431.61); b: 393.23 (360.87 / 433.13); c: 393.37 (362.57 / 433.51); d: 761.59 (738.34 / 772.36); e: 761.16 (741.99 / 776.56); f: 760.57 (741.4 / 770.88); g: 759.92 (737.63 / 771.07). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Remhos

Test: Sample Remap Example

OpenBenchmarking.org - Seconds, Fewer Is Better - Remhos 1.0 - a: 16.35; b: 16.79; c: 16.24; d: 30.76; e: 30.85; f: 30.73; g: 30.75. 1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2023.1 - a: 213.94 (min 201.64 / max 242.71); b: 213.62 (197.2 / 235.23); c: 213.79 (197.29 / 236.32); d: 398.52 (382.1 / 404.98); e: 398.91 (386.2 / 407.29); f: 399.24 (387.9 / 408.93); g: 398.13 (379.09 / 404.71). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.org - samples/s, More Is Better - Liquid-DSP 1.6 - a: 1994400000; b: 2001900000; c: 2010300000; d: 1093300000; e: 1095400000; f: 1094600000; g: 1099300000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Apache Hadoop

Operation: Open - Threads: 100 - Files: 100000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - a: 420168; b: 404858; c: 403226; d: 529101; e: 294985; f: 523560; g: 460829

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2023.1 - a: 42.24 (min 36.59 / max 61.56); b: 42.09 (37.13 / 58.71); c: 42.19 (36.21 / 65.64); d: 74.81 (66.88 / 80.7); e: 74.54 (65.97 / 82.9); f: 74.87 (66.72 / 80.96); g: 74.58 (67.63 / 78.73). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2023.1 - a: 42.44 (min 36.14 / max 61.98); b: 42.20 (36.84 / 61.97); c: 42.43 (36.31 / 62.36); d: 74.71 (66.12 / 81.09); e: 74.50 (66.5 / 80.32); f: 74.43 (65.68 / 83.49); g: 74.71 (66.29 / 79.68). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2023.1 - a: 37.80 (min 33.35 / max 56.45); b: 37.79 (32.97 / 53.7); c: 37.79 (33.29 / 54.88); d: 64.41 (37.44 / 73.04); e: 64.68 (38.02 / 72.52); f: 64.31 (50.85 / 70.77); g: 64.77 (55.8 / 69.46). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2023.1 - a: 5.89 (min 4.67 / max 18.4); b: 5.91 (4.84 / 12.9); c: 5.90 (4.83 / 13.4); d: 10.01 (5.7 / 19.52); e: 10.06 (5.29 / 19.07); f: 10.09 (5.4 / 19.17); g: 10.06 (5.2 / 19.38). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: Create - Threads: 100 - Files: 100000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - a: 40733; b: 37425; c: 35075; d: 57971; e: 58824; f: 59382; g: 58928

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2023.1 - a: 4.17 (min 3.39 / max 10.07); b: 4.16 (3.42 / 11.2); c: 4.16 (3.43 / 10.26); d: 6.79 (3.8 / 15.48); e: 6.80 (4.04 / 15.37); f: 6.76 (4.04 / 15.47); g: 6.79 (3.79 / 15.41). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: Create - Threads: 100 - Files: 1000000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - a: 46145; b: 44437; c: 44001; d: 71296; e: 70057; f: 70537; g: 70922

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2023.1 - a: 4.88 (min 3.95 / max 16.05); b: 4.89 (3.93 / 13.44); c: 4.88 (3.9 / 14.94); d: 7.70 (5.51 / 16.06); e: 7.77 (5.42 / 16.35); f: 7.67 (5.32 / 16.6); g: 7.74 (6.06 / 12.66). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: File Status - Threads: 100 - Files: 100000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - a: 515464; b: 458716; c: 729927; d: 591716; e: 613497; f: 478469; g: 487805

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.org - samples/s, More Is Better - Liquid-DSP 1.6 - a: 425810000; b: 429620000; c: 424400000; d: 273760000; e: 273480000; f: 273390000; g: 274070000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2023.1 - a: 2.03 (min 1.66 / max 7.51); b: 2.05 (1.6 / 7); c: 2.05 (1.62 / 6.96); d: 3.11 (1.94 / 11.57); e: 3.11 (1.93 / 9.72); f: 3.14 (1.93 / 11.65); g: 3.12 (1.88 / 11.92). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2023.1 - a: 14.23 (min 11.51 / max 25.86); b: 14.03 (11.59 / 26.04); c: 14.12 (11.51 / 26.04); d: 21.57 (19.5 / 24.76); e: 21.40 (19.07 / 25.3); f: 21.65 (19.48 / 24.27); g: 21.47 (17.62 / 28.13). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

TiDB Community Server

Test: oltp_read_write - Threads: 128

OpenBenchmarking.org - Queries Per Second, More Is Better - TiDB Community Server 7.3 - a: 85757; b: 89099; d: 59727; e: 60145; f: 60310; g: 59944

TiDB Community Server

Test: oltp_read_write - Threads: 64

OpenBenchmarking.org - Queries Per Second, More Is Better - TiDB Community Server 7.3 - a: 79090; b: 80183; c: 78469; d: 55334; e: 53893; f: 54956; g: 55301

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2023.1 - a: 16.02 (min 12.5 / max 33.94); b: 15.98 (12.74 / 33.34); c: 15.83 (12.38 / 32.97); d: 23.20 (15.1 / 31.6); e: 23.32 (19.49 / 30.99); f: 23.28 (15.73 / 30.77); g: 23.42 (20.46 / 32.43). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Apache Hadoop

Operation: Create - Threads: 50 - Files: 100000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - a: 43649; b: 41288; c: 43937; d: 58617; e: 58617; f: 58343; g: 60680

Apache Hadoop

Operation: Open - Threads: 50 - Files: 100000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - a: 460829; b: 469484; c: 401606; d: 578035; e: 552486; f: 578035; g: 546448

Apache Hadoop

Operation: Delete - Threads: 100 - Files: 100000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - values for a-g (run together without separators in the export): 8756690827734751057089803999404102564

OSPRay

Benchmark: particle_volume/pathtracer/real_time

OpenBenchmarking.org - Items Per Second, More Is Better - OSPRay 2.12 - a: 215.10; b: 214.07; c: 214.14; d: 151.91; e: 151.51; f: 151.78; g: 151.68

Apache Hadoop

Operation: Delete - Threads: 50 - Files: 100000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - values for a-g (run together without separators in the export): 91075738019058010101010060496993103950

Apache Hadoop

Operation: Create - Threads: 50 - Files: 1000000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - a: 53665; b: 52119; c: 52260; d: 72134; e: 70897; f: 69920; g: 72706

Apache Cassandra

Test: Writes

OpenBenchmarking.org - Op/s, More Is Better - Apache Cassandra 4.1.3 - a: 248095; b: 256661; c: 270480; d: 197866; e: 195798; f: 196287; g: 197092

TiDB Community Server

Test: oltp_point_select - Threads: 1

OpenBenchmarking.org - Queries Per Second, More Is Better - TiDB Community Server 7.3 - a: 4331; b: 4405; c: 4471; d: 5898; e: 5976; f: 5954

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 1.7 - a: 90.81; b: 91.32; c: 90.42; d: 66.99; e: 67.72; f: 67.39; g: 67.81. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

TiDB Community Server

Test: oltp_read_write - Threads: 32

OpenBenchmarking.org - Queries Per Second, More Is Better - TiDB Community Server 7.3 - a: 58974; b: 61520; c: 59630; d: 46977; e: 46737; f: 47141; g: 46993

Apache Hadoop

Operation: Delete - Threads: 100 - Files: 1000000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - values for a-g (run together without separators in the export): 901148671597031112613113225110803113895

TiDB Community Server

Test: oltp_update_non_index - Threads: 1

OpenBenchmarking.org - Queries Per Second, More Is Better - TiDB Community Server 7.3 - a: 1328; b: 1312; c: 1381; d: 1693; e: 1708; f: 1697; g: 1705

TiDB Community Server

Test: oltp_read_write - Threads: 1

OpenBenchmarking.org - Queries Per Second, More Is Better - TiDB Community Server 7.3 - a: 2540; b: 2510; c: 2485; e: 3209; f: 3218; g: 3195

Apache Hadoop

Operation: Rename - Threads: 100 - Files: 1000000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - a: 73078; b: 72129; c: 66827; d: 81208; e: 84360; f: 85815; g: 85763

TiDB Community Server

Test: oltp_update_non_index - Threads: 128

OpenBenchmarking.org - Queries Per Second, More Is Better - TiDB Community Server 7.3 - a: 51105; c: 52865; e: 42138; f: 41424; g: 41695

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 1.7 - a: 5.203; b: 5.149; c: 5.049; d: 4.107; e: 4.114; f: 4.138; g: 4.143. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Hadoop

Operation: Delete - Threads: 50 - Files: 1000000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - values for a-g (run together without separators in the export): 989329731490147111012113327111198110828

TiDB Community Server

Test: oltp_update_index - Threads: 1

OpenBenchmarking.org - Queries Per Second, More Is Better - TiDB Community Server 7.3 - a: 1212; c: 1189; d: 1479; e: 1490; f: 1483; g: 1481

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 1.7 - a: 422.99; b: 427.69; c: 431.90; d: 526.22; e: 525.17; f: 521.52; g: 528.53. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Hadoop

Operation: Rename - Threads: 100 - Files: 100000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - a: 75529; b: 69348; c: 67159; d: 82102; e: 83822; f: 79491; g: 80386

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 512

OpenBenchmarking.org - samples/s, More Is Better - Liquid-DSP 1.6 - a: 27901000; b: 27736000; c: 28227000; d: 24627000; e: 25207000; f: 25199000; g: 22727000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_point_select - Threads: 128

OpenBenchmarking.org - Queries Per Second, More Is Better - TiDB Community Server 7.3 - a: 159242; b: 159728; c: 149962; d: 129492; e: 129904; f: 130389

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 57

OpenBenchmarking.org - samples/s, More Is Better - Liquid-DSP 1.6 - a: 1192100000; b: 1214200000; c: 1254800000; d: 1035000000; e: 1032000000; f: 1024600000; g: 1033400000. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_update_non_index - Threads: 64

OpenBenchmarking.org - Queries Per Second, More Is Better - TiDB Community Server 7.3 - a: 41281; b: 39759; c: 39106; d: 34224; e: 33881; f: 34470; g: 34107

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 1.7 - a: 141.22; b: 138.34; c: 143.55; d: 118.95; e: 119.31; f: 118.49; g: 118.48. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 1.7 - a: 510.36; b: 542.61; c: 516.91; d: 604.99; e: 597.01; f: 585.37; g: 586.75. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Hadoop

Operation: Rename - Threads: 50 - Files: 1000000

OpenBenchmarking.org - Ops per sec, More Is Better - Apache Hadoop 3.3.6 - a: 73239; b: 71679; c: 74638; d: 83921; e: 84041; f: 82501; g: 84810

nekRS

Input: TurboPipe Periodic

nekRS 23.0 - flops/rank, More Is Better
a: 6767710000 | b: 6757360000 | c: 6754170000 | d: 7934570000 | e: 7931010000 | f: 7955790000 | g: 7964910000
(CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.7 - Frames Per Second, More Is Better
a: 12.48 | b: 12.59 | c: 12.62 | d: 10.91 | e: 10.98 | f: 10.74 | g: 11.02
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache Hadoop

Operation: Rename - Threads: 50 - Files: 100000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better
a: 70522 | b: 73046 | c: 77101 | d: 82372 | e: 82237 | f: 81633 | g: 82237

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6 - samples/s, More Is Better
a: 13909000 | b: 14021000 | c: 14225000 | d: 12683000 | e: 12366000 | f: 12681000 | g: 12256000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_update_index - Threads: 64

TiDB Community Server 7.3 - Queries Per Second, More Is Better
b: 24371 | c: 23324 | d: 21108 | e: 21271 | f: 21067

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 35.63 | b: 35.64 | c: 35.68 | d: 31.05 | e: 30.99 | f: 31.03 | g: 31.06

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better
a: 77181000 | b: 77019000 | c: 76924000 | d: 67054000 | e: 68846000 | f: 68861000 | g: 68678000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 1.03749 (MIN: 0.92) | e: 1.14432 (MIN: 1.07) | f: 1.00136 (MIN: 0.92) | g: 1.12723 (MIN: 0.93)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better
a: 1183500000 | b: 1190300000 | c: 1184800000 | d: 1047100000 | e: 1046600000 | f: 1041900000 | g: 1047100000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6 - samples/s, More Is Better
a: 109870000 | b: 108080000 | c: 109140000 | d: 99594000 | e: 97005000 | f: 99441000 | g: 100170000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better
a: 117490000 | b: 114010000 | c: 118550000 | d: 105650000 | e: 105480000 | f: 105740000 | g: 104800000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_point_select - Threads: 64

TiDB Community Server 7.3 - Queries Per Second, More Is Better
a: 127567 | b: 130802 | d: 115675 | e: 118657 | f: 119092 | g: 118549

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better
a: 59401000 | b: 59296000 | c: 57519000 | d: 52665000 | e: 52827000 | f: 52879000 | g: 52854000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

nekRS

Input: Kershaw

nekRS 23.0 - flops/rank, More Is Better
a: 11106900000 | b: 11240300000 | c: 10826700000 | d: 10318900000 | e: 10264000000 | f: 9976450000 | g: 10500600000
(CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6 - samples/s, More Is Better
a: 52911000 | b: 55588000 | c: 55165000 | d: 50258000 | e: 50380000 | f: 49977000 | g: 49556000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better
a: 39499000 | b: 39486000 | c: 39453000 | d: 35228000 | e: 35315000 | f: 35271000 | g: 35236000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

TiDB Community Server

Test: oltp_update_index - Threads: 128

TiDB Community Server 7.3 - Queries Per Second, More Is Better
a: 27087 | b: 27464 | c: 26546 | e: 24611 | f: 24830 | g: 24574

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6 - samples/s, More Is Better
a: 216080000 | b: 216150000 | c: 214910000 | d: 193850000 | e: 196040000 | f: 194500000 | g: 194670000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better
a: 307540000 | b: 305110000 | c: 306760000 | d: 278030000 | e: 277780000 | f: 276390000 | g: 277410000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better
a: 594230000 | b: 602470000 | c: 603650000 | d: 545360000 | e: 545140000 | f: 545020000 | g: 543050000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better
a: 153850000 | b: 153690000 | c: 153670000 | d: 138600000 | e: 138620000 | f: 138580000 | g: 138460000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2023.1 - ms, Fewer Is Better
a: 0.54 (MIN: 0.45 / MAX: 7.64) | b: 0.54 (MIN: 0.45 / MAX: 7.81) | c: 0.54 (MIN: 0.45 / MAX: 5.03) | d: 0.49 (MIN: 0.3 / MAX: 9.28) | e: 0.49 (MIN: 0.3 / MAX: 9.07) | f: 0.49 (MIN: 0.3 / MAX: 8.2) | g: 0.49 (MIN: 0.3 / MAX: 8.84)
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

TiDB Community Server

Test: oltp_update_non_index - Threads: 32

TiDB Community Server 7.3 - Queries Per Second, More Is Better
a: 28735 | b: 28914 | d: 26273 | e: 26285 | f: 26695

TiDB Community Server

Test: oltp_point_select - Threads: 32

TiDB Community Server 7.3 - Queries Per Second, More Is Better
a: 104627 | b: 106180 | d: 98149 | e: 96907 | f: 97368 | g: 96840

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenVINO 2023.1 - ms, Fewer Is Better
a: 38.50 (MIN: 36.77 / MAX: 44.23) | b: 38.66 (MIN: 37.22 / MAX: 43.52) | c: 38.75 (MIN: 37.46 / MAX: 43.52) | d: 40.40 (MIN: 26.93 / MAX: 74.83) | e: 36.98 (MIN: 32.02 / MAX: 44.78) | f: 37.01 (MIN: 32.25 / MAX: 43.6) | g: 36.98 (MIN: 32.61 / MAX: 41.91)
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 16.91 | b: 17.07 | c: 16.89 | d: 15.72 | e: 15.62 | f: 15.72 | g: 15.69

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenVINO 2023.1 - ms, Fewer Is Better
a: 4.86 (MIN: 4.23 / MAX: 12.81) | b: 4.85 (MIN: 4.25 / MAX: 12.86) | c: 4.86 (MIN: 4.34 / MAX: 12.27) | d: 4.51 (MIN: 2.98 / MAX: 13.05) | e: 4.51 (MIN: 2.96 / MAX: 16.06) | f: 4.50 (MIN: 2.98 / MAX: 13.86) | g: 4.52 (MIN: 2.77 / MAX: 13.57)
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 4.6508 | b: 4.6476 | c: 4.6348 | d: 4.9960 | e: 4.9859 | f: 4.9877 | g: 4.9787

TiDB Community Server

Test: oltp_point_select - Threads: 16

TiDB Community Server 7.3 - Queries Per Second, More Is Better
b: 67515 | c: 65406 | e: 70250 | f: 70105 | g: 69923

TiDB Community Server

Test: oltp_update_index - Threads: 32

TiDB Community Server 7.3 - Queries Per Second, More Is Better
a: 18361 | b: 17817 | c: 17565 | d: 17612 | e: 17117 | g: 17135

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 347.66 | b: 347.22 | c: 347.37 | d: 325.88 | e: 325.74 | f: 324.96 | g: 325.51

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 0.603950 (MIN: 0.53) | e: 0.575794 (MIN: 0.52) | f: 0.600834 (MIN: 0.53) | g: 0.612320 (MIN: 0.53)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

Test: oltp_read_write - Threads: 16

TiDB Community Server 7.3 - Queries Per Second, More Is Better
a: 38331 | b: 36950 | c: 37368 | d: 36480 | e: 36784 | f: 36125 | g: 36088

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 118.75 | b: 118.95 | c: 118.78 | d: 112.25 | e: 112.06 | f: 112.41 | g: 112.48

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 1.25758 (MIN: 1.21) | e: 1.28043 (MIN: 1.24) | f: 1.20653 (MIN: 1.18) | g: 1.27918 (MIN: 1.24)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2023.1 - ms, Fewer Is Better
a: 16.26 (MIN: 14.71 / MAX: 28.14) | b: 16.02 (MIN: 14.41 / MAX: 30.55) | c: 16.02 (MIN: 14.63 / MAX: 33.79) | d: 15.36 (MIN: 8.08 / MAX: 24.34) | e: 15.36 (MIN: 8.02 / MAX: 23.81) | f: 15.38 (MIN: 7.99 / MAX: 24) | g: 15.37 (MIN: 7.99 / MAX: 23.98)
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better
a: 369430000 | b: 366930000 | c: 366990000 | d: 363310000 | e: 357990000 | f: 350450000 | g: 357810000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 150.59 | b: 150.61 | c: 145.26 | d: 143.76 | e: 144.10 | f: 143.69 | g: 144.11

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 485.72 | b: 487.36 | c: 507.48 | d: 493.60 | e: 494.26 | f: 494.22 | g: 495.60

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2023.1 - ms, Fewer Is Better
a: 8.28 (MIN: 7.44 / MAX: 23.35) | b: 8.27 (MIN: 7.37 / MAX: 25.18) | c: 8.24 (MIN: 7.62 / MAX: 23.32) | d: 7.93 (MIN: 4.2 / MAX: 16.92) | e: 7.96 (MIN: 4.19 / MAX: 16.59) | f: 7.97 (MIN: 4.37 / MAX: 16.86) | g: 7.96 (MIN: 4.19 / MAX: 14.2)
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenVINO 2023.1 - ms, Fewer Is Better
a: 30.72 (MIN: 29.51 / MAX: 35.07) | b: 31.00 (MIN: 29.59 / MAX: 36.33) | c: 30.89 (MIN: 29.48 / MAX: 36.29) | d: 30.02 (MIN: 18.78 / MAX: 38.72) | e: 30.10 (MIN: 22.61 / MAX: 39.15) | f: 29.95 (MIN: 19.01 / MAX: 38.08) | g: 29.72 (MIN: 19.46 / MAX: 38.99)
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better
a: 196220000 | b: 196590000 | c: 194510000 | d: 188930000 | e: 191230000 | f: 189880000 | g: 190750000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.7 - Frames Per Second, More Is Better
a: 163.46 | b: 166.38 | c: 163.06 | d: 163.19 | e: 162.61 | f: 161.85 | g: 160.32
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

TiDB Community Server

Test: oltp_update_non_index - Threads: 16

TiDB Community Server 7.3 - Queries Per Second, More Is Better
a: 18095 | b: 18068 | d: 18563 | e: 18557 | g: 18735

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better
a: 699740000 | b: 692760000 | c: 674930000 | d: 689150000 | e: 692920000 | f: 693340000 | g: 682070000
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.7 - Frames Per Second, More Is Better
a: 163.01 | b: 166.69 | c: 161.50 | d: 161.85 | e: 162.05 | f: 160.80 | g: 161.32
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 1.02875 (MIN: 0.96) | e: 1.05425 (MIN: 0.97) | f: 1.06144 (MIN: 0.98) | g: 1.04567 (MIN: 0.98)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2023.1 - ms, Fewer Is Better
a: 0.34 (MIN: 0.29 / MAX: 7.33) | b: 0.34 (MIN: 0.29 / MAX: 10.87) | c: 0.34 (MIN: 0.29 / MAX: 7.09) | d: 0.35 (MIN: 0.23 / MAX: 9.09) | e: 0.35 (MIN: 0.23 / MAX: 8.84) | f: 0.35 (MIN: 0.23 / MAX: 9.15) | g: 0.35 (MIN: 0.23 / MAX: 8.63)
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 2.49408 (MIN: 2.3) | e: 2.56522 (MIN: 2.32) | f: 2.49714 (MIN: 2.26) | g: 2.51441 (MIN: 2.3)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Kripke

Kripke 1.2.6 - Throughput FoM, More Is Better
d: 240994500 | e: 236243900 | f: 236591000 | g: 237175700
(CXX) g++ options: -O3 -fopenmp -ldl

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400

easyWave r34 - Seconds, Fewer Is Better
d: 98.98 | e: 99.42 | f: 97.99 | g: 97.53
(CXX) g++ options: -O3 -fopenmp

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 74.32 | b: 74.56 | c: 74.50 | d: 73.31 | e: 73.26 | f: 73.22 | g: 73.19

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 849.16 (MIN: 806.44) | e: 851.66 (MIN: 809.45) | f: 849.34 (MIN: 805.8) | g: 837.60 (MIN: 796.61)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 838.52 (MIN: 796.3) | e: 849.71 (MIN: 805.98) | f: 851.49 (MIN: 807.97) | g: 848.03 (MIN: 807.34)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 1.55824 (MIN: 1.51) | e: 1.54911 (MIN: 1.51) | f: 1.57282 (MIN: 1.53) | g: 1.55118 (MIN: 1.52)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 0.652259 (MIN: 0.57) | e: 0.657610 (MIN: 0.57) | f: 0.653182 (MIN: 0.57) | g: 0.647700 (MIN: 0.57)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.3 - Frames Per Second, More Is Better
d: 23.35 (MIN: 23.26 / MAX: 23.57) | e: 23.53 (MIN: 23.43 / MAX: 23.73) | f: 23.50 (MIN: 23.4 / MAX: 23.74) | g: 23.71 (MIN: 23.61 / MAX: 23.93)

TiDB Community Server

Test: oltp_update_index - Threads: 16

TiDB Community Server 7.3 - Queries Per Second, More Is Better
a: 12558 | c: 12681 | d: 12622 | e: 12567 | f: 12692 | g: 12627

Embree

Binary: Pathtracer - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better
d: 21.89 (MIN: 21.74 / MAX: 22.23) | e: 21.99 (MIN: 21.84 / MAX: 22.32) | f: 21.77 (MIN: 21.63 / MAX: 22.18) | g: 21.83 (MIN: 21.69 / MAX: 22.17)

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 0.628236 (MIN: 0.6) | e: 0.633975 (MIN: 0.6) | f: 0.630325 (MIN: 0.6) | g: 0.629108 (MIN: 0.6)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 0.847805 (MIN: 0.83) | e: 0.844434 (MIN: 0.83) | f: 0.850691 (MIN: 0.83) | g: 0.843492 (MIN: 0.83)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 109.80 | b: 109.23 | c: 109.58 | d: 110.11 | e: 109.97 | f: 110.00 | g: 109.90

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 847.38 (MIN: 806.33) | e: 841.08 (MIN: 798.46) | f: 845.31 (MIN: 803.78) | g: 847.42 (MIN: 806.72)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 3.81576 (MIN: 3.26) | e: 3.84421 (MIN: 3.27) | f: 3.81823 (MIN: 3.25) | g: 3.82381 (MIN: 3.29)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 49.37 | b: 49.11 | c: 49.02 | d: 49.09 | e: 49.01 | f: 49.07 | g: 49.03

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 33.34 | b: 33.38 | c: 33.46 | d: 33.22 | e: 33.26 | f: 33.28 | g: 33.37

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 2.13332 (MIN: 2) | e: 2.12570 (MIN: 2.01) | f: 2.13062 (MIN: 1.97) | g: 2.11813 (MIN: 1.99)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 49.01 | b: 48.97 | c: 49.21 | d: 48.87 | e: 49.06 | f: 49.07 | g: 48.98

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 1642.51 (MIN: 1593.16) | e: 1639.36 (MIN: 1581.93) | f: 1636.44 (MIN: 1585.81) | g: 1631.99 (MIN: 1581.62)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better
d: 27.74 (MIN: 27.64 / MAX: 27.98) | e: 27.83 (MIN: 27.72 / MAX: 28.1) | f: 27.83 (MIN: 27.73 / MAX: 28.13) | g: 27.91 (MIN: 27.81 / MAX: 28.17)

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240

easyWave r34 - Seconds, Fewer Is Better
d: 1.657 | e: 1.654 | f: 1.657 | g: 1.648
(CXX) g++ options: -O3 -fopenmp

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better
d: 24.85 (MIN: 24.78 / MAX: 25) | e: 24.83 (MIN: 24.76 / MAX: 24.96) | f: 24.89 (MIN: 24.81 / MAX: 25.06) | g: 24.96 (MIN: 24.9 / MAX: 25.13)

OpenVKL

Benchmark: vklBenchmarkCPU Scalar

OpenVKL 2.0.0 - Items / Sec, More Is Better
d: 191 (MIN: 13 / MAX: 3471) | e: 190 (MIN: 13 / MAX: 3484) | f: 191 (MIN: 13 / MAX: 3484) | g: 191 (MIN: 13 / MAX: 3483)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 605.76 | b: 606.67 | c: 605.88 | d: 606.58 | e: 606.76 | f: 606.79 | g: 608.72

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 605.04 | b: 605.73 | c: 605.92 | d: 606.10 | e: 607.91 | f: 607.82 | g: 607.16

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 1.33789 (MIN: 1.31) | e: 1.33861 (MIN: 1.31) | f: 1.34183 (MIN: 1.31) | g: 1.33564 (MIN: 1.31)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better
d: 22.39 (MIN: 22.2 / MAX: 22.85) | e: 22.34 (MIN: 22.15 / MAX: 22.75) | f: 22.44 (MIN: 22.25 / MAX: 22.78) | g: 22.42 (MIN: 22.22 / MAX: 22.85)

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.3 - Frames Per Second, More Is Better
d: 22.35 (MIN: 22.28 / MAX: 22.5) | e: 22.29 (MIN: 22.22 / MAX: 22.46) | f: 22.27 (MIN: 22.2 / MAX: 22.44) | g: 22.26 (MIN: 22.18 / MAX: 22.43)

OpenVKL

Benchmark: vklBenchmarkCPU ISPC

OpenVKL 2.0.0 - Items / Sec, More Is Better
d: 487 (MIN: 36 / MAX: 6949) | e: 487 (MIN: 36 / MAX: 6956) | f: 488 (MIN: 36 / MAX: 6952) | g: 489 (MIN: 36 / MAX: 6969)

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200

easyWave r34 - Seconds, Fewer Is Better
d: 38.11 | e: 38.07 | f: 38.02 | g: 37.95
(CXX) g++ options: -O3 -fopenmp

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 1641.92 (MIN: 1584.81) | e: 1641.00 (MIN: 1595.55) | f: 1636.76 (MIN: 1585.98) | g: 1637.37 (MIN: 1584.58)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 3.05991 (MIN: 2.96) | e: 3.06370 (MIN: 2.97) | f: 3.05674 (MIN: 2.97) | g: 3.05458 (MIN: 2.97)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 1.91374 (MIN: 1.88) | e: 1.91781 (MIN: 1.88) | f: 1.91422 (MIN: 1.88) | g: 1.91274 (MIN: 1.88)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better
a: 111.01 | b: 110.92 | c: 111.03 | d: 111.11 | e: 111.09 | f: 110.89 | g: 110.98

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 3.37782 (MIN: 3.33) | e: 3.38436 (MIN: 3.33) | f: 3.37956 (MIN: 3.33) | g: 3.38156 (MIN: 3.33)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 - ms, Fewer Is Better
d: 1643.99 (MIN: 1588.03) | e: 1643.97 (MIN: 1590.89) | f: 1642.35 (MIN: 1586.17) | g: 1641.40 (MIN: 1589.91)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Intel Open Image Denoise

Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only

Intel Open Image Denoise 2.1 - Images / Sec, More Is Better
d: 0.34 | e: 0.34 | f: 0.34 | g: 0.34

Intel Open Image Denoise

Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only

Intel Open Image Denoise 2.1 - Images / Sec, More Is Better
d: 0.72 | e: 0.72 | f: 0.72 | g: 0.72

Intel Open Image Denoise

Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only

Intel Open Image Denoise 2.1 - Images / Sec, More Is Better
d: 0.72 | e: 0.72 | f: 0.72 | g: 0.72
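
The three Open Image Denoise runs above all use the library's CPU device at 3840x2160 (4096x4096 for the RTLightmap case). For orientation, here is a minimal denoise pass through OIDN's C API; this is a rough sketch under stated assumptions: it uses the generic "RT" filter exercised by the RT.* runs (the lightmap run presumably selects the "RTLightmap" filter instead), the image buffers are dummy allocations, and the albedo/normal auxiliary inputs implied by the *_alb_nrm run names are omitted for brevity (they would be additional oidnSetSharedFilterImage calls named "albedo" and "normal").

/* Hypothetical single-image denoise pass; the benchmark's own harness differs. */
#include <OpenImageDenoise/oidn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const size_t width = 3840, height = 2160;
    float *color  = malloc(width * height * 3 * sizeof(float)); /* noisy beauty image */
    float *output = malloc(width * height * 3 * sizeof(float)); /* denoised result    */

    OIDNDevice device = oidnNewDevice(OIDN_DEVICE_TYPE_CPU);    /* CPU-only, as above */
    oidnCommitDevice(device);

    OIDNFilter filter = oidnNewFilter(device, "RT");            /* generic ray-tracing denoiser */
    oidnSetSharedFilterImage(filter, "color",  color,  OIDN_FORMAT_FLOAT3,
                             width, height, 0, 0, 0);
    oidnSetSharedFilterImage(filter, "output", output, OIDN_FORMAT_FLOAT3,
                             width, height, 0, 0, 0);
    oidnCommitFilter(filter);
    oidnExecuteFilter(filter);

    const char *msg;
    if (oidnGetDeviceError(device, &msg) != OIDN_ERROR_NONE)
        fprintf(stderr, "OIDN error: %s\n", msg);

    oidnReleaseFilter(filter);
    oidnReleaseDevice(device);
    free(color);
    free(output);
    return 0;
}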


Phoronix Test Suite v10.8.4