extra tests2

Tests for a future article. AMD EPYC 9124 16-Core testing with a Supermicro H13SSW (1.1 BIOS) and astdrmfb on AlmaLinux 9.2 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2310228-NE-EXTRATEST37&grw&sro&rro.
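This result file can also be reproduced or compared against locally with the Phoronix Test Suite. A minimal sketch, assuming the phoronix-test-suite client is installed and using the OpenBenchmarking.org identifier above:

    phoronix-test-suite benchmark 2310228-NE-EXTRATEST37

This fetches the test selection referenced by the result file and offers to run the same tests on the local system so the new numbers can be merged into the comparison.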

System configurations (a-g):

Configurations a, b, c:
  Processor: 2 x AMD EPYC 9254 24-Core @ 2.90GHz (48 Cores / 96 Threads)
  Motherboard: Supermicro H13DSH (1.5 BIOS)
  Memory: 24 x 32 GB DDR5-4800MT/s Samsung M321R4GA3BB6-CQKET

Configurations d, e, f, g:
  Processor: AMD EPYC 9124 16-Core @ 3.00GHz (16 Cores / 32 Threads)
  Motherboard: Supermicro H13SSW (1.1 BIOS)
  Memory: 12 x 64 GB DDR5-4800MT/s HMCG94MEBRA123N

Common to all configurations:
  Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07
  Graphics: astdrmfb
  OS: AlmaLinux 9.2
  Kernel: 5.14.0-284.25.1.el9_2.x86_64 (x86_64)
  Compiler: GCC 11.3.1 20221121
  File-System: ext4
  Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Processor Details:
  a: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e
  b: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e
  c: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e
  d: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111
  e: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111
  f: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111
  g: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111
Java Details: OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)
Python Details: Python 3.9.16
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

[Consolidated results table for configurations a-g. It covers the full test selection: brl-cad, nekrs, specfem3d, remhos, deepsparse, tidb, onednn, openvino, kripke, build-linux-kernel (defconfig), svt-av1, blender, embree, oidn, openvkl, ospray, liquid-dsp, easywave, hadoop, and cassandra. Per-test results are presented individually below.]

BRL-CAD

VGR Performance Metric

BRL-CAD 7.36 — VGR Performance Metric, More Is Better
g: 295522; f: 295603; e: 296125; d: 298064; c: 762529; b: 768517; a: 772162
1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

nekRS

Input: TurboPipe Periodic

nekRS 23.0 — flops/rank, More Is Better
g: 7964910000; f: 7955790000; e: 7931010000; d: 7934570000; c: 6754170000; b: 6757360000; a: 6767710000
1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

SPECFEM3D

Model: Water-layered Halfspace

SPECFEM3D 4.0 — Seconds, Fewer Is Better
g: 62.81; f: 61.28; e: 62.33; d: 62.44; c: 27.06; b: 29.46; a: 26.99
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

nekRS

Input: Kershaw

nekRS 23.0 — flops/rank, More Is Better
g: 10500600000; f: 9976450000; e: 10264000000; d: 10318900000; c: 10826700000; b: 11240300000; a: 11106900000
1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

SPECFEM3D

Model: Tomographic Model

SPECFEM3D 4.0 — Seconds, Fewer Is Better
g: 27.75; f: 26.97; e: 27.46; d: 27.33; c: 12.04; b: 12.10; a: 12.31
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

SPECFEM3D

Model: Homogeneous Halfspace

SPECFEM3D 4.0 — Seconds, Fewer Is Better
g: 35.38; f: 35.54; e: 35.03; d: 35.57; c: 14.81; b: 14.46; a: 15.11
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

SPECFEM3D

Model: Mount St. Helens

SPECFEM3D 4.0 — Seconds, Fewer Is Better
g: 27.70; f: 26.87; e: 26.80; d: 26.74; c: 11.33; b: 11.32; a: 11.02
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

SPECFEM3D

Model: Layered Halfspace

SPECFEM3D 4.0 — Seconds, Fewer Is Better
g: 69.96; f: 70.54; e: 70.19; d: 71.61; c: 27.49; b: 28.65; a: 26.89
1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

Remhos

Test: Sample Remap Example

Remhos 1.0 — Seconds, Fewer Is Better
g: 30.75; f: 30.73; e: 30.85; d: 30.76; c: 16.24; b: 16.79; a: 16.35
1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 13.07; f: 13.09; e: 12.94; d: 13.07; c: 39.45; b: 39.47; a: 39.50

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 607.16; f: 607.82; e: 607.91; d: 606.10; c: 605.92; b: 605.73; a: 605.04

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 509.14; f: 508.21; e: 511.41; d: 508.09; c: 1418.90; b: 1403.07; a: 1417.07

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 15.69; f: 15.72; e: 15.62; d: 15.72; c: 16.89; b: 17.07; a: 16.91

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 257.28; f: 257.50; e: 257.89; d: 257.27; c: 671.26; b: 672.37; a: 672.46

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 31.06; f: 31.03; e: 30.99; d: 31.05; c: 35.68; b: 35.64; a: 35.63

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 70.93; f: 71.04; e: 71.27; d: 71.14; c: 201.54; b: 201.25; a: 201.39

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 112.48; f: 112.41; e: 112.06; d: 112.25; c: 118.78; b: 118.95; a: 118.75

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 162.99; f: 162.93; e: 163.14; d: 162.85; c: 489.11; b: 488.13; a: 485.67

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 49.03; f: 49.07; e: 49.01; d: 49.09; c: 49.02; b: 49.11; a: 49.37

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 1602.52; f: 1600.53; e: 1599.15; d: 1599.21; c: 5153.66; b: 5138.83; a: 5137.01

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 4.9787; f: 4.9877; e: 4.9859; d: 4.9960; c: 4.6348; b: 4.6476; a: 4.6508

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 71.90; f: 71.94; e: 71.91; d: 71.92; c: 215.65; b: 215.93; a: 215.64

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 110.98; f: 110.89; e: 111.09; d: 111.11; c: 111.03; b: 110.92; a: 111.01

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 16.07; f: 16.13; e: 16.14; d: 16.16; c: 47.15; b: 49.17; a: 49.33

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 495.60; f: 494.22; e: 494.26; d: 493.60; c: 507.48; b: 487.36; a: 485.72

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 163.23; f: 162.90; e: 162.93; d: 163.56; c: 487.05; b: 489.45; a: 489.12

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 48.98; f: 49.07; e: 49.06; d: 48.87; c: 49.21; b: 48.97; a: 49.01

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 72.69; f: 72.57; e: 72.66; d: 72.46; c: 218.52; b: 219.53; a: 218.15

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 109.90; f: 110.00; e: 109.97; d: 110.11; c: 109.58; b: 109.23; a: 109.80

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 109.22; f: 109.09; e: 109.09; d: 108.91; c: 321.51; b: 321.18; a: 322.25

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 73.19; f: 73.22; e: 73.26; d: 73.31; c: 74.50; b: 74.56; a: 74.32

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 24.46; f: 24.52; e: 24.47; d: 24.48; c: 68.63; b: 68.66; a: 68.60

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 325.51; f: 324.96; e: 325.74; d: 325.88; c: 347.37; b: 347.22; a: 347.66

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 239.52; f: 240.16; e: 240.23; d: 240.55; c: 716.14; b: 717.97; a: 718.92

TiDB Community Server

Test: oltp_point_select - Threads: 1

TiDB Community Server 7.3 — Queries Per Second, More Is Better
f: 5954; e: 5976; d: 5898; c: 4471; b: 4405; a: 4331

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 33.37; f: 33.28; e: 33.26; d: 33.22; c: 33.46; b: 33.38; a: 33.34

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 55.43; f: 55.54; e: 55.46; d: 55.61; c: 164.61; b: 159.06; a: 158.92

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 144.11; f: 143.69; e: 144.10; d: 143.76; c: 145.26; b: 150.61; a: 150.59

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — items/sec, More Is Better
g: 13.06; f: 13.09; e: 13.12; d: 13.13; c: 39.42; b: 39.45; a: 39.44

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 — ms/batch, Fewer Is Better
g: 608.72; f: 606.79; e: 606.76; d: 606.58; c: 605.88; b: 606.67; a: 605.76

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 2.11813 (MIN: 1.99); f: 2.13062 (MIN: 1.97); e: 2.12570 (MIN: 2.01); d: 2.13332 (MIN: 2)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

Test: oltp_point_select - Threads: 128

TiDB Community Server 7.3 — Queries Per Second, More Is Better
f: 130389; e: 129904; d: 129492; c: 149962; b: 159728; a: 159242

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 1.55118 (MIN: 1.52); f: 1.57282 (MIN: 1.53); e: 1.54911 (MIN: 1.51); d: 1.55824 (MIN: 1.51)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

Test: oltp_update_non_index - Threads: 64

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 34107; f: 34470; e: 33881; d: 34224; c: 39106; b: 39759; a: 41281

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 1.33564 (MIN: 1.31); f: 1.34183 (MIN: 1.31); e: 1.33861 (MIN: 1.31); d: 1.33789 (MIN: 1.31)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

Test: oltp_update_non_index - Threads: 16

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 18735; e: 18557; d: 18563; b: 18068; a: 18095

TiDB Community Server

Test: oltp_read_write - Threads: 32

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 46993; f: 47141; e: 46737; d: 46977; c: 59630; b: 61520; a: 58974

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 3.82381 (MIN: 3.29); f: 3.81823 (MIN: 3.25); e: 3.84421 (MIN: 3.27); d: 3.81576 (MIN: 3.26)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

Test: oltp_update_index - Threads: 128

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 24574; f: 24830; e: 24611; c: 26546; b: 27464; a: 27087

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 0.629108 (MIN: 0.6); f: 0.630325 (MIN: 0.6); e: 0.633975 (MIN: 0.6); d: 0.628236 (MIN: 0.6)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 3.05458 (MIN: 2.97); f: 3.05674 (MIN: 2.97); e: 3.06370 (MIN: 2.97); d: 3.05991 (MIN: 2.96)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 3.38156 (MIN: 3.33); f: 3.37956 (MIN: 3.33); e: 3.38436 (MIN: 3.33); d: 3.37782 (MIN: 3.33)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 0.843492 (MIN: 0.83); f: 0.850691 (MIN: 0.83); e: 0.844434 (MIN: 0.83); d: 0.847805 (MIN: 0.83)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 1.91274 (MIN: 1.88); f: 1.91422 (MIN: 1.88); e: 1.91781 (MIN: 1.88); d: 1.91374 (MIN: 1.88)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 2.51441 (MIN: 2.3); f: 2.49714 (MIN: 2.26); e: 2.56522 (MIN: 2.32); d: 2.49408 (MIN: 2.3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 0.647700 (MIN: 0.57); f: 0.653182 (MIN: 0.57); e: 0.657610 (MIN: 0.57); d: 0.652259 (MIN: 0.57)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 1.12723 (MIN: 0.93); f: 1.00136 (MIN: 0.92); e: 1.14432 (MIN: 1.07); d: 1.03749 (MIN: 0.92)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

Test: oltp_read_write - Threads: 64

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 55301; f: 54956; e: 53893; d: 55334; c: 78469; b: 80183; a: 79090

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 1.27918 (MIN: 1.24); f: 1.20653 (MIN: 1.18); e: 1.28043 (MIN: 1.24); d: 1.25758 (MIN: 1.21)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 0.612320 (MIN: 0.53); f: 0.600834 (MIN: 0.53); e: 0.575794 (MIN: 0.52); d: 0.603950 (MIN: 0.53)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 1.04567 (MIN: 0.98); f: 1.06144 (MIN: 0.98); e: 1.05425 (MIN: 0.97); d: 1.02875 (MIN: 0.96)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 1637.37 (MIN: 1584.58); f: 1636.76 (MIN: 1585.98); e: 1641.00 (MIN: 1595.55); d: 1641.92 (MIN: 1584.81)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

Test: oltp_update_index - Threads: 32

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 17135; e: 17117; d: 17612; c: 17565; b: 17817; a: 18361

TiDB Community Server

Test: oltp_update_index - Threads: 64

TiDB Community Server 7.3 — Queries Per Second, More Is Better
f: 21067; e: 21271; d: 21108; c: 23324; b: 24371

TiDB Community Server

Test: oltp_point_select - Threads: 64

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 118549; f: 119092; e: 118657; d: 115675; b: 130802; a: 127567

TiDB Community Server

Test: oltp_update_index - Threads: 16

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 12627; f: 12692; e: 12567; d: 12622; c: 12681; a: 12558

TiDB Community Server

Test: oltp_point_select - Threads: 32

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 96840; f: 97368; e: 96907; d: 98149; b: 106180; a: 104627

TiDB Community Server

Test: oltp_point_select - Threads: 16

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 69923; f: 70105; e: 70250; c: 65406; b: 67515

TiDB Community Server

Test: oltp_read_write - Threads: 128

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 59944; f: 60310; e: 60145; d: 59727; b: 89099; a: 85757

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 1631.99 (MIN: 1581.62); f: 1636.44 (MIN: 1585.81); e: 1639.36 (MIN: 1581.93); d: 1642.51 (MIN: 1593.16)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 1641.40 (MIN: 1589.91); f: 1642.35 (MIN: 1586.17); e: 1643.97 (MIN: 1590.89); d: 1643.99 (MIN: 1588.03)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TiDB Community Server

Test: oltp_update_non_index - Threads: 1

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 1705; f: 1697; e: 1708; d: 1693; c: 1381; b: 1312; a: 1328

TiDB Community Server

Test: oltp_read_write - Threads: 1

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 3195; f: 3218; e: 3209; c: 2485; b: 2510; a: 2540

TiDB Community Server

Test: oltp_update_non_index - Threads: 32

TiDB Community Server 7.3 — Queries Per Second, More Is Better
f: 26695; e: 26285; d: 26273; b: 28914; a: 28735

TiDB Community Server

Test: oltp_read_write - Threads: 16

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 36088; f: 36125; e: 36784; d: 36480; c: 37368; b: 36950; a: 38331

TiDB Community Server

Test: oltp_update_non_index - Threads: 128

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 41695; f: 41424; e: 42138; c: 52865; a: 51105

TiDB Community Server

Test: oltp_update_index - Threads: 1

TiDB Community Server 7.3 — Queries Per Second, More Is Better
g: 1481; f: 1483; e: 1490; d: 1479; c: 1189; a: 1212

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 848.03 (MIN: 807.34); f: 851.49 (MIN: 807.97); e: 849.71 (MIN: 805.98); d: 838.52 (MIN: 796.3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 837.60 (MIN: 796.61); f: 849.34 (MIN: 805.8); e: 851.66 (MIN: 809.45); d: 849.16 (MIN: 806.44)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3 — ms, Fewer Is Better
g: 847.42 (MIN: 806.72); f: 845.31 (MIN: 803.78); e: 841.08 (MIN: 798.46); d: 847.38 (MIN: 806.33)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 10.48; f: 10.48; e: 10.47; d: 10.47; c: 30.43; b: 30.44; a: 30.41
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2023.1 — ms, Fewer Is Better
g: 759.92 (MIN: 737.63 / MAX: 771.07); f: 760.57 (MIN: 741.4 / MAX: 770.88); e: 761.16 (MIN: 741.99 / MAX: 776.56); d: 761.59 (MIN: 738.34 / MAX: 772.36); c: 393.37 (MIN: 362.57 / MAX: 433.51); b: 393.23 (MIN: 360.87 / MAX: 433.13); a: 393.60 (MIN: 363.29 / MAX: 431.61)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 107.04; f: 107.39; e: 107.27; d: 107.02; c: 282.67; b: 284.22; a: 282.55
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2023.1 — ms, Fewer Is Better
g: 74.71 (MIN: 66.29 / MAX: 79.68); f: 74.43 (MIN: 65.68 / MAX: 83.49); e: 74.50 (MIN: 66.5 / MAX: 80.32); d: 74.71 (MIN: 66.12 / MAX: 81.09); c: 42.43 (MIN: 36.31 / MAX: 62.36); b: 42.20 (MIN: 36.84 / MAX: 61.97); a: 42.44 (MIN: 36.14 / MAX: 61.98)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 107.24; f: 106.76; e: 107.24; d: 106.90; c: 284.31; b: 284.99; a: 283.97
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2023.1 — ms, Fewer Is Better
g: 74.58 (MIN: 67.63 / MAX: 78.73); f: 74.87 (MIN: 66.72 / MAX: 80.96); e: 74.54 (MIN: 65.97 / MAX: 82.9); d: 74.81 (MIN: 66.88 / MAX: 80.7); c: 42.19 (MIN: 36.21 / MAX: 65.64); b: 42.09 (MIN: 37.13 / MAX: 58.71); a: 42.24 (MIN: 36.59 / MAX: 61.56)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 793.90; f: 791.74; e: 793.75; d: 797.64; c: 2029.79; b: 2028.01; a: 2033.17
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2023.1 — ms, Fewer Is Better
g: 10.06 (MIN: 5.2 / MAX: 19.38); f: 10.09 (MIN: 5.4 / MAX: 19.17); e: 10.06 (MIN: 5.29 / MAX: 19.07); d: 10.01 (MIN: 5.7 / MAX: 19.52); c: 5.90 (MIN: 4.83 / MAX: 13.4); b: 5.91 (MIN: 4.84 / MAX: 12.9); a: 5.89 (MIN: 4.67 / MAX: 18.4)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 20.05; f: 20.01; e: 20.00; d: 20.03; c: 56.02; b: 56.06; a: 56.01
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2023.1 — ms, Fewer Is Better
g: 398.13 (MIN: 379.09 / MAX: 404.71); f: 399.24 (MIN: 387.9 / MAX: 408.93); e: 398.91 (MIN: 386.2 / MAX: 407.29); d: 398.52 (MIN: 382.1 / MAX: 404.98); c: 213.79 (MIN: 197.29 / MAX: 236.32); b: 213.62 (MIN: 197.2 / MAX: 235.23); a: 213.94 (MIN: 201.64 / MAX: 242.71)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 2557.66; f: 2539.97; e: 2562.54; d: 2564.78; c: 5840.53; b: 5836.27; a: 5882.91
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenVINO 2023.1 — ms, Fewer Is Better
g: 3.12 (MIN: 1.88 / MAX: 11.92); f: 3.14 (MIN: 1.93 / MAX: 11.65); e: 3.11 (MIN: 1.93 / MAX: 9.72); d: 3.11 (MIN: 1.94 / MAX: 11.57); c: 2.05 (MIN: 1.62 / MAX: 6.96); b: 2.05 (MIN: 1.6 / MAX: 7); a: 2.03 (MIN: 1.66 / MAX: 7.51)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 341.36; f: 343.49; e: 342.81; d: 344.67; c: 757.38; b: 750.49; a: 748.44
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenVINO 2023.1 — ms, Fewer Is Better
g: 23.42 (MIN: 20.46 / MAX: 32.43); f: 23.28 (MIN: 15.73 / MAX: 30.77); e: 23.32 (MIN: 19.49 / MAX: 30.99); d: 23.20 (MIN: 15.1 / MAX: 31.6); c: 15.83 (MIN: 12.38 / MAX: 32.97); b: 15.98 (MIN: 12.74 / MAX: 33.34); a: 16.02 (MIN: 12.5 / MAX: 33.94)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 1175.58; f: 1180.85; e: 1174.60; d: 1175.67; c: 2881.14; b: 2880.58; a: 2873.24
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2023.1 — ms, Fewer Is Better
g: 6.79 (MIN: 3.79 / MAX: 15.41); f: 6.76 (MIN: 4.04 / MAX: 15.47); e: 6.80 (MIN: 4.04 / MAX: 15.37); d: 6.79 (MIN: 3.8 / MAX: 15.48); c: 4.16 (MIN: 3.43 / MAX: 10.26); b: 4.16 (MIN: 3.42 / MAX: 11.2); a: 4.17 (MIN: 3.39 / MAX: 10.07)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 1039.37; f: 1038.47; e: 1039.82; d: 1039.61; c: 2987.33; b: 2986.46; a: 2945.26
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2023.1 — ms, Fewer Is Better
g: 15.37 (MIN: 7.99 / MAX: 23.98); f: 15.38 (MIN: 7.99 / MAX: 24); e: 15.36 (MIN: 8.02 / MAX: 23.81); d: 15.36 (MIN: 8.08 / MAX: 24.34); c: 16.02 (MIN: 14.63 / MAX: 33.79); b: 16.02 (MIN: 14.41 / MAX: 30.55); a: 16.26 (MIN: 14.71 / MAX: 28.14)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 3533.64; f: 3548.78; e: 3544.18; d: 3540.88; c: 9845.27; b: 9849.07; a: 9837.58
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenVINO 2023.1 — ms, Fewer Is Better
g: 4.52 (MIN: 2.77 / MAX: 13.57); f: 4.50 (MIN: 2.98 / MAX: 13.86); e: 4.51 (MIN: 2.96 / MAX: 16.06); d: 4.51 (MIN: 2.98 / MAX: 13.05); c: 4.86 (MIN: 4.34 / MAX: 12.27); b: 4.85 (MIN: 4.25 / MAX: 12.86); a: 4.86 (MIN: 4.23 / MAX: 12.81)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 372.26; f: 369.26; e: 373.64; d: 370.57; c: 849.30; b: 854.51; a: 842.91
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenVINO 2023.1 — ms, Fewer Is Better
g: 21.47 (MIN: 17.62 / MAX: 28.13); f: 21.65 (MIN: 19.48 / MAX: 24.27); e: 21.40 (MIN: 19.07 / MAX: 25.3); d: 21.57 (MIN: 19.5 / MAX: 24.76); c: 14.12 (MIN: 11.51 / MAX: 26.04); b: 14.03 (MIN: 11.59 / MAX: 26.04); a: 14.23 (MIN: 11.51 / MAX: 25.86)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 123.41; f: 124.30; e: 123.61; d: 124.12; c: 317.33; b: 317.28; a: 317.22
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2023.1 — ms, Fewer Is Better
g: 64.77 (MIN: 55.8 / MAX: 69.46); f: 64.31 (MIN: 50.85 / MAX: 70.77); e: 64.68 (MIN: 38.02 / MAX: 72.52); d: 64.41 (MIN: 37.44 / MAX: 73.04); c: 37.79 (MIN: 33.29 / MAX: 54.88); b: 37.79 (MIN: 32.97 / MAX: 53.7); a: 37.80 (MIN: 33.35 / MAX: 56.45)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 2006.09; f: 2004.76; e: 2007.53; d: 2013.77; c: 5802.65; b: 5780.44; a: 5776.94
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2023.1 — ms, Fewer Is Better
g: 7.96 (MIN: 4.19 / MAX: 14.2); f: 7.97 (MIN: 4.37 / MAX: 16.86); e: 7.96 (MIN: 4.19 / MAX: 16.59); d: 7.93 (MIN: 4.2 / MAX: 16.92); c: 8.24 (MIN: 7.62 / MAX: 23.32); b: 8.27 (MIN: 7.37 / MAX: 25.18); a: 8.28 (MIN: 7.44 / MAX: 23.35)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2023.1 — FPS, More Is Better
g: 1031.60; f: 1041.87; e: 1028.64; d: 1036.99; c: 2455.51; b: 2450.26; a: 2454.09
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2023.1 - ms, Fewer Is Better - g: 7.74, f: 7.67, e: 7.77, d: 7.70, c: 4.88, b: 4.89, a: 4.88

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenVINO 2023.1 - FPS, More Is Better - g: 538.01, f: 533.74, e: 530.99, d: 532.59, c: 1551.63, b: 1546.02, a: 1560.03

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenVINO 2023.1 - ms, Fewer Is Better - g: 29.72, f: 29.95, e: 30.10, d: 30.02, c: 30.89, b: 31.00, a: 30.72

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2023.1 - FPS, More Is Better - g: 32008.03, f: 31951.64, e: 32032.06, d: 32002.62, c: 86789.80, b: 87359.23, a: 86884.64

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2023.1 - ms, Fewer Is Better - g: 0.49, f: 0.49, e: 0.49, d: 0.49, c: 0.54, b: 0.54, a: 0.54

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenVINO 2023.1 - FPS, More Is Better - g: 432.20, f: 431.94, e: 432.32, d: 395.66, c: 1237.29, b: 1239.67, a: 1244.69

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenVINO 2023.1 - ms, Fewer Is Better - g: 36.98, f: 37.01, e: 36.98, d: 40.40, c: 38.75, b: 38.66, a: 38.50

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2023.1 - FPS, More Is Better - g: 45097.99, f: 44968.43, e: 44933.27, d: 44958.07, c: 123484.28, b: 120728.22, a: 120606.38

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2023.1 - ms, Fewer Is Better - g: 0.35, f: 0.35, e: 0.35, d: 0.35, c: 0.34, b: 0.34, a: 0.34

Kripke

Kripke 1.2.6 - Throughput FoM, More Is Better - g: 237175700, f: 236591000, e: 236243900, d: 240994500

Timed Linux Kernel Compilation

Build: defconfig

Timed Linux Kernel Compilation 6.1 - Seconds, Fewer Is Better - g: 55.17, f: 55.15, e: 55.09, d: 55.17, c: 27.41, b: 27.24, a: 27.35

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.7 - Frames Per Second, More Is Better - g: 4.143, f: 4.138, e: 4.114, d: 4.107, c: 5.049, b: 5.149, a: 5.203

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.7 - Frames Per Second, More Is Better - g: 67.81, f: 67.39, e: 67.72, d: 66.99, c: 90.42, b: 91.32, a: 90.81

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.7 - Frames Per Second, More Is Better - g: 160.32, f: 161.85, e: 162.61, d: 163.19, c: 163.06, b: 166.38, a: 163.46

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.7 - Frames Per Second, More Is Better - g: 161.32, f: 160.80, e: 162.05, d: 161.85, c: 161.50, b: 166.69, a: 163.01

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.7 - Frames Per Second, More Is Better - g: 11.02, f: 10.74, e: 10.98, d: 10.91, c: 12.62, b: 12.59, a: 12.48

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 1.7 - Frames Per Second, More Is Better - g: 118.48, f: 118.49, e: 119.31, d: 118.95, c: 143.55, b: 138.34, a: 141.22

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 1.7 - Frames Per Second, More Is Better - g: 528.53, f: 521.52, e: 525.17, d: 526.22, c: 431.90, b: 427.69, a: 422.99

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

SVT-AV1 1.7 - Frames Per Second, More Is Better - g: 586.75, f: 585.37, e: 597.01, d: 604.99, c: 516.91, b: 542.61, a: 510.36

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 3.6 - Seconds, Fewer Is Better - g: 72.01, f: 71.96, e: 71.44, d: 72.00, c: 26.12, b: 26.24, a: 26.20

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 3.6 - Seconds, Fewer Is Better - g: 183.29, f: 181.70, e: 182.56, d: 182.99, c: 66.72, b: 66.64, a: 66.42

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 3.6 - Seconds, Fewer Is Better - g: 90.63, f: 90.26, e: 90.31, d: 90.03, c: 33.03, b: 33.17, a: 33.22

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 3.6 - Seconds, Fewer Is Better - g: 669.09, f: 667.87, e: 670.64, d: 670.87, c: 254.72, b: 255.30, a: 254.88

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 3.6 - Seconds, Fewer Is Better - g: 224.12, f: 223.95, e: 224.10, d: 224.15, c: 80.41, b: 80.76, a: 80.54

Embree

Binary: Pathtracer - Model: Crown

Embree 4.1 - Frames Per Second, More Is Better - g: 21.58, f: 21.59, e: 21.44, d: 21.48, c: 55.40, b: 55.39, a: 54.90

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.1 - Frames Per Second, More Is Better - g: 22.77, f: 22.66, e: 22.57, d: 22.59, c: 56.81, b: 56.46, a: 56.09

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.1 - Frames Per Second, More Is Better - g: 24.82, f: 24.70, e: 24.73, d: 24.69, c: 59.79, b: 59.91, a: 60.14

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.1 - Frames Per Second, More Is Better - g: 22.19, f: 22.15, e: 22.16, d: 22.26, c: 53.69, b: 53.81, a: 53.57

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.1 - Frames Per Second, More Is Better - g: 28.48, f: 28.32, e: 28.31, d: 28.36, c: 67.50, b: 67.20, a: 67.34

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.1 - Frames Per Second, More Is Better - g: 23.88, f: 23.94, e: 23.94, d: 23.87, c: 56.93, b: 56.69, a: 56.49

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better - g: 24.96, f: 24.89, e: 24.83, d: 24.85

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.3 - Frames Per Second, More Is Better - g: 22.26, f: 22.27, e: 22.29, d: 22.35

Embree

Binary: Pathtracer - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better - g: 21.83, f: 21.77, e: 21.99, d: 21.89

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better - g: 27.91, f: 27.83, e: 27.83, d: 27.74

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.3 - Frames Per Second, More Is Better - g: 23.71, f: 23.50, e: 23.53, d: 23.35

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better - g: 22.42, f: 22.44, e: 22.34, d: 22.39

Intel Open Image Denoise

Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only

Intel Open Image Denoise 2.0 - Images / Sec, More Is Better - g: 0.72, f: 0.72, e: 0.72, d: 0.72, c: 1.83, b: 1.83, a: 1.83

Intel Open Image Denoise

Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only

Intel Open Image Denoise 2.0 - Images / Sec, More Is Better - g: 0.72, f: 0.72, e: 0.72, d: 0.72, c: 1.82, b: 1.84, a: 1.84

Intel Open Image Denoise

Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only

Intel Open Image Denoise 2.0 - Images / Sec, More Is Better - g: 0.34, f: 0.34, e: 0.34, d: 0.34, c: 0.87, b: 0.86, a: 0.86

Intel Open Image Denoise

Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only

Intel Open Image Denoise 2.1 - Images / Sec, More Is Better - g: 0.72, f: 0.72, e: 0.72, d: 0.72

Intel Open Image Denoise

Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only

Intel Open Image Denoise 2.1 - Images / Sec, More Is Better - g: 0.72, f: 0.72, e: 0.72, d: 0.72

Intel Open Image Denoise

Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only

Intel Open Image Denoise 2.1 - Images / Sec, More Is Better - g: 0.34, f: 0.34, e: 0.34, d: 0.34

OpenVKL

Benchmark: vklBenchmarkCPU Scalar

OpenVKL 2.0.0 - Items / Sec, More Is Better - g: 191, f: 191, e: 190, d: 191

OpenVKL

Benchmark: vklBenchmarkCPU ISPC

OpenVKL 2.0.0 - Items / Sec, More Is Better - g: 489, f: 488, e: 487, d: 487

OSPRay

Benchmark: particle_volume/ao/real_time

OSPRay 2.12 - Items Per Second, More Is Better - g: 5.57553, f: 5.57320, e: 5.54107, d: 5.57469, c: 15.98720, b: 15.97850, a: 15.98600

OSPRay

Benchmark: particle_volume/scivis/real_time

OSPRay 2.12 - Items Per Second, More Is Better - g: 5.56539, f: 5.55581, e: 5.56353, d: 5.57001, c: 15.97780, b: 15.98880, a: 15.95280

OSPRay

Benchmark: particle_volume/pathtracer/real_time

OSPRay 2.12 - Items Per Second, More Is Better - g: 151.68, f: 151.78, e: 151.51, d: 151.91, c: 214.14, b: 214.07, a: 215.10

OSPRay

Benchmark: gravity_spheres_volume/dim_512/ao/real_time

OSPRay 2.12 - Items Per Second, More Is Better - g: 5.62278, f: 5.61454, e: 5.62040, d: 5.60747, c: 14.13990, b: 14.17830, a: 14.23690

OSPRay

Benchmark: gravity_spheres_volume/dim_512/scivis/real_time

OSPRay 2.12 - Items Per Second, More Is Better - g: 5.47725, f: 5.45227, e: 5.46153, d: 5.45329, c: 13.83170, b: 13.76660, a: 13.87390

OSPRay

Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time

OSPRay 2.12 - Items Per Second, More Is Better - g: 6.60085, f: 6.59563, e: 6.58270, d: 6.58745, c: 16.53500, b: 16.43650, a: 16.34680

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better - g: 35236000, f: 35271000, e: 35315000, d: 35228000, c: 39453000, b: 39486000, a: 39499000

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better - g: 52854000, f: 52879000, e: 52827000, d: 52665000, c: 57519000, b: 59296000, a: 59401000

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240

easyWave r34 - Seconds, Fewer Is Better - g: 1.648, f: 1.657, e: 1.654, d: 1.657

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better - g: 68678000, f: 68861000, e: 68846000, d: 67054000, c: 76924000, b: 77019000, a: 77181000

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better - g: 104800000, f: 105740000, e: 105480000, d: 105650000, c: 118550000, b: 114010000, a: 117490000

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better - g: 138460000, f: 138580000, e: 138620000, d: 138600000, c: 153670000, b: 153690000, a: 153850000

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better - g: 190750000, f: 189880000, e: 191230000, d: 188930000, c: 194510000, b: 196590000, a: 196220000

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better - g: 277410000, f: 276390000, e: 277780000, d: 278030000, c: 306760000, b: 305110000, a: 307540000

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better - g: 357810000, f: 350450000, e: 357990000, d: 363310000, c: 366990000, b: 366930000, a: 369430000

Liquid-DSP

Threads: 1 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6 - samples/s, More Is Better - g: 12256000, f: 12681000, e: 12366000, d: 12683000, c: 14225000, b: 14021000, a: 13909000

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better - g: 682070000, f: 693340000, e: 692920000, d: 689150000, c: 674930000, b: 692760000, a: 699740000

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200

easyWave r34 - Seconds, Fewer Is Better - g: 37.95, f: 38.02, e: 38.07, d: 38.11

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better - g: 543050000, f: 545020000, e: 545140000, d: 545360000, c: 603650000, b: 602470000, a: 594230000

Liquid-DSP

Threads: 2 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6 - samples/s, More Is Better - g: 22727000, f: 25199000, e: 25207000, d: 24627000, c: 28227000, b: 27736000, a: 27901000

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better - g: 1047100000, f: 1041900000, e: 1046600000, d: 1047100000, c: 1184800000, b: 1190300000, a: 1183500000

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better - g: 1033400000, f: 1024600000, e: 1032000000, d: 1035000000, c: 1254800000, b: 1214200000, a: 1192100000

Liquid-DSP

Threads: 4 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6 - samples/s, More Is Better - g: 49556000, f: 49977000, e: 50380000, d: 50258000, c: 55165000, b: 55588000, a: 52911000

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better - g: 1056200000, f: 1057100000, e: 1057500000, d: 1059500000, c: 2206800000, b: 2212100000, a: 2207700000

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better - g: 1099300000, f: 1094600000, e: 1095400000, d: 1093300000, c: 2010300000, b: 2001900000, a: 1994400000

Liquid-DSP

Threads: 8 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6 - samples/s, More Is Better - g: 100170000, f: 99441000, e: 97005000, d: 99594000, c: 109140000, b: 108080000, a: 109870000

easyWave

Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400

easyWave r34 - Seconds, Fewer Is Better - g: 97.53, f: 97.99, e: 99.42, d: 98.98

Liquid-DSP

Threads: 96 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better - g: 1065700000, f: 1065300000, e: 1065100000, d: 1065200000, c: 2999800000, b: 2995400000, a: 3005800000

Liquid-DSP

Threads: 96 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better - g: 1118200000, f: 1120500000, e: 1117800000, d: 1120800000, c: 2564900000, b: 2571100000, a: 2559800000

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6 - samples/s, More Is Better - g: 194670000, f: 194500000, e: 196040000, d: 193850000, c: 214910000, b: 216150000, a: 216080000

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6 - samples/s, More Is Better - g: 274070000, f: 273390000, e: 273480000, d: 273760000, c: 424400000, b: 429620000, a: 425810000

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6 - samples/s, More Is Better - g: 281730000, f: 283030000, e: 281830000, d: 282920000, c: 622630000, b: 610950000, a: 622560000

Liquid-DSP

Threads: 96 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6 - samples/s, More Is Better - g: 286530000, f: 285920000, e: 285880000, d: 286250000, c: 715030000, b: 718140000, a: 711640000

Apache Hadoop

Operation: Open - Threads: 50 - Files: 100000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 546448, f: 578035, e: 552486, d: 578035, c: 401606, b: 469484, a: 460829

Apache Hadoop

Operation: Open - Threads: 100 - Files: 100000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 460829, f: 523560, e: 294985, d: 529101, c: 403226, b: 404858, a: 420168

Apache Hadoop

Operation: Open - Threads: 50 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 50 - Files: 1000000gfedcba300K600K900K1200K1500K654022122100125100427831968399510204081126126

Apache Hadoop

Operation: Create - Threads: 50 - Files: 100000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 60680, f: 58343, e: 58617, d: 58617, c: 43937, b: 41288, a: 43649

Apache Hadoop

Operation: Delete - Threads: 50 - Files: 100000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 103950, f: 96993, e: 100604, d: 101010, c: 90580, b: 73801, a: 91075

Apache Hadoop

Operation: Open - Threads: 100 - Files: 1000000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 1107420, f: 1303781, e: 1204819, d: 1248439, c: 185874, b: 173822, a: 215332

Apache Hadoop

Operation: Rename - Threads: 50 - Files: 100000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 82237, f: 81633, e: 82237, d: 82372, c: 77101, b: 73046, a: 70522

Apache Hadoop

Operation: Create - Threads: 100 - Files: 100000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 58928, f: 59382, e: 58824, d: 57971, c: 35075, b: 37425, a: 40733

Apache Hadoop

Operation: Create - Threads: 50 - Files: 1000000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 72706, f: 69920, e: 70897, d: 72134, c: 52260, b: 52119, a: 53665

Apache Hadoop

Operation: Delete - Threads: 100 - Files: 100000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 102564, f: 99404, e: 98039, d: 105708, c: 73475, b: 90827, a: 87566

Apache Hadoop

Operation: Delete - Threads: 50 - Files: 1000000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 110828, f: 111198, e: 113327, d: 111012, c: 90147, b: 97314, a: 98932

Apache Hadoop

Operation: Rename - Threads: 100 - Files: 100000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 80386, f: 79491, e: 83822, d: 82102, c: 67159, b: 69348, a: 75529

Apache Hadoop

Operation: Rename - Threads: 50 - Files: 1000000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 84810, f: 82501, e: 84041, d: 83921, c: 74638, b: 71679, a: 73239

Apache Hadoop

Operation: Create - Threads: 100 - Files: 1000000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 70922, f: 70537, e: 70057, d: 71296, c: 44001, b: 44437, a: 46145

Apache Hadoop

Operation: Delete - Threads: 100 - Files: 1000000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 113895, f: 110803, e: 113225, d: 112613, c: 97031, b: 86715, a: 90114

Apache Hadoop

Operation: Rename - Threads: 100 - Files: 1000000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 85763, f: 85815, e: 84360, d: 81208, c: 66827, b: 72129, a: 73078

Apache Hadoop

Operation: File Status - Threads: 50 - Files: 100000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 561798, f: 709220, e: 389105, d: 632911, c: 657895, b: 862069, a: 529101

Apache Hadoop

Operation: File Status - Threads: 100 - Files: 100000

Apache Hadoop 3.3.6 - Ops per sec, More Is Better - g: 487805, f: 478469, e: 613497, d: 591716, c: 729927, b: 458716, a: 515464

Apache Hadoop

Operation: File Status - Threads: 50 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 50 - Files: 1000000gfedcba500K1000K1500K2000K2500K20366601795332320924181818228425219417482173913

Apache Hadoop

Operation: File Status - Threads: 100 - Files: 1000000

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: File Status - Threads: 100 - Files: 1000000gfedcba400K800K1200K1600K2000K2049180196463723562760060118939391619701886792

Apache Cassandra

Test: Writes

Apache Cassandra 4.1.3 - Op/s, More Is Better - g: 197092, f: 196287, e: 195798, d: 197866, c: 270480, b: 256661, a: 248095


Phoronix Test Suite v10.8.5