deepsparse 7950x

AMD Ryzen 9 7950X 16-Core testing with an ASUS ROG CROSSHAIR X670E HERO (0805 BIOS) and AMD Radeon RX 6800/6800 XT / 6900 on Ubuntu 22.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2301241-NE-DEEPSPARS52
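As a minimal shell sketch (assuming the phoronix-test-suite client is already installed and that it fetches the pts/deepsparse test profile on demand), the comparison boils down to:

    # Run the same DeepSparse test selection and merge your numbers into this result file
    phoronix-test-suite benchmark 2301241-NE-DEEPSPARS52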
Result runs

  Identifier    Date Run           Test Duration
  a             January 24 2023    25 Minutes
  bb            January 24 2023    25 Minutes
  c             January 24 2023    25 Minutes
  d             January 24 2023    26 Minutes

{ "title": "deepsparse 7950x", "last_modified": "2023-01-24 15:42:02", "description": "AMD Ryzen 9 7950X 16-Core testing with a ASUS ROG CROSSHAIR X670E HERO (0805 BIOS) and AMD Radeon RX 6800\/6800 XT \/ 6900 on Ubuntu 22.10 via the Phoronix Test Suite.", "systems": { "a": { "identifier": "a", "hardware": { "Processor": "AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores \/ 32 Threads)", "Motherboard": "ASUS ROG CROSSHAIR X670E HERO (0805 BIOS)", "Chipset": "AMD Device 14d8", "Memory": "32GB", "Disk": "2048GB SOLIDIGM SSDPFKKW020X7 + 257GB Flash Drive", "Graphics": "AMD Radeon RX 6800\/6800 XT \/ 6900", "Audio": "AMD Navi 21\/23", "Monitor": "ASUS MG28U", "Network": "Intel I225-V + Intel Wi-Fi 6 AX210\/AX211\/AX411" }, "software": { "OS": "Ubuntu 22.10", "Kernel": "5.19.0-29-generic (x86_64)", "Desktop": "GNOME Shell 43.1", "Display Server": "X Server + Wayland", "Vulkan": "1.3.224", "Compiler": "GCC 12.2.0", "File-System": "ext4", "Screen Resolution": "3840x2160" }, "user": "phoronix", "timestamp": "2023-01-24 14:37:30", "client_version": "10.8.4", "data": { "cpu-scaling-governor": "amd-pstate schedutil (Boost: Enabled)", "cpu-microcode": "0xa601203", "kernel-extra-details": "Transparent Huge Pages: madvise", "python": "Python 3.10.7", "security": "itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy\/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected" } }, "bb": { "identifier": "bb", "hardware": { "Processor": "AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores \/ 32 Threads)", "Motherboard": "ASUS ROG CROSSHAIR X670E HERO (0805 BIOS)", "Chipset": "AMD Device 14d8", "Memory": "32GB", "Disk": "2048GB SOLIDIGM SSDPFKKW020X7 + 257GB Flash Drive", "Graphics": "AMD Radeon RX 6800\/6800 XT \/ 6900 (2575\/1000MHz)", "Audio": "AMD Navi 21\/23", "Monitor": "ASUS MG28U", "Network": "Intel I225-V + Intel Wi-Fi 6 AX210\/AX211\/AX411" }, "software": { "OS": "Ubuntu 22.10", "Kernel": "5.19.0-29-generic (x86_64)", "Desktop": "GNOME Shell 43.1", "Display Server": "X Server + Wayland", "OpenGL": "4.6 Mesa 22.2.1 (LLVM 15.0.2 DRM 3.47)", "Vulkan": "1.3.224", "Compiler": "GCC 12.2.0", "File-System": "ext4", "Screen Resolution": "3840x2160" }, "user": "phoronix", "timestamp": "2023-01-24 14:53:05", "client_version": "10.8.4", "data": { "cpu-scaling-governor": "amd-pstate schedutil (Boost: Enabled)", "cpu-microcode": "0xa601203", "kernel-extra-details": "Transparent Huge Pages: madvise", "python": "Python 3.10.7", "security": "itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy\/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected" } }, "c": { "identifier": "c", "hardware": { "Processor": "AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores \/ 32 Threads)", "Motherboard": "ASUS ROG CROSSHAIR X670E HERO (0805 BIOS)", "Chipset": "AMD Device 14d8", "Memory": "32GB", "Disk": "2048GB SOLIDIGM SSDPFKKW020X7 + 257GB 
Flash Drive", "Graphics": "AMD Radeon RX 6800\/6800 XT \/ 6900 (2575\/1000MHz)", "Audio": "AMD Navi 21\/23", "Monitor": "ASUS MG28U", "Network": "Intel I225-V + Intel Wi-Fi 6 AX210\/AX211\/AX411" }, "software": { "OS": "Ubuntu 22.10", "Kernel": "5.19.0-29-generic (x86_64)", "Desktop": "GNOME Shell 43.1", "Display Server": "X Server + Wayland", "OpenGL": "4.6 Mesa 22.2.1 (LLVM 15.0.2 DRM 3.47)", "Vulkan": "1.3.224", "Compiler": "GCC 12.2.0", "File-System": "ext4", "Screen Resolution": "3840x2160" }, "user": "phoronix", "timestamp": "2023-01-24 15:08:39", "client_version": "10.8.4", "data": { "cpu-scaling-governor": "amd-pstate schedutil (Boost: Enabled)", "cpu-microcode": "0xa601203", "kernel-extra-details": "Transparent Huge Pages: madvise", "python": "Python 3.10.7", "security": "itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy\/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected" } }, "d": { "identifier": "d", "hardware": { "Processor": "AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores \/ 32 Threads)", "Motherboard": "ASUS ROG CROSSHAIR X670E HERO (0805 BIOS)", "Chipset": "AMD Device 14d8", "Memory": "32GB", "Disk": "2048GB SOLIDIGM SSDPFKKW020X7 + 257GB Flash Drive", "Graphics": "AMD Radeon RX 6800\/6800 XT \/ 6900 (2575\/1000MHz)", "Audio": "AMD Navi 21\/23", "Monitor": "ASUS MG28U", "Network": "Intel I225-V + Intel Wi-Fi 6 AX210\/AX211\/AX411" }, "software": { "OS": "Ubuntu 22.10", "Kernel": "5.19.0-29-generic (x86_64)", "Desktop": "GNOME Shell 43.1", "Display Server": "X Server + Wayland", "OpenGL": "4.6 Mesa 22.2.1 (LLVM 15.0.2 DRM 3.47)", "Vulkan": "1.3.224", "Compiler": "GCC 12.2.0", "File-System": "ext4", "Screen Resolution": "3840x2160" }, "user": "phoronix", "timestamp": "2023-01-24 15:24:19", "client_version": "10.8.4", "data": { "cpu-scaling-governor": "amd-pstate schedutil (Boost: Enabled)", "cpu-microcode": "0xa601203", "kernel-extra-details": "Transparent Huge Pages: madvise", "python": "Python 3.10.7", "security": "itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy\/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected" } } }, "results": { "22074c59daa87c89fa03389084881116cf73696f": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/document_classification\/obert-base\/pytorch\/huggingface\/imdb\/base-none --scenario async", "description": "Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 12.70309999999999917008608463220298290252685546875, "test_run_times": [ 46.36999999999999744204615126363933086395263671875 ] }, "bb": { "value": 12.636900000000000687805368215776979923248291015625, "test_run_times": [ 
46.6400000000000005684341886080801486968994140625 ] }, "c": { "value": 12.59179999999999921556081972084939479827880859375, "test_run_times": [ 47.340000000000003410605131648480892181396484375 ] }, "d": { "value": 12.642099999999999226929503493010997772216796875, "test_run_times": [ 46.530000000000001136868377216160297393798828125 ] } } }, "24eb4ffdc955acf95b9ce746bf1048b938bdbbb8": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/document_classification\/obert-base\/pytorch\/huggingface\/imdb\/base-none --scenario async", "description": "Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 627.436500000000023646862246096134185791015625, "test_run_times": [ 46.36999999999999744204615126363933086395263671875 ] }, "bb": { "value": 630.515599999999949432094581425189971923828125, "test_run_times": [ 46.6400000000000005684341886080801486968994140625 ] }, "c": { "value": 632.930399999999963256414048373699188232421875, "test_run_times": [ 47.340000000000003410605131648480892181396484375 ] }, "d": { "value": 630.3581000000000358340912498533725738525390625, "test_run_times": [ 46.530000000000001136868377216160297393798828125 ] } } }, "2bd6c6ec8a454181e63b8aca2ed4ac837f9b262c": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/document_classification\/obert-base\/pytorch\/huggingface\/imdb\/base-none --scenario sync", "description": "Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 11.76050000000000039790393202565610408782958984375, "test_run_times": [ 42.88000000000000255795384873636066913604736328125 ] }, "bb": { "value": 11.7185000000000005826450433232821524143218994140625, "test_run_times": [ 42.93999999999999772626324556767940521240234375 ] }, "c": { "value": 11.6940000000000008384404281969182193279266357421875, "test_run_times": [ 42.969999999999998863131622783839702606201171875 ] }, "d": { "value": 11.71940000000000026147972675971686840057373046875, "test_run_times": [ 42.75999999999999801048033987171947956085205078125 ] } } }, "24ab211c45fa94f480a6162bb7ad74f6657c154a": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/document_classification\/obert-base\/pytorch\/huggingface\/imdb\/base-none --scenario sync", "description": "Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 85.0258000000000038198777474462985992431640625, "test_run_times": [ 42.88000000000000255795384873636066913604736328125 ] }, "bb": { "value": 85.3311000000000063892002799548208713531494140625, "test_run_times": [ 42.93999999999999772626324556767940521240234375 ] }, "c": { "value": 85.51019999999999754436430521309375762939453125, "test_run_times": [ 42.969999999999998863131622783839702606201171875 ] }, "d": { "value": 85.3242000000000047066350816749036312103271484375, "test_run_times": [ 42.75999999999999801048033987171947956085205078125 ] } } }, "a481d20f291625225f2abd9a5b5986702d2b1b55": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural 
Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/sentiment_analysis\/bert-base\/pytorch\/huggingface\/sst2\/12layer_pruned90-none --scenario async", "description": "Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 447.307900000000017826096154749393463134765625, "test_run_times": [ 39.71000000000000085265128291212022304534912109375 ] }, "bb": { "value": 445.886799999999993815436027944087982177734375, "test_run_times": [ 39.77000000000000312638803734444081783294677734375 ] }, "c": { "value": 446.6481999999999743522494100034236907958984375, "test_run_times": [ 39.77000000000000312638803734444081783294677734375 ] }, "d": { "value": 446.729199999999991632648743689060211181640625, "test_run_times": [ 39.75 ] } } }, "f2c297e59506afe0d052909fc3c4cf10d15ae9f7": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/sentiment_analysis\/bert-base\/pytorch\/huggingface\/sst2\/12layer_pruned90-none --scenario async", "description": "Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 17.871500000000001051603248924948275089263916015625, "test_run_times": [ 39.71000000000000085265128291212022304534912109375 ] }, "bb": { "value": 17.928599999999999425881469505839049816131591796875, "test_run_times": [ 39.77000000000000312638803734444081783294677734375 ] }, "c": { "value": 17.89789999999999992041921359486877918243408203125, "test_run_times": [ 39.77000000000000312638803734444081783294677734375 ] }, "d": { "value": 17.89470000000000027284841053187847137451171875, "test_run_times": [ 39.75 ] } } }, "84a8b55216d74b14f047b24a8622e9d204c4e719": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/sentiment_analysis\/bert-base\/pytorch\/huggingface\/sst2\/12layer_pruned90-none --scenario sync", "description": "Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 163.766199999999997771737980656325817108154296875, "test_run_times": [ 39.11999999999999744204615126363933086395263671875 ] }, "bb": { "value": 163.25389999999998735802364535629749298095703125, "test_run_times": [ 39.280000000000001136868377216160297393798828125 ] }, "c": { "value": 162.78190000000000736690708436071872711181640625, "test_run_times": [ 39.280000000000001136868377216160297393798828125 ] }, "d": { "value": 164.274599999999992405719240196049213409423828125, "test_run_times": [ 39.159999999999996589394868351519107818603515625 ] } } }, "459c1a56f63f6ba07a896216df53f7f1c7181ea5": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/sentiment_analysis\/bert-base\/pytorch\/huggingface\/sst2\/12layer_pruned90-none --scenario sync", "description": "Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 6.10189999999999965751840136363171041011810302734375, "test_run_times": [ 
39.11999999999999744204615126363933086395263671875 ] }, "bb": { "value": 6.12080000000000001847411112976260483264923095703125, "test_run_times": [ 39.280000000000001136868377216160297393798828125 ] }, "c": { "value": 6.1393000000000004234834705130197107791900634765625, "test_run_times": [ 39.280000000000001136868377216160297393798828125 ] }, "d": { "value": 6.0829000000000004177991286269389092922210693359375, "test_run_times": [ 39.159999999999996589394868351519107818603515625 ] } } }, "6f3602d50f552dfef4c3845639497b9378aae70f": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/question_answering\/bert-base\/pytorch\/huggingface\/squad\/12layer_pruned90-none --scenario async", "description": "Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 101.2583000000000055251803132705390453338623046875, "test_run_times": [ 39.60000000000000142108547152020037174224853515625 ] }, "bb": { "value": 101.5712999999999937017491902224719524383544921875, "test_run_times": [ 39.969999999999998863131622783839702606201171875 ] }, "c": { "value": 100.752600000000001045918907038867473602294921875, "test_run_times": [ 39.99000000000000198951966012828052043914794921875 ] }, "d": { "value": 100.92910000000000536601874046027660369873046875, "test_run_times": [ 40.27000000000000312638803734444081783294677734375 ] } } }, "60e7cad75c86ea0bbb070c637bee280a4f5805e1": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/question_answering\/bert-base\/pytorch\/huggingface\/squad\/12layer_pruned90-none --scenario async", "description": "Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 78.9797999999999973397279973141849040985107421875, "test_run_times": [ 39.60000000000000142108547152020037174224853515625 ] }, "bb": { "value": 78.7360000000000042064129956997931003570556640625, "test_run_times": [ 39.969999999999998863131622783839702606201171875 ] }, "c": { "value": 79.376000000000004774847184307873249053955078125, "test_run_times": [ 39.99000000000000198951966012828052043914794921875 ] }, "d": { "value": 79.237200000000001409716787748038768768310546875, "test_run_times": [ 40.27000000000000312638803734444081783294677734375 ] } } }, "e035e5e6db0bd8c57f24d2e7d85ffc39aebf8f0e": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/question_answering\/bert-base\/pytorch\/huggingface\/squad\/12layer_pruned90-none --scenario sync", "description": "Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 64.8383000000000038198777474462985992431640625, "test_run_times": [ 38.61999999999999744204615126363933086395263671875 ] }, "bb": { "value": 64.1577999999999946112438919954001903533935546875, "test_run_times": [ 38.6099999999999994315658113919198513031005859375 ] }, "c": { "value": 64.3932999999999964302332955412566661834716796875, "test_run_times": [ 38.68999999999999772626324556767940521240234375 ] }, "d": { "value": 
64.44360000000000354702933691442012786865234375, "test_run_times": [ 38.4500000000000028421709430404007434844970703125 ] } } }, "c835096b1ac0bc337f03e86a081be32ebbff59dd": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/question_answering\/bert-base\/pytorch\/huggingface\/squad\/12layer_pruned90-none --scenario sync", "description": "Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 15.417699999999999960209606797434389591217041015625, "test_run_times": [ 38.61999999999999744204615126363933086395263671875 ] }, "bb": { "value": 15.5802999999999993718802215880714356899261474609375, "test_run_times": [ 38.6099999999999994315658113919198513031005859375 ] }, "c": { "value": 15.5236999999999998323119143606163561344146728515625, "test_run_times": [ 38.68999999999999772626324556767940521240234375 ] }, "d": { "value": 15.51200000000000045474735088646411895751953125, "test_run_times": [ 38.4500000000000028421709430404007434844970703125 ] } } }, "ab1cb1361405bfaab23a469f722a2922b7e9c359": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:cv\/detection\/yolov5-s\/pytorch\/ultralytics\/coco\/base-none --scenario async", "description": "Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 116.65510000000000445652403868734836578369140625, "test_run_times": [ 36.3299999999999982946974341757595539093017578125 ] }, "bb": { "value": 117.610600000000005138645065017044544219970703125, "test_run_times": [ 36.32000000000000028421709430404007434844970703125 ] }, "c": { "value": 115.9903000000000048430592869408428668975830078125, "test_run_times": [ 36.21000000000000085265128291212022304534912109375 ] }, "d": { "value": 116.061700000000001864464138634502887725830078125, "test_run_times": [ 36.2000000000000028421709430404007434844970703125 ] } } }, "3b181bc5a971eeb0f13242940580cbd3a1a7ab87": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:cv\/detection\/yolov5-s\/pytorch\/ultralytics\/coco\/base-none --scenario async", "description": "Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 68.510300000000000864019966684281826019287109375, "test_run_times": [ 36.3299999999999982946974341757595539093017578125 ] }, "bb": { "value": 67.950999999999993406163412146270275115966796875, "test_run_times": [ 36.32000000000000028421709430404007434844970703125 ] }, "c": { "value": 68.9308999999999940655470709316432476043701171875, "test_run_times": [ 36.21000000000000085265128291212022304534912109375 ] }, "d": { "value": 68.8770999999999986584953148849308490753173828125, "test_run_times": [ 36.2000000000000028421709430404007434844970703125 ] } } }, "127d78802d36781936b8cb86102e79f27096841e": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:cv\/detection\/yolov5-s\/pytorch\/ultralytics\/coco\/base-none --scenario sync", "description": "Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream", "scale": "items\/sec", "proportion": 
"HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 88.72190000000000509317032992839813232421875, "test_run_times": [ 35.969999999999998863131622783839702606201171875 ] }, "bb": { "value": 88.7951000000000050249582272954285144805908203125, "test_run_times": [ 36 ] }, "c": { "value": 89.0528000000000048430592869408428668975830078125, "test_run_times": [ 36.030000000000001136868377216160297393798828125 ] }, "d": { "value": 88.947599999999994224708643741905689239501953125, "test_run_times": [ 36.02000000000000312638803734444081783294677734375 ] } } }, "6ccd28ab1038495bcdb4ce50f0b39795445f75d5": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:cv\/detection\/yolov5-s\/pytorch\/ultralytics\/coco\/base-none --scenario sync", "description": "Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 11.2522999999999999687361196265555918216705322265625, "test_run_times": [ 35.969999999999998863131622783839702606201171875 ] }, "bb": { "value": 11.2463999999999995083044268540106713771820068359375, "test_run_times": [ 36 ] }, "c": { "value": 11.21039999999999992041921359486877918243408203125, "test_run_times": [ 36.030000000000001136868377216160297393798828125 ] }, "d": { "value": 11.2263000000000001676880856393836438655853271484375, "test_run_times": [ 36.02000000000000312638803734444081783294677734375 ] } } }, "abd89e758de17bb20e30ca2c54fd21ae06a00f0c": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:cv\/classification\/resnet_v1-50\/pytorch\/sparseml\/imagenet\/base-none --scenario async", "description": "Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 298.2712999999999965439201332628726959228515625, "test_run_times": [ 35.99000000000000198951966012828052043914794921875 ] }, "bb": { "value": 298.335100000000011277734301984310150146484375, "test_run_times": [ 35.9200000000000017053025658242404460906982421875 ] }, "c": { "value": 298.617700000000013460521586239337921142578125, "test_run_times": [ 35.840000000000003410605131648480892181396484375 ] }, "d": { "value": 297.798000000000001818989403545856475830078125, "test_run_times": [ 35.81000000000000227373675443232059478759765625 ] } } }, "1aa8e961336fa724343a5a75a7f883a445a50d27": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:cv\/classification\/resnet_v1-50\/pytorch\/sparseml\/imagenet\/base-none --scenario async", "description": "Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 26.810199999999998254907040973193943500518798828125, "test_run_times": [ 35.99000000000000198951966012828052043914794921875 ] }, "bb": { "value": 26.80369999999999919282345217652618885040283203125, "test_run_times": [ 35.9200000000000017053025658242404460906982421875 ] }, "c": { "value": 26.7760999999999995679900166578590869903564453125, "test_run_times": [ 35.840000000000003410605131648480892181396484375 ] }, "d": { "value": 26.8519000000000005456968210637569427490234375, "test_run_times": [ 35.81000000000000227373675443232059478759765625 ] } } }, 
"1e8a8da4e2d17103278ae3637f2d93384ff9d885": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:cv\/classification\/resnet_v1-50\/pytorch\/sparseml\/imagenet\/base-none --scenario sync", "description": "Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 204.3663999999999987267074175179004669189453125, "test_run_times": [ 35.67999999999999971578290569595992565155029296875 ] }, "bb": { "value": 204.03690000000000281943357549607753753662109375, "test_run_times": [ 36.1700000000000017053025658242404460906982421875 ] }, "c": { "value": 204.66329999999999245119397528469562530517578125, "test_run_times": [ 35.72999999999999687361196265555918216705322265625 ] }, "d": { "value": 204.43999999999999772626324556767940521240234375, "test_run_times": [ 35.75 ] } } }, "34020adc87228d6b4bcd43a93bddfd95cb54ce67": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:cv\/classification\/resnet_v1-50\/pytorch\/sparseml\/imagenet\/base-none --scenario sync", "description": "Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 4.8879999999999999005240169935859739780426025390625, "test_run_times": [ 35.67999999999999971578290569595992565155029296875 ] }, "bb": { "value": 4.89599999999999990762944435118697583675384521484375, "test_run_times": [ 36.1700000000000017053025658242404460906982421875 ] }, "c": { "value": 4.88119999999999976125764078460633754730224609375, "test_run_times": [ 35.72999999999999687361196265555918216705322265625 ] }, "d": { "value": 4.8864999999999998436805981327779591083526611328125, "test_run_times": [ 35.75 ] } } }, "41bb298a249f0a714fce2931db5a0b06fea8033b": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/text_classification\/distilbert-none\/pytorch\/huggingface\/mnli\/base-none --scenario async", "description": "Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 184.899400000000014188117347657680511474609375, "test_run_times": [ 37.77000000000000312638803734444081783294677734375 ] }, "bb": { "value": 184.41509999999999536157702095806598663330078125, "test_run_times": [ 37.6400000000000005684341886080801486968994140625 ] }, "c": { "value": 182.955900000000013960743672214448451995849609375, "test_run_times": [ 37.72999999999999687361196265555918216705322265625 ] }, "d": { "value": 182.03170000000000072759576141834259033203125, "test_run_times": [ 37.63000000000000255795384873636066913604736328125 ] } } }, "f83de7a02c0da38bd1b5466e7db79bb5b8c8f0f5": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/text_classification\/distilbert-none\/pytorch\/huggingface\/mnli\/base-none --scenario async", "description": "Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 43.2539000000000015688783605583012104034423828125, "test_run_times": [ 
37.77000000000000312638803734444081783294677734375 ] }, "bb": { "value": 43.3682000000000016370904631912708282470703125, "test_run_times": [ 37.6400000000000005684341886080801486968994140625 ] }, "c": { "value": 43.71419999999999816964191268198192119598388671875, "test_run_times": [ 37.72999999999999687361196265555918216705322265625 ] }, "d": { "value": 43.93639999999999901092451182194054126739501953125, "test_run_times": [ 37.63000000000000255795384873636066913604736328125 ] } } }, "0e31409aa4bf4376544d970fabdee6c4b44048f4": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/text_classification\/distilbert-none\/pytorch\/huggingface\/mnli\/base-none --scenario sync", "description": "Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 116.736099999999993315213941968977451324462890625, "test_run_times": [ 37.1099999999999994315658113919198513031005859375 ] }, "bb": { "value": 117.53690000000000281943357549607753753662109375, "test_run_times": [ 37.25999999999999801048033987171947956085205078125 ] }, "c": { "value": 116.6075000000000017053025658242404460906982421875, "test_run_times": [ 37.07000000000000028421709430404007434844970703125 ] }, "d": { "value": 117.5340000000000060254023992456495761871337890625, "test_run_times": [ 60.469999999999998863131622783839702606201171875 ] } } }, "3295a6deb073ddc50f35da090ed90177c6249455": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/text_classification\/distilbert-none\/pytorch\/huggingface\/mnli\/base-none --scenario sync", "description": "Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 8.5625, "test_run_times": [ 37.1099999999999994315658113919198513031005859375 ] }, "bb": { "value": 8.50430000000000063664629124104976654052734375, "test_run_times": [ 37.25999999999999801048033987171947956085205078125 ] }, "c": { "value": 8.571899999999999408828443847596645355224609375, "test_run_times": [ 37.07000000000000028421709430404007434844970703125 ] }, "d": { "value": 8.504400000000000403588273911736905574798583984375, "test_run_times": [ 60.469999999999998863131622783839702606201171875 ] } } }, "287c6abb52fda1ae58924d684c1cf11459611563": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:cv\/segmentation\/yolact-darknet53\/pytorch\/dbolya\/coco\/pruned90-none --scenario async", "description": "Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 39.7599000000000017962520360015332698822021484375, "test_run_times": [ 48.57000000000000028421709430404007434844970703125 ] }, "bb": { "value": 39.6409000000000020236257114447653293609619140625, "test_run_times": [ 47.93999999999999772626324556767940521240234375 ] }, "c": { "value": 39.8652000000000015234036254696547985076904296875, "test_run_times": [ 48.1099999999999994315658113919198513031005859375 ] }, "d": { "value": 39.85289999999999821511664777062833309173583984375, "test_run_times": [ 46.5799999999999982946974341757595539093017578125 ] } } }, 
"9921e53989c7ce226a5596fc6d0fb4eed9da3003": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:cv\/segmentation\/yolact-darknet53\/pytorch\/dbolya\/coco\/pruned90-none --scenario async", "description": "Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 201.0063000000000101863406598567962646484375, "test_run_times": [ 48.57000000000000028421709430404007434844970703125 ] }, "bb": { "value": 201.357400000000012596501619555056095123291015625, "test_run_times": [ 47.93999999999999772626324556767940521240234375 ] }, "c": { "value": 200.482799999999997453414835035800933837890625, "test_run_times": [ 48.1099999999999994315658113919198513031005859375 ] }, "d": { "value": 200.526600000000001955413608811795711517333984375, "test_run_times": [ 46.5799999999999982946974341757595539093017578125 ] } } }, "a842dc2c08ae67334a6eacb3f5fd842cbef8b738": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:cv\/segmentation\/yolact-darknet53\/pytorch\/dbolya\/coco\/pruned90-none --scenario sync", "description": "Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 33.9089000000000027057467377744615077972412109375, "test_run_times": [ 43.50999999999999801048033987171947956085205078125 ] }, "bb": { "value": 33.89379999999999881765688769519329071044921875, "test_run_times": [ 43.5 ] }, "c": { "value": 33.89560000000000172803993336856365203857421875, "test_run_times": [ 43.22999999999999687361196265555918216705322265625 ] }, "d": { "value": 33.87140000000000128466126625426113605499267578125, "test_run_times": [ 42.96000000000000085265128291212022304534912109375 ] } } }, "ea37e692bf6e6062d0892e695b1de5ffee65d8d6": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:cv\/segmentation\/yolact-darknet53\/pytorch\/dbolya\/coco\/pruned90-none --scenario sync", "description": "Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 29.478899999999999437250153278000652790069580078125, "test_run_times": [ 43.50999999999999801048033987171947956085205078125 ] }, "bb": { "value": 29.492200000000000414956957683898508548736572265625, "test_run_times": [ 43.5 ] }, "c": { "value": 29.490500000000000824229573481716215610504150390625, "test_run_times": [ 43.22999999999999687361196265555918216705322265625 ] }, "d": { "value": 29.511900000000000687805368215776979923248291015625, "test_run_times": [ 42.96000000000000085265128291212022304534912109375 ] } } }, "2bb20c26ae53e3a326ad1ccee592b834fd6fd8cc": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/text_classification\/bert-base\/pytorch\/huggingface\/sst2\/base-none --scenario async", "description": "Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 96.05729999999999790816218592226505279541015625, "test_run_times": [ 
46.42999999999999971578290569595992565155029296875 ] }, "bb": { "value": 95.5669999999999930651028989814221858978271484375, "test_run_times": [ 46.21000000000000085265128291212022304534912109375 ] }, "c": { "value": 96.2813000000000016598278307355940341949462890625, "test_run_times": [ 46.28999999999999914734871708787977695465087890625 ] }, "d": { "value": 95.6055000000000063664629124104976654052734375, "test_run_times": [ 51.0499999999999971578290569595992565155029296875 ] } } }, "c7ce3bbadd9d2c6469e9cae87d40ceb5c5e846b1": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/text_classification\/bert-base\/pytorch\/huggingface\/sst2\/base-none --scenario async", "description": "Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 83.2699999999999960209606797434389591217041015625, "test_run_times": [ 46.42999999999999971578290569595992565155029296875 ] }, "bb": { "value": 83.6972999999999984765963745303452014923095703125, "test_run_times": [ 46.21000000000000085265128291212022304534912109375 ] }, "c": { "value": 83.076400000000006684786058031022548675537109375, "test_run_times": [ 46.28999999999999914734871708787977695465087890625 ] }, "d": { "value": 83.6638999999999981582732289098203182220458984375, "test_run_times": [ 51.0499999999999971578290569595992565155029296875 ] } } }, "8fcc3f8acbfe72ec5de99c7cde6186277ea32b2f": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/text_classification\/bert-base\/pytorch\/huggingface\/sst2\/base-none --scenario sync", "description": "Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 61.73120000000000118234311230480670928955078125, "test_run_times": [ 42.99000000000000198951966012828052043914794921875 ] }, "bb": { "value": 61.13539999999999707824827055446803569793701171875, "test_run_times": [ 43.159999999999996589394868351519107818603515625 ] }, "c": { "value": 61.8960000000000007958078640513122081756591796875, "test_run_times": [ 43.07000000000000028421709430404007434844970703125 ] }, "d": { "value": 61.92739999999999866986399865709245204925537109375, "test_run_times": [ 42.97999999999999687361196265555918216705322265625 ] } } }, "843d1824dd19b04697694a69bcc51903129def31": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/text_classification\/bert-base\/pytorch\/huggingface\/sst2\/base-none --scenario sync", "description": "Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 16.1955999999999988858689903281629085540771484375, "test_run_times": [ 42.99000000000000198951966012828052043914794921875 ] }, "bb": { "value": 16.35340000000000060254023992456495761871337890625, "test_run_times": [ 43.159999999999996589394868351519107818603515625 ] }, "c": { "value": 16.15240000000000009094947017729282379150390625, "test_run_times": [ 43.07000000000000028421709430404007434844970703125 ] }, "d": { "value": 16.14410000000000167119651450775563716888427734375, "test_run_times": [ 
42.97999999999999687361196265555918216705322265625 ] } } }, "a7d6c1f86735d917b5d3565f99bc627cf81a0061": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/token_classification\/bert-base\/pytorch\/huggingface\/conll2003\/base-none --scenario async", "description": "Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 12.589600000000000790123522165231406688690185546875, "test_run_times": [ 46.340000000000003410605131648480892181396484375 ] }, "bb": { "value": 12.6313999999999992951416061259806156158447265625, "test_run_times": [ 47.2999999999999971578290569595992565155029296875 ] }, "c": { "value": 12.6256000000000003780087354243732988834381103515625, "test_run_times": [ 47.02000000000000312638803734444081783294677734375 ] }, "d": { "value": 12.592999999999999971578290569595992565155029296875, "test_run_times": [ 58.49000000000000198951966012828052043914794921875 ] } } }, "43f27b9e8703f8523e8dc046919f673c48dcfc6f": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/token_classification\/bert-base\/pytorch\/huggingface\/conll2003\/base-none --scenario async", "description": "Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 633.182199999999966166797094047069549560546875, "test_run_times": [ 46.340000000000003410605131648480892181396484375 ] }, "bb": { "value": 630.7994999999999663486960344016551971435546875, "test_run_times": [ 47.2999999999999971578290569595992565155029296875 ] }, "c": { "value": 631.3046000000000503860064782202243804931640625, "test_run_times": [ 47.02000000000000312638803734444081783294677734375 ] }, "d": { "value": 632.156600000000025829649530351161956787109375, "test_run_times": [ 58.49000000000000198951966012828052043914794921875 ] } } }, "8de5c818c20c8daebb2c296f64b37c6fae6fa130": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/token_classification\/bert-base\/pytorch\/huggingface\/conll2003\/base-none --scenario sync", "description": "Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream", "scale": "items\/sec", "proportion": "HIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 11.716100000000000846966941026039421558380126953125, "test_run_times": [ 42.77000000000000312638803734444081783294677734375 ] }, "bb": { "value": 11.711600000000000676436684443615376949310302734375, "test_run_times": [ 42.96000000000000085265128291212022304534912109375 ] }, "c": { "value": 11.739200000000000301270119962282478809356689453125, "test_run_times": [ 42.8299999999999982946974341757595539093017578125 ] }, "d": { "value": 11.733599999999999141664375201798975467681884765625, "test_run_times": [ 42.92999999999999971578290569595992565155029296875 ] } } }, "8392d9f869eda14ce6bc7997ca053ef7c630dfc6": { "identifier": "pts\/deepsparse-1.3.2", "title": "Neural Magic DeepSparse", "app_version": "1.3.2", "arguments": "zoo:nlp\/token_classification\/bert-base\/pytorch\/huggingface\/conll2003\/base-none --scenario sync", "description": "Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous 
Single-Stream", "scale": "ms\/batch", "proportion": "LIB", "display_format": "BAR_GRAPH", "results": { "a": { "value": 85.3482000000000056161297834478318691253662109375, "test_run_times": [ 42.77000000000000312638803734444081783294677734375 ] }, "bb": { "value": 85.3815000000000026147972675971686840057373046875, "test_run_times": [ 42.96000000000000085265128291212022304534912109375 ] }, "c": { "value": 85.180800000000004956746124662458896636962890625, "test_run_times": [ 42.8299999999999982946974341757595539093017578125 ] }, "d": { "value": 85.221800000000001773514668457210063934326171875, "test_run_times": [ 42.92999999999999971578290569595992565155029296875 ] } } } } }