9654 new

2 x AMD EPYC 9654 96-Core testing with an AMD Titanite_4G (RTI1004D BIOS) and llvmpipe on Red Hat Enterprise Linux 9.1 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2303114-NE-9654NEW5019

Test Runs

  Identifier   Date            Test Duration
  a            March 09 2023   2 Hours, 22 Minutes
  b            March 10 2023   2 Hours, 23 Minutes
  c            March 10 2023   2 Hours, 22 Minutes
  no smt a     March 10 2023   2 Hours, 24 Minutes
  no smt b     March 10 2023   2 Hours, 24 Minutes
  smt a        March 11 2023   2 Hours, 30 Minutes
  smt b        March 11 2023   2 Hours, 30 Minutes
  smt c        March 11 2023   2 Hours, 30 Minutes
  smt d        March 11 2023   2 Hours, 30 Minutes


9654 new - System Details

Runs a, b, c (single socket, SMT enabled):
  Processor: AMD EPYC 9654 96-Core @ 2.40GHz (96 Cores / 192 Threads)
  Memory: 768GB
  Graphics: ASPEED
  Screen Resolution: 1600x1200

Runs no smt a, no smt b (dual socket, SMT disabled):
  Processor: 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores)

Runs smt a, smt b, smt c, smt d (dual socket, SMT enabled):
  Processor: 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores / 384 Threads)

All dual-socket runs:
  Memory: 1520GB
  Graphics: llvmpipe
  OpenGL: 4.5 Mesa 22.1.5 (LLVM 14.0.6 256 bits)
  Screen Resolution: 1024x768

Common to all runs:
  Motherboard: AMD Titanite_4G (RTI1004D BIOS)
  Chipset: AMD Device 14a4
  Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007
  Monitor: VGA HDMI
  Network: Broadcom NetXtreme BCM5720 PCIe
  OS: Red Hat Enterprise Linux 9.1
  Kernel: 5.14.0-162.6.1.el9_1.x86_64 (x86_64)
  Desktop: GNOME Shell 40.10
  Display Server: X Server 1.20.11
  Compiler: GCC 11.3.1 20220421
  File-System: xfs

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xa101111
Python Details: Python 3.9.14
Security Details (all runs): SELinux enabled; itlb_multihit, l1tf, mds, meltdown, mmio_stale_data, retbleed, srbds, tsx_async_abort: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, IBRS_FW, STIBP: always-on (disabled on the no smt runs), RSB filling, PBRSB-eIBRS: Not affected.

Result Overview
[Chart: relative performance of runs a, b, c, no smt a/b, and smt a-d, normalized to 100% with spreads up to ~180%, across: oneDNN, OpenVINO, GROMACS, Memcached, Timed Linux Kernel Compilation, OpenVKL, Stress-NG, ClickHouse, Timed FFmpeg Compilation, Neural Magic DeepSparse, VP9 libvpx Encoding, uvg266, Kvazaar, RocksDB, and Zstd Compression.]
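The overview chart reports each run as a percentage relative to a baseline. Aggregating many normalized scores into one number, as the result page's "Show Overall Geometric Mean" option does, can be sketched as follows. This is a minimal illustration of the method, not Phoronix Test Suite code, and the run names and numbers here are made up:

```python
from math import prod

def geometric_mean(values):
    """Geometric mean: the appropriate average for ratios/normalized scores."""
    return prod(values) ** (1.0 / len(values))

def normalize(results, baseline="a"):
    """Scale one higher-is-better test so the baseline run is 1.0 (100%).

    `results` maps run identifier -> raw score for a single test.
    """
    base = results[baseline]
    return {run: score / base for run, score in results.items()}

# Hypothetical two-test example (illustrative numbers only):
test1 = {"a": 100.0, "smt a": 180.0}
test2 = {"a": 50.0, "smt a": 90.0}
n1, n2 = normalize(test1), normalize(test2)
overall = geometric_mean([n1["smt a"], n2["smt a"]])
print(f"smt a relative to a: {overall:.2f}x")  # 1.80x
```

The geometric mean is used rather than the arithmetic mean so that no single test with a large raw spread dominates the aggregate.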

[Condensed results table: every test/configuration combination for runs a-c, no smt a/b, and smt a-d, spanning Stress-NG, oneDNN, OpenVINO, Neural Magic DeepSparse, RocksDB, Memcached, ClickHouse, GROMACS, OpenVKL, Zstd, the video encoders, and the timed compilations. Selected per-test results are rendered individually below. OpenBenchmarking.org]

Stress-NG

Stress-NG 0.15.04 - Test: MMAP
Bogo Ops/s, more is better

  smt b      9017.12
  smt a      8360.76
  smt c      7633.16
  smt d      7273.19
  no smt a   4520.48
  no smt b   3591.71
  c          1668.92
  b          1664.75
  a          1663.08

1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread
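Comparing configurations from a table like the MMAP results above is simple arithmetic; a small sketch computing the best SMT-on versus SMT-off ratio on the dual-socket configurations (values taken from this result file):

```python
# Bogo Ops/s from the Stress-NG MMAP results in this file.
mmap = {
    "smt b": 9017.12, "smt a": 8360.76, "smt c": 7633.16, "smt d": 7273.19,
    "no smt a": 4520.48, "no smt b": 3591.71,
    "c": 1668.92, "b": 1664.75, "a": 1663.08,
}

# "smt ..." runs start with "smt"; "no smt ..." runs do not.
best_smt = max(v for k, v in mmap.items() if k.startswith("smt"))
best_no_smt = max(v for k, v in mmap.items() if k.startswith("no smt"))
print(f"SMT-on vs SMT-off (dual socket, best run): {best_smt / best_no_smt:.2f}x")
```

For this memory-mapping stressor, enabling SMT on the dual-socket system roughly doubles the best-case throughput.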

oneDNN

This is a benchmark of Intel oneDNN, an Intel-optimized library for deep neural networks, exercised via its built-in benchdnn harness. The result is the total "perf" time reported. oneDNN was formerly known as DNNL (Deep Neural Network Library) and, before that, MKL-DNN, prior to being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
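Because oneDNN results are times in milliseconds (lower is better), comparisons across runs are often clearer when inverted into relative slowdowns. A minimal sketch of that conversion, using two values from the Recurrent Neural Network Inference f32 results in this file:

```python
def relative_slowdown(times_ms, baseline):
    """For a lower-is-better metric, express each run as a slowdown vs baseline."""
    base = times_ms[baseline]
    return {run: t / base for run, t in times_ms.items()}

# ms values from the RNN Inference f32 results: fastest run "c" vs
# the slowest SMT-enabled dual-socket run "smt d".
times = {"c": 669.15, "smt d": 3349.36}
slow = relative_slowdown(times, baseline="c")
print(f"smt d is {slow['smt d']:.1f}x slower than c")
```

This inversion is essentially what a result viewer does when it converts lower-is-better timings into a normalized bar chart.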

oneDNN 3.0 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
ms, fewer is better

  Run        ms         Min
  c          3.83017    2.72
  no smt a   6.19257    3.65
  no smt b   7.98861    3.88
  a          8.74576    3.65
  b          9.19570    4.13
  smt d      11.75730   8.17
  smt a      15.42100   10.40
  smt c      18.65940   11.23
  smt b      19.54900   10.67

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
ms, fewer is better

  Run        ms         Min
  c          669.15     661.71
  b          671.65     664.19
  a          674.09     667.01
  no smt b   901.04     864.28
  no smt a   930.26     898.89
  smt c      3011.21    2853.26
  smt b      3034.62    2821.20
  smt a      3187.00    3034.82
  smt d      3349.36    3325.75

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
ms, fewer is better

  Run        ms         Min
  c          1.88686    1.68
  b          1.92340    1.72
  no smt b   1.93126    1.77
  a          2.01581    1.81
  no smt a   2.04003    1.78
  smt b      7.42270    6.44
  smt a      7.60566    6.54
  smt d      8.26078    7.09
  smt c      9.34981    7.73

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
ms, fewer is better

  Run        ms         Min
  b          670.20     663.54
  c          671.23     662.90
  a          671.69     664.46
  no smt b   903.36     874.93
  no smt a   913.20     884.92
  smt c      3153.22    3059.88
  smt d      3171.15    3148.29
  smt b      3249.28    3226.78
  smt a      3269.05    3243.11

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
ms, fewer is better

  Run        ms         Min
  c          667.95     660.93
  b          668.95     662.29
  a          675.92     668.99
  no smt b   939.55     904.78
  no smt a   973.32     937.30
  smt a      3133.65    3037.06
  smt d      3172.19    3155.85
  smt c      3185.64    2949.58
  smt b      3222.93    2927.41

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
ms, fewer is better

  Run        ms         Min
  a          3.11497    2.47
  b          4.27858    3.37
  c          4.39720    3.21
  no smt b   4.79302    3.40
  no smt a   4.87883    3.53
  smt b      7.30147    5.77
  smt a      11.43260   7.59
  smt c      12.01300   8.02
  smt d      12.15710   8.52

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 - Test: Context Switching
Bogo Ops/s, more is better

  no smt a   47222624.49
  no smt b   44683906.68
  a          18941003.97
  c          16862185.54
  b          16313126.86
  smt a      12895047.73
  smt d      12728401.45
  smt c      12328933.65
  smt b      12221058.34

1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: NUMA
Bogo Ops/s, more is better

  b          498.67
  a          483.90
  c          478.05
  smt a      24.82
  smt c      24.79
  smt b      24.77
  smt d      24.71
  no smt a   20.51
  no smt b   19.78

1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN


oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
ms, fewer is better

  Run        ms         Min
  b          7.28243    6.58
  a          7.35746    4.85
  c          7.38116    6.77
  no smt a   9.25292    7.79
  no smt b   9.31751    8.07
  smt a      20.26950   17.68
  smt b      20.57200   18.25
  smt d      20.97260   18.21
  smt c      21.03010   17.96

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

This is a benchmark of Intel OpenVINO, a toolkit for optimizing and deploying neural-network inference, using its built-in benchmarking support to report the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
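The latency (ms) and throughput (FPS) figures that follow are linked: with many inference requests in flight, throughput is roughly the number of in-flight requests divided by the average latency. A back-of-envelope check of that relation using the "smt c" Vehicle Detection FP16 numbers from this file (this is plain arithmetic, not an OpenVINO API call):

```python
# Vehicle Detection FP16 numbers for run "smt c" from this result file.
latency_ms = 5.99          # average latency per inference request
throughput_fps = 7993.18   # reported frames per second

# Implied number of concurrently in-flight requests, assuming
# throughput ~= in_flight / latency.
in_flight = throughput_fps * (latency_ms / 1000.0)
print(f"implied in-flight requests: {in_flight:.0f}")
```

A large implied in-flight count is expected here, since the benchmark saturates all cores with parallel inference streams.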

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU
ms, fewer is better

  Run        ms       Min / Max
  smt c      5.99     5.13 / 31.29
  smt d      6.00     5.21 / 25.51
  no smt a   6.01     5.20 / 37.80
  no smt b   6.01     5.02 / 36.88
  smt a      6.02     5.27 / 25.12
  smt b      6.03     5.17 / 38.35
  c          15.10    6.94 / 60.59
  b          15.54    8.28 / 57.72
  a          16.30    7.91 / 51.62

1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU
FPS, more is better

  smt c      7993.18
  smt d      7990.47
  no smt b   7979.82
  no smt a   7979.05
  smt a      7953.21
  smt b      7949.05
  c          3174.04
  b          3085.53
  a          2941.47

1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Stress-NG

Stress-NG 0.15.04 - Test: Pthread (Bogo Ops/s, More Is Better)
smt b: 180407.59 | c: 109609.15 | a: 109397.78 | b: 109356.78 | smt d: 91735.75 | smt c: 77834.39 | smt a: 74978.51 | no smt a: 68076.03 | no smt b: 67451.30
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Random Read (Op/s, More Is Better)
smt c: 1234570512 | smt d: 1231916197 | smt b: 1231168093 | smt a: 1225662852 | no smt b: 1213540299 | no smt a: 1209611055 | a: 468231434 | c: 468069792 | b: 466888888
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Stress-NG

Stress-NG 0.15.04 - Test: Matrix Math (Bogo Ops/s, More Is Better)
smt c: 952461.48 | smt d: 951700.17 | smt b: 946660.88 | smt a: 946032.88 | no smt a: 932248.18 | no smt b: 925984.24 | c: 382328.50 | a: 382305.69 | b: 382304.04
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: CPU Cache (Bogo Ops/s, More Is Better)
c: 97.04 | a: 77.51 | b: 67.21 | no smt b: 55.84 | smt a: 47.58 | smt b: 44.38 | smt c: 42.55 | no smt a: 40.94 | smt d: 40.56
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: CPU Stress (Bogo Ops/s, More Is Better)
smt c: 490300.48 | smt d: 489998.75 | smt b: 487987.32 | smt a: 487359.39 | no smt a: 328297.04 | no smt b: 326819.72 | c: 217304.19 | b: 217072.27 | a: 205134.87
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread
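As a quick reading of the CPU Stress numbers, the relative uplift between the configurations can be computed directly from the reported Bogo Ops/s. A minimal sketch (values transcribed from this result file; the rounding is purely illustrative):

```python
# Best reported Bogo Ops/s per configuration for Stress-NG "CPU Stress"
# (transcribed from the result file above).
smt_on = 490300.48    # "smt c"
smt_off = 328297.04   # "no smt a"
stock = 217304.19     # "c"

# Relative uplift factors
print(round(smt_on / smt_off, 2))  # SMT on vs. SMT off -> 1.49
print(round(smt_on / stock, 2))    # SMT on vs. stock run -> 2.26
```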

Stress-NG 0.15.04 - Test: Vector Math (Bogo Ops/s, More Is Better)
smt d: 1300693.93 | smt c: 1295970.28 | smt b: 1291704.57 | smt a: 1291689.33 | no smt b: 920642.34 | no smt a: 920216.27 | a: 556875.59 | c: 556833.50 | b: 556797.27
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: Mutex (Bogo Ops/s, More Is Better)
smt c: 138939958.50 | smt d: 137579874.56 | smt a: 136339636.91 | smt b: 135855921.28 | no smt b: 65500297.73 | no smt a: 63581395.67 | b: 60031543.97 | c: 59929401.40 | a: 59479783.64
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: Crypto (Bogo Ops/s, More Is Better)
smt c: 468159.21 | smt d: 468006.00 | smt b: 466609.22 | smt a: 466292.12 | no smt b: 437065.75 | no smt a: 435615.35 | b: 203147.15 | c: 203095.44 | a: 203073.15
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
b: 1.56257 (MIN 1.41) | a: 1.56461 (MIN 1.41) | c: 1.56742 (MIN 1.39) | no smt b: 2.12379 (MIN 1.9) | no smt a: 2.19645 (MIN 2) | smt a: 3.39151 (MIN 2.93) | smt b: 3.49453 (MIN 3.11) | smt d: 3.50563 (MIN 3.19) | smt c: 3.60194 (MIN 3.08)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 - Test: SENDFILE (Bogo Ops/s, More Is Better)
smt d: 4351920.42 | smt c: 4351871.12 | smt a: 4329963.26 | smt b: 4329824.38 | no smt a: 3284433.20 | no smt b: 3282827.50 | a: 1950323.96 | b: 1913590.77 | c: 1891202.37
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: Function Call (Bogo Ops/s, More Is Better)
smt d: 1425621.44 | smt c: 1422505.51 | smt a: 1414423.65 | smt b: 1413555.44 | no smt b: 829445.38 | no smt a: 829106.63 | c: 621041.93 | a: 621015.34 | b: 621003.92
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
c: 0.262920 (MIN 0.2) | b: 0.263373 (MIN 0.2) | a: 0.263997 (MIN 0.18) | no smt b: 0.293564 (MIN 0.22) | no smt a: 0.316665 (MIN 0.24) | smt a: 0.591738 (MIN 0.43) | smt c: 0.593299 (MIN 0.39) | smt b: 0.600199 (MIN 0.38) | smt d: 0.602831 (MIN 0.48)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 - Test: Atomic (Bogo Ops/s, More Is Better)
no smt a: 400.03 | no smt b: 395.95 | b: 223.29 | smt b: 186.64 | smt a: 184.34 | smt c: 184.24 | c: 183.33 | smt d: 182.80 | a: 174.72
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse inference engine using its built-in deepsparse.benchmark utility and various models from the SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
smt d: 97.89 | no smt a: 97.40 | smt b: 97.35 | no smt b: 97.34 | smt a: 97.29 | smt c: 96.91 | a: 42.96 | c: 42.92 | b: 42.83

Stress-NG

Stress-NG 0.15.04 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, More Is Better)
smt a: 2564.48 | smt b: 2520.38 | smt c: 2517.26 | smt d: 2516.09 | no smt a: 1978.80 | no smt b: 1963.75 | b: 1132.35 | c: 1125.86 | a: 1122.39
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
no smt a: 97.44 | no smt b: 97.38 | smt a: 97.02 | smt c: 97.00 | smt b: 96.91 | smt d: 96.72 | a: 42.99 | b: 42.90 | c: 42.74

oneDNN

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
b: 0.551966 (MIN 0.49) | a: 0.555277 (MIN 0.53) | c: 0.556851 (MIN 0.53) | no smt a: 0.664359 (MIN 0.56) | no smt b: 0.670574 (MIN 0.55) | smt b: 1.181350 (MIN 1.08) | smt a: 1.200490 (MIN 1.04) | smt c: 1.201460 (MIN 1.08) | smt d: 1.233400 (MIN 1.08)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 - Test: Glibc C String Functions (Bogo Ops/s, More Is Better)
smt d: 36111781.82 | smt b: 35996976.13 | smt a: 35687958.37 | smt c: 34602827.77 | no smt b: 28009942.56 | no smt a: 26755146.69 | c: 16537965.14 | a: 16257100.25 | b: 16168655.17
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
smt d: 2255.42 | smt c: 2253.14 | smt b: 2245.80 | smt a: 2230.31 | no smt a: 2204.75 | no smt b: 2203.16 | c: 1013.74 | b: 1012.35 | a: 1009.85

oneDNN

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
c: 0.285432 (MIN 0.24) | a: 0.290200 (MIN 0.25) | b: 0.291387 (MIN 0.23) | no smt a: 0.316779 (MIN 0.25) | no smt b: 0.340498 (MIN 0.28) | smt b: 0.620907 (MIN 0.41) | smt a: 0.621776 (MIN 0.45) | smt d: 0.627133 (MIN 0.4) | smt c: 0.634992 (MIN 0.55)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 - Test: Hash (Bogo Ops/s, More Is Better)
smt b: 41989522.68 | smt c: 41972765.93 | smt a: 41966139.67 | smt d: 41965286.06 | no smt b: 27422305.53 | no smt a: 27408413.10 | c: 18961773.18 | b: 18955118.10 | a: 18954936.00
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

OpenVINO

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
smt c: 210.19 | no smt b: 210.14 | smt d: 209.24 | no smt a: 208.82 | smt b: 208.19 | smt a: 208.13 | a: 95.76 | c: 95.59 | b: 95.48
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better)
smt c: 437.51 (MIN 400.05 / MAX 473.91) | smt d: 437.68 (MIN 394.89 / MAX 478.58) | no smt a: 438.29 (MIN 416.93 / MAX 496.86) | no smt b: 438.99 (MIN 427.57 / MAX 484.31) | smt a: 439.41 (MIN 410.81 / MAX 465.4) | smt b: 439.50 (MIN 424.22 / MAX 477.66) | a: 962.10 (MIN 879.24 / MAX 1018.81) | c: 962.15 (MIN 888.7 / MAX 1017.92) | b: 962.63 (MIN 893.43 / MAX 1015.71)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
smt c: 109.36 | smt d: 109.23 | no smt a: 109.08 | no smt b: 109.04 | smt a: 108.93 | smt b: 108.89 | c: 49.77 | b: 49.72 | a: 49.72
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
no smt b: 20849.75 | no smt a: 20836.10 | smt c: 20684.29 | smt d: 20679.68 | smt b: 20610.62 | smt a: 20610.23 | b: 9496.95 | a: 9494.22 | c: 9485.65
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
smt c: 227.93 (MIN 210.46 / MAX 252.96) | no smt b: 228.01 (MIN 212.99 / MAX 267.17) | smt d: 228.94 (MIN 214.92 / MAX 248.29) | no smt a: 229.46 (MIN 217.93 / MAX 265.81) | smt b: 230.06 (MIN 210.58 / MAX 251.42) | smt a: 230.21 (MIN 211.56 / MAX 253.58) | a: 499.32 (MIN 264.54 / MAX 537.26) | c: 500.30 (MIN 410.04 / MAX 531.93) | b: 500.83 (MIN 418.76 / MAX 546.62)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
c: 1.63860 (MIN 1.2) | b: 1.72975 (MIN 1.27) | a: 1.87777 (MIN 1.23) | no smt a: 2.02286 (MIN 1.56) | no smt b: 2.02833 (MIN 1.43) | smt d: 2.97427 (MIN 2.3) | smt c: 3.36535 (MIN 2.7) | smt a: 3.38833 (MIN 2.76) | smt b: 3.59643 (MIN 2.7)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
b: 903.27 (MIN 894.76) | c: 906.14 (MIN 898.41) | a: 907.49 (MIN 897.95) | no smt b: 1124.75 (MIN 1091.36) | no smt a: 1148.26 (MIN 1108.42) | smt c: 1888.21 (MIN 1860.29) | smt b: 1890.28 (MIN 1864.29) | smt a: 1912.16 (MIN 1891.03) | smt d: 1957.77 (MIN 1929.47)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
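Since this harness reports time in ms (lower is better), configurations are easiest to compare as slowdown factors relative to the fastest run. A minimal sketch using the RNN Training f32 values transcribed from this result file:

```python
# oneDNN RNN Training (f32) times in ms -- lower is better
# (values transcribed from the result file above).
best = 903.27      # "b", fastest run
no_smt = 1148.26   # "no smt a"
smt = 1957.77      # "smt d", slowest run

# Express each as a slowdown factor relative to the fastest run.
print(round(no_smt / best, 2))  # -> 1.27
print(round(smt / best, 2))     # -> 2.17
```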

OpenVINO

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better)
smt c: 10603.84 | smt d: 10587.30 | no smt a: 10556.93 | no smt b: 10549.24 | smt b: 10549.15 | smt a: 10544.03 | b: 4915.42 | a: 4914.17 | c: 4910.13
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better)
smt c: 4.52 (MIN 4.12 / MAX 33.17) | smt d: 4.52 (MIN 4.03 / MAX 45.52) | no smt a: 4.54 (MIN 4.12 / MAX 27.91) | no smt b: 4.54 (MIN 4.09 / MAX 55.78) | smt a: 4.54 (MIN 4.07 / MAX 35.16) | smt b: 4.54 (MIN 4.11 / MAX 42.14) | b: 9.75 (MIN 5.25 / MAX 35.53) | a: 9.76 (MIN 5.03 / MAX 28.1) | c: 9.76 (MIN 4.98 / MAX 36.12)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
no smt a: 941.24 | no smt b: 938.91 | smt c: 937.99 | smt d: 931.82 | smt a: 929.02 | smt b: 927.68 | a: 440.85 | b: 439.04 | c: 438.44

oneDNN

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
a: 911.25 (MIN 903.01) | b: 912.11 (MIN 902.43) | c: 915.08 (MIN 906.26) | no smt b: 1119.17 (MIN 1086.08) | no smt a: 1151.25 (MIN 1070.35) | smt a: 1830.41 (MIN 1807.97) | smt d: 1883.76 (MIN 1850.75) | smt c: 1926.15 (MIN 1901.09) | smt b: 1937.65 (MIN 1908.65)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
a: 909.69 (MIN 901.1) | c: 911.76 (MIN 901.88) | b: 917.08 (MIN 909.14) | no smt b: 1139.63 (MIN 1100.83) | no smt a: 1147.34 (MIN 1109.85) | smt a: 1888.10 (MIN 1865.76) | smt b: 1894.07 (MIN 1870.66) | smt d: 1912.74 (MIN 1890.93) | smt c: 1925.68 (MIN 1901.2)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
smt d: 316.03 | smt c: 315.33 | smt a: 314.62 | smt b: 314.25 | no smt a: 308.43 | no smt b: 308.30 | b: 149.82 | a: 149.74 | c: 149.69

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
smt a: 1306.67 | smt b: 1306.58 | smt c: 1305.71 | smt d: 1301.18 | no smt b: 1292.80 | no smt a: 1291.97 | c: 625.55 | b: 619.94 | a: 619.75

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
smt c: 662.26 | smt d: 660.21 | smt b: 658.34 | smt a: 658.21 | no smt a: 649.30 | no smt b: 647.76 | c: 316.63 | b: 316.29 | a: 314.84

OpenVINO

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better)
smt c: 57.71 | smt d: 57.62 | smt b: 57.31 | smt a: 57.29 | no smt b: 57.19 | no smt a: 57.18 | b: 28.23 | a: 27.97 | c: 27.89
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
smt c: 3.93 (MIN 3.61 / MAX 42.82) | smt d: 3.93 (MIN 3.61 / MAX 23.62) | no smt a: 3.95 (MIN 3.61 / MAX 38) | no smt b: 3.95 (MIN 3.68 / MAX 42.53) | smt a: 3.95 (MIN 3.61 / MAX 34.38) | smt b: 3.96 (MIN 3.66 / MAX 32.83) | a: 8.11 (MIN 5.39 / MAX 69.87) | b: 8.13 (MIN 5.35 / MAX 55.84) | c: 8.13 (MIN 3.83 / MAX 59.87)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
smt d: 12172.92 | smt c: 12168.05 | no smt b: 12128.38 | no smt a: 12119.66 | smt a: 12117.16 | smt b: 12109.64 | a: 5908.89 | c: 5898.79 | b: 5894.27
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better)
smt c: 826.67 (MIN 724.1 / MAX 1006.75) | smt d: 828.01 (MIN 716.19 / MAX 1018.34) | smt b: 832.70 (MIN 723.65 / MAX 1031.69) | smt a: 833.15 (MIN 725.84 / MAX 1017.38) | no smt a: 833.99 (MIN 723.91 / MAX 1011.94) | no smt b: 834.17 (MIN 732.01 / MAX 1006.46) | b: 1685.26 (MIN 891.16 / MAX 1979.37) | a: 1701.70 (MIN 1395.71 / MAX 2063.97) | c: 1704.02 (MIN 828.99 / MAX 1969.02)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
smt b: 822.16 | smt c: 818.60 | smt a: 817.75 | no smt b: 817.63 | smt d: 816.76 | no smt a: 813.98 | a: 401.78 | b: 401.21 | c: 400.04

Stress-NG

Stress-NG 0.15.04 - Test: Malloc (Bogo Ops/s, More Is Better)
smt d: 640853365.54 | smt c: 639757070.52 | smt b: 634995429.66 | smt a: 634718750.41 | no smt a: 456657338.84 | no smt b: 456651508.04 | b: 314418461.02 | c: 313768771.26 | a: 312709034.05
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

OpenVINO

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
smt c: 58.05 | smt d: 58.00 | smt a: 57.72 | smt b: 57.67 | no smt a: 57.58 | no smt b: 57.49 | a: 28.47 | b: 28.42 | c: 28.33
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better)
smt c: 821.93 (MIN 725.25 / MAX 1010.56) | smt d: 822.50 (MIN 717.65 / MAX 997.43) | smt b: 827.03 (MIN 723.77 / MAX 1003.25) | smt a: 827.07 (MIN 724.84 / MAX 1037.91) | no smt a: 828.67 (MIN 730.29 / MAX 1036.1) | no smt b: 830.04 (MIN 722.6 / MAX 1015.67) | a: 1671.46 (MIN 924.15 / MAX 1977.46) | b: 1675.96 (MIN 1231.52 / MAX 1967.75) | c: 1679.03 (MIN 865.58 / MAX 1995.12)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
smt d: 149647.68 | smt a: 148316.88 | smt c: 148047.13 | smt b: 147582.87 | no smt a: 127833.52 | no smt b: 127770.39 | a: 74495.17 | c: 74486.08 | b: 74353.41
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

RocksDB

RocksDB 7.9.2 - Test: Read While Writing (Op/s, More Is Better)
smt a: 15317250 | smt c: 13948914 | smt b: 13830689 | smt d: 13098097 | a: 9296185 | b: 8620352 | c: 8316379 | no smt b: 7913568 | no smt a: 7643831
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti
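The spread in the Read While Writing results is easiest to grasp as simple throughput ratios. A quick sketch (values transcribed from this result file; rounding is illustrative):

```python
# RocksDB "Read While Writing" ops/sec (transcribed from the result file above).
best_smt = 15317250   # "smt a", fastest run
stock_a = 9296185     # "a"
no_smt = 7643831      # "no smt a", slowest run

print(round(best_smt / stock_a, 2))  # -> 1.65
print(round(best_smt / no_smt, 2))   # -> 2.0
```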

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
smt c: 3132.38 | smt d: 3116.09 | smt a: 3114.68 | smt b: 3114.43 | no smt a: 3107.30 | no smt b: 3086.36 | b: 1573.75 | c: 1571.83 | a: 1566.56

oneDNN

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
no smt b: 0.291229 (MIN 0.27) | no smt a: 0.291687 (MIN 0.27) | a: 0.415446 (MIN 0.4) | c: 0.417188 (MIN 0.4) | smt c: 0.447820 (MIN 0.37) | smt b: 0.450261 (MIN 0.37) | smt a: 0.451472 (MIN 0.4) | smt d: 0.457554 (MIN 0.34) | b: 0.578466 (MIN 0.4)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better)
smt c: 1058.57 | smt d: 1057.74 | smt b: 1050.37 | smt a: 1049.16 | no smt b: 1048.74 | no smt a: 1045.81 | b: 541.03 | a: 540.41 | c: 535.78
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better)
smt c: 45.31 (MIN 39.35 / MAX 73.19) | smt d: 45.34 (MIN 39.4 / MAX 73.83) | smt b: 45.66 (MIN 38.82 / MAX 75.65) | smt a: 45.71 (MIN 39.78 / MAX 73.33) | no smt b: 45.73 (MIN 39.68 / MAX 86.83) | no smt a: 45.86 (MIN 39.29 / MAX 91.13) | b: 88.61 (MIN 47.57 / MAX 124.67) | a: 88.71 (MIN 44.2 / MAX 132.86) | c: 89.46 (MIN 42.16 / MAX 123.4)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
no smt a: 0.342836 (MIN 0.28) | no smt b: 0.347231 (MIN 0.3) | c: 0.353175 (MIN 0.31) | b: 0.356875 (MIN 0.31) | a: 0.357017 (MIN 0.31) | smt d: 0.672115 (MIN 0.49) | smt a: 0.674455 (MIN 0.49) | smt b: 0.675664 (MIN 0.55) | smt c: 0.676124 (MIN 0.49)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
b: 0.513068 (MIN 0.47) | a: 0.534450 (MIN 0.49) | c: 0.546991 (MIN 0.5) | no smt a: 0.556069 (MIN 0.46) | no smt b: 0.629026 (MIN 0.51) | smt b: 0.971704 (MIN 0.82) | smt d: 0.975091 (MIN 0.91) | smt a: 0.975357 (MIN 0.92) | smt c: 0.995710 (MIN 0.87)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
no smt a: 0.160361 (MIN 0.15) | no smt b: 0.164845 (MIN 0.15) | smt d: 0.247825 (MIN 0.23) | smt c: 0.275917 (MIN 0.23) | smt a: 0.279638 (MIN 0.23) | smt b: 0.291087 (MIN 0.23) | a: 0.304972 (MIN 0.28) | c: 0.305599 (MIN 0.28) | b: 0.305731 (MIN 0.28)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 - Test: MEMFD (Bogo Ops/s, More Is Better)
smt b: 564.05 | a: 518.46 | c: 507.97 | b: 507.74 | smt a: 464.70 | smt c: 413.34 | smt d: 394.49 | no smt b: 308.67 | no smt a: 303.56
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: Memory Copying (Bogo Ops/s, More Is Better)
b: 20340.40 | c: 20297.91 | a: 20106.51 | smt a: 15914.10 | smt b: 15430.18 | smt c: 15077.69 | smt d: 13370.33 | no smt a: 11342.69 | no smt b: 10949.90
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: Forking (Bogo Ops/s, More Is Better)
c: 64299.62 | b: 58664.97 | a: 58156.36 | no smt b: 45685.28 | no smt a: 43094.96 | smt a: 36266.02 | smt b: 36020.16 | smt c: 34917.39 | smt d: 34685.85
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: Futex (Bogo Ops/s, More Is Better)
no smt b: 3802292.60 | no smt a: 3746361.52 | b: 2805836.52 | a: 2794694.37 | c: 2794473.75 | smt b: 2396325.93 | smt a: 2333781.39 | smt d: 2077119.76 | smt c: 2067499.58
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

GROMACS

GROMACS (GROningen MAchine for Chemical Simulations) is a molecular dynamics package, tested here with the water_GMX50 data set. This test profile allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
no smt a: 19.18 | smt d: 19.08 | smt c: 19.05 | smt b: 19.02 | no smt b: 18.84 | smt a: 18.82 | b: 10.61 | c: 10.59 | a: 10.57
1. (CXX) g++ options: -O3
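The gap between the fastest and slowest GROMACS runs can be expressed as a simple ns/day ratio. A minimal sketch (values transcribed from this result file; rounding is illustrative):

```python
# GROMACS water_GMX50_bare throughput in ns/day
# (transcribed from the result file above).
fastest = 19.18   # "no smt a"
slowest = 10.57   # "a"

print(round(fastest / slowest, 2))  # -> 1.81
```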

RocksDB

RocksDB 7.9.2 - Test: Read Random Write Random (Op/s, More Is Better)
a: 2926458 | c: 2910023 | b: 2891962 | no smt a: 2079804 | no smt b: 2063831 | smt a: 1787682 | smt b: 1761416 | smt d: 1752904 | smt c: 1613913
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

oneDNN

oneDNN 3.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
no smt b: 0.651301 (MIN 0.62) | no smt a: 0.664295 (MIN 0.63) | smt c: 0.972863 (MIN 0.92) | smt b: 0.973555 (MIN 0.92) | smt d: 0.973984 (MIN 0.92) | smt a: 0.978590 (MIN 0.93) | b: 1.166960 (MIN 1.07) | c: 1.169190 (MIN 1.07) | a: 1.170040 (MIN 1.07)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
no smt b: 0.400325 (MIN 0.38) | no smt a: 0.403590 (MIN 0.38) | smt a: 0.538539 (MIN 0.48) | smt c: 0.545118 (MIN 0.45) | smt b: 0.551783 (MIN 0.49) | smt d: 0.630630 (MIN 0.48) | b: 0.689064 (MIN 0.64) | c: 0.692667 (MIN 0.65) | a: 0.693367 (MIN 0.64)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
no smt a: 9784.09 | smt c: 9773.26 | smt d: 9767.99 | no smt b: 9758.74 | smt a: 9746.08 | smt b: 9680.44 | a: 5750.74 | c: 5724.81 | b: 5720.90
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better)
no smt a: 4.90 (MIN 4.46 / MAX 56.52) | smt c: 4.90 (MIN 4.5 / MAX 29.95) | no smt b: 4.91 (MIN 4.49 / MAX 34.72) | smt d: 4.91 (MIN 4.51 / MAX 27.27) | smt a: 4.92 (MIN 4.52 / MAX 34.8) | smt b: 4.95 (MIN 4.54 / MAX 24.23) | a: 8.34 (MIN 6.61 / MAX 55.54) | c: 8.37 (MIN 6.83 / MAX 50.24) | b: 8.38 (MIN 6.4 / MAX 32.33)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.18 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
b: 4162812.47 | c: 4155273.46 | a: 4143044.21 | smt d: 2527084.87 | smt a: 2520253.00 | smt c: 2517750.47 | smt b: 2507331.29 | no smt b: 2460416.33 | no smt a: 2444886.82
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Stress-NG

Stress-NG 0.15.04 - Test: System V Message Passing (Bogo Ops/s, More Is Better)
smt d: 12418451.71 | c: 10475486.39 | a: 10473084.09 | b: 10471889.97 | smt a: 10103952.13 | smt c: 8609357.68 | smt b: 8586858.51 | no smt a: 7402514.74 | no smt b: 7372780.72
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
no smt a: 1.65041 (MIN 1.41) | no smt b: 1.70057 (MIN 1.5) | c: 1.82264 (MIN 1.71) | a: 1.83147 (MIN 1.73) | b: 1.84172 (MIN 1.76) | smt d: 2.67303 (MIN 2.09) | smt c: 2.70235 (MIN 2.27) | smt a: 2.70807 (MIN 2.09) | smt b: 2.76140 (MIN 2.46)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in C and optimized in assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, More Is Better)
smt b: 66.22 | smt d: 66.08 | smt c: 65.49 | smt a: 64.63 | no smt b: 47.43 | no smt a: 47.13 | b: 40.86 | c: 40.76 | a: 40.63
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
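The set-to-get ratios in the results below describe the mix of write and read operations memtier_benchmark issues against the server. As a minimal sketch of what a 1:10 ratio means, the hypothetical workload driver below interleaves one set per ten gets, with a plain dict standing in for the memcached server (this is an illustration, not the actual memtier_benchmark code):

```python
import random

def run_workload(total_ops, set_to_get_ratio=(1, 10), seed=0):
    """Simulate a memtier_benchmark-style key/value workload.

    For every `sets` set operations, `gets` get operations are issued;
    a dict stands in for the memcached server in this sketch.
    """
    sets, gets = set_to_get_ratio
    rng = random.Random(seed)
    store = {}  # stand-in for the memcached server
    counts = {"set": 0, "get": 0}
    for i in range(total_ops):
        # Interleave ops so the long-run mix matches sets:gets.
        if i % (sets + gets) < sets:
            store[f"key{rng.randrange(1000)}"] = b"x" * 32
            counts["set"] += 1
        else:
            store.get(f"key{rng.randrange(1000)}")
            counts["get"] += 1
    return counts

counts = run_workload(110_000, (1, 10))
```

A more read-heavy ratio such as 1:100 shifts the mix further toward gets, which is why the 1:5, 1:10, and 1:100 graphs stress the server differently.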

Memcached 1.6.18 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
a: 6861088.89 | c: 6839975.87 | b: 6792746.34 | smt d: 4569476.86 | smt c: 4549313.42 | smt a: 4530948.62 | smt b: 4392934.75 | no smt a: 4244908.83 | no smt b: 4220522.49
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

oneDNN

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
no smt a: 0.254845 (MIN: 0.18) | no smt b: 0.255796 (MIN: 0.18) | a: 0.311405 (MIN: 0.3) | c: 0.312289 (MIN: 0.28) | b: 0.312721 (MIN: 0.28) | smt d: 0.389223 (MIN: 0.29) | smt c: 0.394377 (MIN: 0.29) | smt a: 0.407430 (MIN: 0.27) | smt b: 0.408074 (MIN: 0.27)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better)
smt b: 65.69 | smt c: 65.59 | smt a: 65.56 | smt d: 65.49 | no smt a: 47.97 | no smt b: 47.93 | c: 41.47 | a: 41.40 | b: 41.39
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, More Is Better)
smt c: 46.34 | smt d: 46.25 | smt b: 46.25 | smt a: 45.56 | no smt b: 34.68 | no smt a: 34.59 | c: 29.41 | a: 29.37 | b: 29.29

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
smt b: 175754.35 | smt a: 173926.92 | smt c: 173620.31 | smt d: 172228.34 | no smt b: 162294.47 | no smt a: 160545.41 | a: 112346.36 | c: 112186.25 | b: 111378.39
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

oneDNN 3.0 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
no smt b: 0.461174 (MIN: 0.43) | no smt a: 0.462386 (MIN: 0.42) | smt a: 0.672585 (MIN: 0.52) | smt b: 0.672768 (MIN: 0.53) | smt c: 0.676798 (MIN: 0.53) | smt d: 0.680395 (MIN: 0.53) | c: 0.708563 (MIN: 0.67) | a: 0.711588 (MIN: 0.68) | b: 0.712495 (MIN: 0.68)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Memcached

Memcached 1.6.18 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better)
a: 4878860.00 | b: 4876951.36 | c: 4852421.67 | smt d: 4403267.42 | smt c: 4394194.49 | smt a: 4383314.65 | smt b: 4357453.76 | no smt b: 3216133.91 | no smt a: 3192061.68
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Stress-NG

Stress-NG 0.15.04 - Test: Semaphores (Bogo Ops/s, More Is Better)
smt a: 20047519.05 | smt b: 19927313.52 | smt c: 19866440.86 | smt d: 19842969.28 | a: 18128283.29 | b: 18100474.36 | c: 18088584.67 | no smt b: 13192391.85 | no smt a: 13141129.68
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig for building all possible kernel modules. Learn more via the OpenBenchmarking.org test page.
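The reported metric is simply the wall-clock duration of the build. A minimal sketch of that kind of measurement, using a trivial stand-in command rather than the actual kernel build (the `make` invocation in the comment is illustrative, not the test profile's exact command line):

```python
import subprocess
import sys
import time

def time_command(cmd):
    """Run a command and return its wall-clock duration in seconds,
    the way a timed-compilation profile reports build time."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# A real run would time the actual kernel build after `make defconfig`,
# e.g. time_command(["make", "-j96"]); a trivial command stands in here
# so the sketch is self-contained.
elapsed = time_command([sys.executable, "-c", "pass"])
print(f"{elapsed:.2f} seconds")
```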

Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, Fewer Is Better)
smt b: 17.02 | smt c: 17.05 | smt d: 17.08 | smt a: 17.38 | no smt b: 17.59 | no smt a: 17.61 | c: 25.68 | b: 25.75 | a: 25.77

Stress-NG

Stress-NG 0.15.04 - Test: Poll (Bogo Ops/s, More Is Better)
smt d: 15403597.24 | smt a: 15359471.73 | smt b: 15341564.16 | smt c: 15228320.07 | c: 12676101.41 | b: 12661687.46 | a: 12653709.64 | no smt a: 10458275.24 | no smt b: 10393402.01
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark Scalar (Items / Sec, More Is Better)
smt b: 810 (MIN: 138 / MAX: 3650) | smt c: 793 (MIN: 139 / MAX: 3583) | smt d: 781 (MIN: 139 / MAX: 3776) | smt a: 764 (MIN: 139 / MAX: 3808) | no smt b: 652 (MIN: 102 / MAX: 3990) | no smt a: 647 (MIN: 101 / MAX: 4024) | b: 557 (MIN: 61 / MAX: 6019) | a: 556 (MIN: 61 / MAX: 5994) | c: 549 (MIN: 60 / MAX: 5475)

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
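The DeepSparse results below report each model twice, as ms/batch and as items/sec. In the synchronous single-stream scenario one batch is in flight at a time, so the two metrics are approximately reciprocal, which is a handy sanity check when reading the pairs of graphs. A small sketch of that conversion (the helper name is hypothetical):

```python
def items_per_sec(ms_per_batch, batch_size=1):
    """Convert a per-batch latency to throughput.

    With a single stream and one batch in flight at a time,
    throughput is roughly the reciprocal of latency.
    """
    return batch_size * 1000.0 / ms_per_batch

# The DistilBERT single-stream results below report 5.1633 ms/batch
# alongside 193.53 items/sec; 1000 / 5.1633 is roughly 193.7, so the
# two graphs agree to within measurement noise.
print(items_per_sec(5.1633))
```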

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
b: 5.1633 | c: 5.1696 | a: 5.1812 | no smt b: 6.4174 | smt b: 6.8844 | smt d: 7.0640 | smt c: 7.3550 | smt a: 7.5225 | no smt a: 7.5856

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
b: 193.53 | c: 193.31 | a: 192.88 | no smt b: 155.74 | smt b: 145.17 | smt d: 141.47 | smt c: 135.89 | smt a: 132.87 | no smt a: 131.76

uvg266

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better)
smt a: 47.56 | smt b: 46.62 | smt c: 46.57 | smt d: 46.21 | no smt b: 38.65 | no smt a: 38.38 | b: 33.13 | c: 33.10 | a: 33.03

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
a: 99.15 | b: 98.62 | c: 98.59 | smt a: 77.88 | smt b: 77.41 | smt d: 75.52 | no smt a: 72.13 | no smt b: 71.89 | smt c: 70.14

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
a: 10.08 | b: 10.14 | c: 10.14 | smt a: 12.83 | smt b: 12.91 | smt d: 13.23 | no smt a: 13.86 | no smt b: 13.90 | smt c: 14.25

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
a: 5.3018 | b: 5.4980 | c: 5.5007 | no smt a: 6.0118 | no smt b: 6.0650 | smt c: 6.1057 | smt d: 7.1490 | smt a: 7.1865 | smt b: 7.3054

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
a: 188.54 | b: 181.81 | c: 181.72 | no smt a: 166.27 | no smt b: 164.81 | smt c: 163.71 | smt d: 139.82 | smt a: 139.10 | smt b: 136.83

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
c: 11.16 | b: 11.19 | a: 11.36 | smt a: 13.64 | smt c: 14.20 | no smt b: 14.42 | no smt a: 14.46 | smt b: 14.64 | smt d: 15.13

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
c: 89.55 | b: 89.34 | a: 87.96 | smt a: 73.29 | smt c: 70.38 | no smt b: 69.33 | no smt a: 69.11 | smt b: 68.29 | smt d: 66.08

uvg266

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Super Fast (Frames Per Second, More Is Better)
b: 239.92 | a: 238.68 | c: 237.96 | smt b: 220.79 | smt d: 218.91 | no smt b: 216.15 | no smt a: 181.67 | smt c: 178.78 | smt a: 178.49

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
a: 240.98 | b: 240.91 | c: 238.77 | smt d: 224.31 | smt c: 218.28 | smt a: 216.15 | no smt b: 209.85 | no smt a: 186.24 | smt b: 179.56

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
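The compression-level results below illustrate the standard trade-off: higher levels spend more CPU time to produce smaller output, which is why the level 19 compression speeds are an order of magnitude below level 3. Python's standard library has no Zstd bindings, so the sketch below uses stdlib zlib as a stand-in to demonstrate the same level-versus-size trade-off; the sample data is illustrative, not the actual silesia.tar workload:

```python
import time
import zlib

# Compressible sample data standing in for silesia.tar in this sketch.
data = b"the quick brown fox jumps over the lazy dog\n" * 20_000

results = {}
for level in (1, 6, 9):  # zlib levels, analogous to zstd's level knob
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    assert zlib.decompress(compressed) == data  # round-trip check
    results[level] = (len(compressed), elapsed)

for level, (size, elapsed) in results.items():
    print(f"level {level}: {size} bytes in {elapsed:.4f}s")
```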

Zstd Compression 1.5.4 - Compression Level: 12 - Compression Speed (MB/s, More Is Better)
b: 332.0 | a: 330.8 | c: 330.1 | no smt a: 279.9 | no smt b: 278.2 | smt b: 259.1 | smt d: 256.2 | smt a: 254.1 | smt c: 249.8
1. (CC) gcc options: -O3 -pthread -lz

RocksDB

This is a benchmark of Meta/Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Update Random (Op/s, More Is Better)
b: 545556 | a: 544384 | c: 543572 | no smt a: 462018 | no smt b: 452228 | smt d: 420514 | smt c: 419985 | smt a: 413199 | smt b: 411513
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

RocksDB 7.9.2 - Test: Sequential Fill (Op/s, More Is Better)
b: 545396 | c: 544565 | a: 542256 | no smt a: 465700 | no smt b: 464044 | smt c: 414282 | smt a: 414168 | smt d: 414047 | smt b: 413708
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
c: 16.52 | a: 16.58 | b: 16.67 | no smt b: 20.42 | smt c: 20.49 | no smt a: 20.74 | smt a: 21.03 | smt d: 21.36 | smt b: 21.74

Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
c: 60.49 | a: 60.25 | b: 59.92 | no smt b: 48.93 | smt c: 48.77 | no smt a: 48.18 | smt a: 47.51 | smt d: 46.78 | smt b: 45.96

uvg266

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better)
c: 237.44 | a: 234.95 | b: 234.68 | smt b: 209.55 | no smt a: 196.25 | smt c: 192.25 | smt a: 183.99 | no smt b: 183.75 | smt d: 181.83

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million row web analytics dataset. The reported value is queries per minute, aggregated as the geometric mean across all of the separate queries performed. Learn more via the OpenBenchmarking.org test page.
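The aggregate is a geometric mean rather than an arithmetic mean, so a single very fast or very slow query cannot dominate the result. A minimal sketch of that aggregation, with hypothetical per-query rates:

```python
import math

def geometric_mean(values):
    """Geometric mean computed via logs to stay numerically stable
    on long inputs."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-query rates in queries per minute: the geometric
# mean of 600, 1200, and 300 is the cube root of their product.
print(round(geometric_mean([600.0, 1200.0, 300.0]), 2))  # prints 600.0
```

The arithmetic mean of the same three rates would be 700, pulled upward by the fastest query; the geometric mean weights each query's ratio equally.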

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better)
no smt a: 666.43 (MIN: 85.11 / MAX: 6666.67) | no smt b: 649.52 (MIN: 86.46 / MAX: 6000) | b: 627.20 (MIN: 58.54 / MAX: 6000) | a: 625.91 (MIN: 56.13 / MAX: 7500) | c: 621.88 (MIN: 58.54 / MAX: 6666.67) | smt b: 527.88 (MIN: 90.63 / MAX: 6666.67) | smt c: 524.75 (MIN: 90.23 / MAX: 6000) | smt a: 524.04 (MIN: 90.5 / MAX: 6666.67) | smt d: 515.11 (MIN: 88.5 / MAX: 6000)

RocksDB

RocksDB 7.9.2 - Test: Random Fill (Op/s, More Is Better)
c: 536551 | b: 534681 | a: 533927 | no smt a: 478629 | no smt b: 468210 | smt a: 423027 | smt b: 417882 | smt c: 416006 | smt d: 415428
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

ClickHouse

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better)
no smt a: 635.48 (MIN: 85.35 / MAX: 6000) | no smt b: 622.73 (MIN: 83.22 / MAX: 5454.55) | c: 614.24 (MIN: 57.14 / MAX: 6666.67) | b: 610.79 (MIN: 58.14 / MAX: 6666.67) | a: 600.70 (MIN: 57.75 / MAX: 6000) | smt b: 516.09 (MIN: 89.55 / MAX: 6000) | smt a: 500.13 (MIN: 65.15 / MAX: 6000) | smt d: 500.12 (MIN: 69.61 / MAX: 6000) | smt c: 495.85 (MIN: 51.06 / MAX: 6000)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
smt a: 8.1914 | smt b: 8.3506 | c: 8.8004 | smt d: 9.3656 | b: 9.5793 | a: 9.5962 | no smt a: 9.8899 | no smt b: 9.9395 | smt c: 10.3852

Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
smt a: 121.92 | smt b: 119.60 | c: 113.51 | smt d: 106.65 | b: 104.29 | a: 104.12 | no smt a: 101.01 | no smt b: 100.51 | smt c: 96.17

ClickHouse

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better)
no smt a: 665.00 (MIN: 85.84 / MAX: 6000) | no smt b: 662.01 (MIN: 86.33 / MAX: 6000) | c: 636.18 (MIN: 58.03 / MAX: 6666.67) | b: 628.37 (MIN: 58.37 / MAX: 6666.67) | a: 625.16 (MIN: 57.69 / MAX: 6000) | smt b: 538.63 (MIN: 89.82 / MAX: 6000) | smt a: 534.82 (MIN: 91.32 / MAX: 6000) | smt c: 530.47 (MIN: 92.02 / MAX: 5454.55) | smt d: 527.80 (MIN: 81.52 / MAX: 6000)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
c: 17.46 | b: 17.37 | a: 17.36 | smt c: 15.38 | smt d: 15.11 | no smt b: 14.50 | no smt a: 14.48 | smt b: 14.27 | smt a: 13.90
1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

OpenVINO

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better)
smt a: 0.53 (MIN: 0.5 / MAX: 24.2) | smt b: 0.53 (MIN: 0.5 / MAX: 9.61) | smt c: 0.53 (MIN: 0.5 / MAX: 9.15) | smt d: 0.53 (MIN: 0.5 / MAX: 8.86) | no smt a: 0.57 (MIN: 0.5 / MAX: 13.02) | no smt b: 0.58 (MIN: 0.5 / MAX: 12.8) | a: 0.65 (MIN: 0.32 / MAX: 20.68) | b: 0.66 (MIN: 0.32 / MAX: 19.99) | c: 0.66 (MIN: 0.31 / MAX: 22.67)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

uvg266

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
c: 71.13 | b: 70.68 | a: 70.56 | smt d: 58.76 | smt a: 57.96 | no smt a: 57.78 | smt b: 57.51 | smt c: 57.33 | no smt b: 57.23

OpenVKL

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec, More Is Better)
smt a: 1318 (MIN: 391 / MAX: 3855) | smt b: 1243 (MIN: 390 / MAX: 3501) | smt d: 1238 (MIN: 392 / MAX: 3738) | smt c: 1235 (MIN: 392 / MAX: 4332) | no smt b: 1108 (MIN: 297 / MAX: 4208) | no smt a: 1098 (MIN: 297 / MAX: 4284) | a: 1089 (MIN: 179 / MAX: 7194) | b: 1075 (MIN: 179 / MAX: 6442) | c: 1066 (MIN: 180 / MAX: 6427)

Zstd Compression

Zstd Compression 1.5.4 - Compression Level: 8 - Compression Speed (MB/s, More Is Better)
c: 1241.1 | b: 1239.3 | no smt b: 1234.2 | a: 1233.8 | no smt a: 1227.5 | smt b: 1122.4 | smt d: 1024.8 | smt a: 1023.9 | smt c: 1005.8
1. (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4 - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
b: 3095.0 | c: 3049.4 | a: 3033.9 | no smt b: 2865.9 | no smt a: 2804.2 | smt a: 2795.1 | smt c: 2604.6 | smt d: 2528.7 | smt b: 2513.4
1. (CC) gcc options: -O3 -pthread -lz

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 6.0 - Time To Compile (Seconds, Fewer Is Better)
smt b: 10.26 | smt d: 10.37 | smt a: 10.38 | smt c: 10.41 | no smt a: 10.60 | no smt b: 10.66 | b: 12.43 | c: 12.47 | a: 12.57

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better)
a: 296.93 | c: 291.03 | b: 290.59 | smt b: 274.63 | smt d: 270.76 | no smt a: 269.62 | no smt b: 268.77 | smt a: 250.71 | smt c: 243.36
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
c: 69.04 | a: 68.82 | b: 68.79 | smt a: 59.30 | no smt b: 58.09 | smt b: 57.86 | smt d: 57.52 | smt c: 57.44 | no smt a: 57.13

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second, More Is Better)
no smt b: 159.79 | no smt a: 155.56 | c: 140.10 | a: 139.92 | b: 139.07 | smt b: 136.33 | smt c: 135.80 | smt d: 135.56 | smt a: 132.67
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Super Fast (Frames Per Second, More Is Better)
a: 307.49 | b: 303.99 | c: 301.40 | smt a: 296.72 | no smt a: 288.41 | smt c: 288.09 | no smt b: 280.18 | smt b: 267.04 | smt d: 256.45
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, More Is Better)
c: 69.92 | a: 69.33 | b: 69.00 | no smt a: 59.83 | smt b: 59.20 | smt a: 59.06 | smt d: 58.66 | smt c: 58.34 | no smt b: 58.33

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
c: 28.57 | a: 28.62 | b: 28.71 | no smt a: 31.92 | smt d: 32.18 | no smt b: 32.18 | smt b: 32.24 | smt a: 32.30 | smt c: 33.99

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
c: 34.99 | a: 34.93 | b: 34.82 | no smt a: 31.32 | smt d: 31.07 | no smt b: 31.07 | smt b: 31.01 | smt a: 30.96 | smt c: 29.42

Zstd Compression

Zstd Compression 1.5.4 - Compression Level: 3, Long Mode - Compression Speed (MB/s, More Is Better)
smt d: 1059.0 | smt a: 1051.4 | smt c: 1046.6 | smt b: 1038.9 | no smt b: 1032.3 | no smt a: 955.6 | c: 916.9 | b: 909.4 | a: 892.7
1. (CC) gcc options: -O3 -pthread -lz

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better)
no smt b: 161.40 | no smt a: 159.63 | b: 144.21 | c: 143.81 | a: 143.31 | smt c: 140.89 | smt a: 140.78 | smt b: 138.33 | smt d: 136.42
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
c: 35.12 | b: 35.10 | a: 34.83 | smt c: 31.93 | no smt a: 31.91 | smt b: 31.78 | smt a: 31.76 | smt d: 30.25 | no smt b: 29.75

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
c: 28.47 | b: 28.49 | a: 28.71 | smt c: 31.31 | no smt a: 31.33 | smt b: 31.46 | smt a: 31.48 | smt d: 33.05 | no smt b: 33.60

RocksDB

RocksDB 7.9.2 - Test: Random Fill Sync (Op/s, More Is Better)
smt d: 404463 | smt b: 401295 | smt c: 394852 | smt a: 388052 | a: 376601 | b: 373002 | c: 356922 | no smt a: 350673 | no smt b: 344040
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
no smt a: 955.16 | no smt b: 955.24 | smt c: 971.90 | smt a: 972.52 | smt d: 974.99 | smt b: 975.50 | a: 1115.76 | b: 1118.44 | c: 1122.06

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
no smt b: 955.34 | no smt a: 956.01 | smt d: 966.76 | smt b: 969.21 | smt a: 970.22 | smt c: 971.67 | b: 1116.29 | a: 1116.43 | c: 1117.73

Kvazaar

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
b: 81.61 | a: 80.90 | c: 80.31 | smt a: 75.91 | smt b: 74.11 | smt c: 73.57 | smt d: 73.24 | no smt a: 70.02 | no smt b: 69.94
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
a: 310.73 | c: 309.64 | b: 305.52 | smt d: 303.63 | smt c: 303.43 | smt a: 302.55 | smt b: 295.18 | no smt a: 278.62 | no smt b: 271.45
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, More Is Better)
c: 84.55 | a: 81.90 | b: 80.68 | smt c: 78.72 | smt b: 76.70 | smt a: 76.64 | smt d: 75.50 | no smt a: 75.50 | no smt b: 74.03
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
b: 84.77 | a: 83.45 | c: 82.70 | smt d: 78.85 | smt a: 77.96 | smt b: 77.00 | smt c: 76.14 | no smt a: 76.05 | no smt b: 74.37
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenVINO

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better)
smt c: 0.96 (MIN: 0.87 / MAX: 9.72) | smt a: 0.97 (MIN: 0.87 / MAX: 9.78) | smt b: 0.97 (MIN: 0.87 / MAX: 10.96) | smt d: 0.97 (MIN: 0.86 / MAX: 13.31) | no smt a: 1.01 (MIN: 0.86 / MAX: 8.97) | no smt b: 1.01 (MIN: 0.86 / MAX: 8.72) | a: 1.09 (MIN: 0.49 / MAX: 19.01) | b: 1.09 (MIN: 0.48 / MAX: 25.82) | c: 1.09 (MIN: 0.49 / MAX: 22.06)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
c: 197.66 | a: 197.10 | b: 196.54 | no smt a: 191.26 | smt c: 189.41 | smt a: 188.58 | smt b: 188.41 | smt d: 183.95 | no smt b: 175.32

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
c: 5.0573 | a: 5.0715 | b: 5.0859 | no smt a: 5.2262 | smt c: 5.2773 | smt a: 5.3004 | smt b: 5.3053 | smt d: 5.4341 | no smt b: 5.7015

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
smt d: 42.48 | smt c: 42.55 | smt b: 42.65 | no smt a: 42.81 | no smt b: 42.84 | smt a: 42.95 | c: 47.29 | b: 47.35 | a: 47.46

VP9 libvpx Encoding

VP9 libvpx Encoding 1.13 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
b: 7.71 | c: 7.70 | a: 7.68 | smt a: 7.24 | no smt b: 7.16 | smt d: 7.15 | no smt a: 7.04 | smt b: 6.97 | smt c: 6.92
1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

uvg266

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second, More Is Better)
no smt b: 88.13 | no smt a: 87.15 | b: 81.41 | a: 81.16 | c: 81.10 | smt a: 80.17 | smt d: 80.00 | smt b: 79.95 | smt c: 79.63

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better)
no smt a: 98.21 | no smt b: 96.67 | b: 91.47 | a: 91.37 | c: 91.29 | smt b: 89.70 | smt a: 89.40 | smt c: 89.16 | smt d: 88.91

OpenVINO

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
no smt a: 9.18 (MIN: 8.17 / MAX: 31.1) | no smt b: 9.18 (MIN: 8.21 / MAX: 24.31) | smt c: 9.25 (MIN: 8.12 / MAX: 19.04) | smt d: 9.26 (MIN: 8.12 / MAX: 30.29) | smt a: 9.28 (MIN: 8.1 / MAX: 22.87) | smt b: 9.28 (MIN: 8.13 / MAX: 24.28) | a: 10.10 (MIN: 5.46 / MAX: 35.41) | b: 10.10 (MIN: 5.47 / MAX: 28.16) | c: 10.11 (MIN: 5.43 / MAX: 36.27)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Zstd Compression

Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better)
a: 938.5 | c: 926.9 | b: 910.8 | no smt b: 892.0 | smt d: 879.6 | smt c: 860.8 | no smt a: 859.9 | smt a: 853.8 | smt b: 852.7
1. (CC) gcc options: -O3 -pthread -lz

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  no smt a: 100.24
  no smt b: 100.42
  smt c: 102.10
  smt d: 102.76
  smt a: 103.09
  smt b: 103.19
  a: 108.68
  b: 109.11
  c: 109.25

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13, Speed: Speed 0 - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  c: 14.76
  a: 14.69
  b: 14.67
  no smt a: 14.28
  smt a: 14.08
  smt c: 13.97
  no smt b: 13.72
  smt d: 13.68
  smt b: 13.57
  (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better):
  smt b: 9.87
  smt c: 9.82
  smt d: 9.81
  smt a: 9.78
  no smt b: 9.76
  no smt a: 9.39
  c: 9.30
  b: 9.29
  a: 9.22
  (CC) gcc options: -O3 -pthread -lz

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  smt d: 301.92
  smt c: 303.25
  smt a: 303.33
  no smt b: 303.74
  smt b: 304.08
  no smt a: 304.21
  b: 319.96
  a: 319.98
  c: 320.05

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  no smt a: 73.20
  no smt b: 73.21
  smt a: 73.31
  smt c: 73.34
  smt b: 73.35
  smt d: 73.62
  c: 76.62
  b: 77.31
  a: 77.34

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, developed as a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7, Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, More Is Better):
  c: 30.08
  a: 30.04
  b: 29.85
  no smt a: 28.69
  no smt b: 28.52
  (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 1080p - Video Preset: Faster

smt a: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt b: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt c: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 19 - Compression Speed (MB/s, More Is Better):
  no smt b: 19.8
  no smt a: 19.8
  smt d: 19.6
  smt b: 19.5
  smt c: 19.2
  c: 19.1
  b: 19.1
  a: 19.1
  smt a: 18.8
  (CC) gcc options: -O3 -pthread -lz

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  smt c: 144.47
  smt d: 144.91
  smt b: 145.27
  no smt a: 145.35
  no smt b: 145.36
  smt a: 145.40
  c: 151.39
  b: 151.60
  a: 152.14

Neural Magic DeepSparse 1.3.2, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  no smt b: 115.64
  no smt a: 116.15
  smt b: 116.48
  smt c: 116.97
  smt a: 117.12
  smt d: 117.22
  a: 119.21
  b: 119.38
  c: 119.70

Stress-NG

Stress-NG 0.15.04, Test: Socket Activity (Bogo Ops/s, More Is Better):
  no smt a: 8968.28
  no smt b: 8924.10
  c: 8876.10
  a: 8873.58
  smt d: 8864.78
  b: 8851.65
  smt b: 8750.61
  smt a: 8748.99
  smt c: 8747.83
  (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 19 - Decompression Speed (MB/s, More Is Better):
  smt a: 1495.2
  no smt b: 1483.8
  smt b: 1483.5
  no smt a: 1483.1
  smt d: 1479.1
  smt c: 1475.2
  a: 1472.6
  c: 1470.5
  b: 1467.8
  (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4, Compression Level: 12 - Decompression Speed (MB/s, More Is Better):
  smt c: 1732.3
  smt d: 1731.2
  no smt a: 1728.0
  no smt b: 1727.8
  smt b: 1726.3
  smt a: 1723.7
  b: 1716.7
  c: 1715.6
  a: 1704.8
  (CC) gcc options: -O3 -pthread -lz

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, developed as a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7, Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better):
  no smt b: 5.895
  no smt a: 5.890
  a: 5.820
  b: 5.809
  c: 5.808
  (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 4K - Video Preset: Fast

smt a: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt b: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt c: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better):
  no smt b: 1397.6
  no smt a: 1395.8
  smt a: 1393.5
  smt b: 1392.4
  smt d: 1391.7
  smt c: 1389.0
  c: 1384.0
  a: 1383.7
  b: 1378.2
  (CC) gcc options: -O3 -pthread -lz

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, developed as a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7, Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better):
  no smt a: 12.48
  c: 12.40
  b: 12.39
  a: 12.34
  no smt b: 12.31
  (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 4K - Video Preset: Faster

smt a: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt b: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt c: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 3 - Decompression Speed (MB/s, More Is Better):
  smt b: 1519.1
  c: 1517.4
  b: 1516.6
  smt c: 1516.2
  smt a: 1515.6
  a: 1514.8
  smt d: 1513.5
  no smt b: 1511.5
  no smt a: 1500.4
  (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4, Compression Level: 8 - Decompression Speed (MB/s, More Is Better):
  smt b: 1671.0
  no smt a: 1669.7
  a: 1669.3
  no smt b: 1667.6
  smt a: 1664.3
  smt d: 1664.2
  smt c: 1664.0
  c: 1661.9
  b: 1651.0
  (CC) gcc options: -O3 -pthread -lz

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13, Speed: Speed 5 - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  no smt b: 29.71
  smt d: 29.66
  no smt a: 29.62
  smt b: 29.60
  smt c: 29.54
  smt a: 29.54
  b: 29.50
  c: 29.46
  a: 29.37
  (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  no smt a: 30.42
  b: 30.46
  c: 30.49
  smt c: 30.58
  a: 30.59
  no smt b: 30.64
  smt d: 30.72
  smt a: 30.75
  smt b: 30.75

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, developed as a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7, Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, More Is Better):
  c: 12.44
  a: 12.44
  b: 12.40
  no smt b: 12.37
  no smt a: 12.36
  (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 1080p - Video Preset: Fast

smt a: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt b: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt c: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better):
  no smt a: 1684.8
  smt c: 1683.7
  no smt b: 1682.8
  smt a: 1682.5
  a: 1677.9
  smt d: 1677.6
  smt b: 1676.5
  c: 1675.4
  b: 1673.7
  (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4, Compression Level: 3, Long Mode - Decompression Speed (MB/s, More Is Better):
  smt d: 1543.6
  no smt a: 1542.5
  smt c: 1540.9
  c: 1540.9
  smt b: 1540.4
  smt a: 1539.5
  no smt b: 1538.8
  b: 1537.5
  a: 1536.6
  (CC) gcc options: -O3 -pthread -lz

Stress-NG

Test: x86_64 RdRand

a: The test run did not produce a result. E: stress-ng: error: [1836509] No stress workers invoked (one or more were unsupported)

b: The test run did not produce a result. E: stress-ng: error: [3544257] No stress workers invoked (one or more were unsupported)

c: The test run did not produce a result. E: stress-ng: error: [1223131] No stress workers invoked (one or more were unsupported)

no smt a: The test run did not produce a result. E: stress-ng: error: [4177065] No stress workers invoked (one or more were unsupported)

no smt b: The test run did not produce a result. E: stress-ng: error: [19683] No stress workers invoked (one or more were unsupported)

smt a: The test run did not produce a result. E: stress-ng: error: [55524] No stress workers invoked (one or more were unsupported)

smt b: The test run did not produce a result. E: stress-ng: error: [3903578] No stress workers invoked (one or more were unsupported)

smt c: The test run did not produce a result. E: stress-ng: error: [4007271] No stress workers invoked (one or more were unsupported)

smt d: The test run did not produce a result. E: stress-ng: error: [381360] No stress workers invoked (one or more were unsupported)

Test: IO_uring

a: The test run did not produce a result.

b: The test run did not produce a result.

c: The test run did not produce a result.

no smt a: The test run did not produce a result.

no smt b: The test run did not produce a result.

smt a: The test run did not produce a result.

smt b: The test run did not produce a result.

smt c: The test run did not produce a result.

smt d: The test run did not produce a result.

Test: Zlib

a: The test run did not produce a result.

b: The test run did not produce a result.

c: The test run did not produce a result.

no smt a: The test run did not produce a result.

no smt b: The test run did not produce a result.

smt a: The test run did not produce a result.

smt b: The test run did not produce a result.

smt c: The test run did not produce a result.

smt d: The test run did not produce a result.

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig configuration for building all possible kernel modules. Learn more via the OpenBenchmarking.org test page.
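The pattern behind this test, configuring and then timing a wall-clock build, can be sketched as follows. This is a hypothetical illustration, not the Phoronix Test Suite's implementation: `timed_build` is an invented helper, and a trivial command stands in for the kernel build so the sketch runs anywhere.

```python
import subprocess
import sys
import time

def timed_build(cmd, cwd=None):
    """Run a build command to completion and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, cwd=cwd, check=True)
    return time.perf_counter() - start

# In a real kernel tree the sequence would resemble (hypothetical invocation):
#   timed_build(["make", "defconfig"]) followed by timed_build(["make", "-j192"])
# Here a trivial interpreter call is timed instead of an actual build.
elapsed = timed_build([sys.executable, "-c", "pass"])
```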

Build: allmodconfig

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

c: The test quit with a non-zero exit status.

no smt a: The test quit with a non-zero exit status.

no smt b: The test quit with a non-zero exit status.

smt a: The test quit with a non-zero exit status.

smt b: The test quit with a non-zero exit status.

smt c: The test quit with a non-zero exit status.

smt d: The test quit with a non-zero exit status.

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt d: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer ISPC - Model: Asian Dragon

a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt d: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer - Model: Asian Dragon Obj

a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt d: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer - Model: Asian Dragon

a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt d: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer ISPC - Model: Crown

a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt d: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer - Model: Crown

a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt d: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

169 Results Shown

Stress-NG
oneDNN:
  IP Shapes 1D - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  IP Shapes 1D - f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  IP Shapes 1D - u8s8f32 - CPU
Stress-NG:
  Context Switching
  NUMA
oneDNN
OpenVINO:
  Vehicle Detection FP16 - CPU:
    ms
    FPS
Stress-NG
RocksDB
Stress-NG:
  Matrix Math
  CPU Cache
  CPU Stress
  Vector Math
  Mutex
  Crypto
oneDNN
Stress-NG:
  SENDFILE
  Function Call
oneDNN
Stress-NG
Neural Magic DeepSparse
Stress-NG
Neural Magic DeepSparse
oneDNN
Stress-NG
Neural Magic DeepSparse
oneDNN
Stress-NG
OpenVINO:
  Face Detection FP16-INT8 - CPU
  Face Detection FP16 - CPU
  Face Detection FP16 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
  Face Detection FP16-INT8 - CPU
oneDNN:
  IP Shapes 3D - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - f32 - CPU
OpenVINO:
  Weld Porosity Detection FP16 - CPU:
    FPS
    ms
Neural Magic DeepSparse
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
OpenVINO:
  Person Detection FP32 - CPU
  Vehicle Detection FP16-INT8 - CPU
  Vehicle Detection FP16-INT8 - CPU
  Person Detection FP32 - CPU
Neural Magic DeepSparse
Stress-NG
OpenVINO:
  Person Detection FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    FPS
RocksDB
Neural Magic DeepSparse
oneDNN
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
oneDNN:
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
Stress-NG:
  MEMFD
  Memory Copying
  Forking
  Futex
GROMACS
RocksDB
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
Memcached
Stress-NG
oneDNN
Kvazaar
Memcached
oneDNN
Kvazaar
uvg266
OpenVINO
oneDNN
Memcached
Stress-NG
Timed Linux Kernel Compilation
Stress-NG
OpenVKL
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
uvg266
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
uvg266:
  Bosphorus 1080p - Super Fast
  Bosphorus 1080p - Ultra Fast
Zstd Compression
RocksDB:
  Update Rand
  Seq Fill
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    ms/batch
    items/sec
uvg266
ClickHouse
RocksDB
ClickHouse
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
ClickHouse
VP9 libvpx Encoding
OpenVINO
uvg266
OpenVKL
Zstd Compression:
  8 - Compression Speed
  3 - Compression Speed
Timed FFmpeg Compilation
Kvazaar
uvg266
Kvazaar:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Super Fast
uvg266
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
Zstd Compression
Kvazaar
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
RocksDB
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
Kvazaar:
  Bosphorus 4K - Very Fast
  Bosphorus 1080p - Ultra Fast
  Bosphorus 4K - Super Fast
  Bosphorus 4K - Ultra Fast
OpenVINO
Neural Magic DeepSparse:
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
VP9 libvpx Encoding
uvg266:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Medium
OpenVINO
Zstd Compression
Neural Magic DeepSparse
VP9 libvpx Encoding
Zstd Compression
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
VVenC
Zstd Compression
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
Stress-NG
Zstd Compression:
  19 - Decompression Speed
  12 - Decompression Speed
VVenC
Zstd Compression
VVenC
Zstd Compression:
  3 - Decompression Speed
  8 - Decompression Speed
VP9 libvpx Encoding
Neural Magic DeepSparse
VVenC
Zstd Compression:
  8, Long Mode - Decompression Speed
  3, Long Mode - Decompression Speed