9654 new

2 x AMD EPYC 9654 96-Core testing with an AMD Titanite_4G (RTI1004D BIOS) and llvmpipe on Red Hat Enterprise Linux 9.1 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2303114-NE-9654NEW5019

Tests in this result file fall within the following categories:

  Timed Code Compilation: 2 Tests
  C/C++ Compiler Tests: 5 Tests
  CPU Massive: 5 Tests
  Creator Workloads: 8 Tests
  Database Test Suite: 3 Tests
  Encoding: 4 Tests
  HPC - High Performance Computing: 4 Tests
  Common Kernel Benchmarks: 2 Tests
  Machine Learning: 3 Tests
  Multi-Core: 12 Tests
  Intel oneAPI: 4 Tests
  Programmer / Developer System Benchmarks: 3 Tests
  Python Tests: 2 Tests
  Server: 3 Tests
  Server CPU Tests: 4 Tests
  Video Encoding: 4 Tests


Run Management

  Result Identifier   Date Run        Test Duration
  a                   March 09 2023   2 Hours, 22 Minutes
  b                   March 10 2023   2 Hours, 23 Minutes
  c                   March 10 2023   2 Hours, 22 Minutes
  no smt a            March 10 2023   2 Hours, 24 Minutes
  no smt b            March 10 2023   2 Hours, 24 Minutes
  smt a               March 11 2023   2 Hours, 30 Minutes
  smt b               March 11 2023   2 Hours, 30 Minutes
  smt c               March 11 2023   2 Hours, 30 Minutes
  smt d               March 11 2023   2 Hours, 30 Minutes
  Average                             2 Hours, 26 Minutes



9654 new - System Details (runs a, b, c, no smt a, no smt b, smt a, smt b, smt c, smt d; semicolon-separated values indicate fields that differ between configurations):

  Processor: AMD EPYC 9654 96-Core @ 2.40GHz (96 Cores / 192 Threads); 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores); 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores / 384 Threads)
  Motherboard: AMD Titanite_4G (RTI1004D BIOS)
  Chipset: AMD Device 14a4
  Memory: 768GB; 1520GB
  Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007
  Graphics: ASPEED; llvmpipe
  Monitor: VGA HDMI
  Network: Broadcom NetXtreme BCM5720 PCIe
  OS: Red Hat Enterprise Linux 9.1
  Kernel: 5.14.0-162.6.1.el9_1.x86_64 (x86_64)
  Desktop: GNOME Shell 40.10
  Display Server: X Server 1.20.11
  Compiler: GCC 11.3.1 20220421
  File-System: xfs
  Screen Resolution: 1600x1200; 1024x768
  OpenGL: 4.5 Mesa 22.1.5 (LLVM 14.0.6 256 bits)

Kernel Details: Transparent Huge Pages: always

Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl

Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xa101111

Python Details: Python 3.9.14

Security Details (identical across all runs except where noted): SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on (STIBP: disabled for the "no smt" runs) RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (runs a, b, c, no smt a, no smt b, smt a, smt b, smt c, smt d; normalized results, 100% to 180% scale) covering: oneDNN, OpenVINO, GROMACS, Memcached, Timed Linux Kernel Compilation, OpenVKL, Stress-NG, ClickHouse, Timed FFmpeg Compilation, Neural Magic DeepSparse, VP9 libvpx Encoding, uvg266, Kvazaar, RocksDB, and Zstd Compression.
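The percentages in such an overview are normalized relative results; a Phoronix Test Suite-style overview is, in essence, a geometric mean of each run's results after normalizing every test against a common baseline. The exact test set and baseline behind this file's chart are not restated here, so the sketch below is illustrative only, using two higher-is-better Stress-NG values that appear later in this file:

```python
from math import prod

def geomean(values):
    """Geometric mean of a list of positive numbers."""
    return prod(values) ** (1.0 / len(values))

# Two higher-is-better data points taken from the Stress-NG results
# in this file, for runs "a" (used as baseline) and "smt b".
results = {
    "MMAP":      {"a": 1663.08, "smt b": 9017.12},
    "CPU Cache": {"a": 77.51,   "smt b": 44.38},
}

overview = {}
for run in ("a", "smt b"):
    # Normalize each test to baseline run "a", then geomean the ratios.
    ratios = [per_run[run] / per_run["a"] for per_run in results.values()]
    overview[run] = 100.0 * geomean(ratios)
# overview["a"] is exactly 100.0; overview["smt b"] lands around 176.
```

The geometric mean is used rather than the arithmetic mean so that no single test with large absolute values dominates the summary.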

9654 new - combined result table for all nine runs (a, b, c, no smt a, no smt b, smt a, smt b, smt c, smt d) across the full test set: Stress-NG, oneDNN, OpenVINO, RocksDB, Neural Magic DeepSparse, GROMACS, Memcached, ClickHouse, Kvazaar, uvg266, vvenc, VP9 libvpx (vpxenc), Zstd compression, OpenVKL, Embree, Timed Linux Kernel Compilation, and Timed FFmpeg Compilation. Individual per-test results follow.

Stress-NG

Stress-NG 0.15.04, Test: MMAP (Bogo Ops/s, more is better):
  smt d    : 7273.19
  smt c    : 7633.16
  smt b    : 9017.12
  smt a    : 8360.76
  no smt b : 3591.71
  no smt a : 4520.48
  c        : 1668.92
  b        : 1664.75
  a        : 1663.08
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
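Each oneDNN result below reports an average time together with a MIN for the run. As a rough illustration of that avg/MIN reporting style (benchdnn itself is a C++ tool; the workload here is a stand-in, not a oneDNN primitive):

```python
import time

def bench(fn, reps=20):
    """Time fn() several times; return (average ms, minimum ms)."""
    times_ms = []
    for _ in range(reps):
        t0 = time.perf_counter()
        fn()
        times_ms.append((time.perf_counter() - t0) * 1000.0)
    # The benchmark tables report both the average and the best (MIN) time.
    return sum(times_ms) / len(times_ms), min(times_ms)

# Stand-in workload: a small arithmetic loop.
avg_ms, min_ms = bench(lambda: sum(i * i for i in range(50_000)))
```

The gap between the average and the MIN gives a feel for run-to-run variance, which is why both are shown for each configuration.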

oneDNN 3.0, Harness: IP Shapes 1D, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better):
  smt d    : 11.75730 (MIN: 8.17)
  smt c    : 18.65940 (MIN: 11.23)
  smt b    : 19.54900 (MIN: 10.67)
  smt a    : 15.42100 (MIN: 10.4)
  no smt b : 7.98861 (MIN: 3.88)
  no smt a : 6.19257 (MIN: 3.65)
  c        : 3.83017 (MIN: 2.72)
  b        : 9.19570 (MIN: 4.13)
  a        : 8.74576 (MIN: 3.65)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0, Harness: Recurrent Neural Network Inference, Data Type: f32, Engine: CPU (ms, fewer is better):
  smt d    : 3349.36 (MIN: 3325.75)
  smt c    : 3011.21 (MIN: 2853.26)
  smt b    : 3034.62 (MIN: 2821.2)
  smt a    : 3187.00 (MIN: 3034.82)
  no smt b : 901.04 (MIN: 864.28)
  no smt a : 930.26 (MIN: 898.89)
  c        : 669.15 (MIN: 661.71)
  b        : 671.65 (MIN: 664.19)
  a        : 674.09 (MIN: 667.01)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0, Harness: IP Shapes 1D, Data Type: f32, Engine: CPU (ms, fewer is better):
  smt d    : 8.26078 (MIN: 7.09)
  smt c    : 9.34981 (MIN: 7.73)
  smt b    : 7.42270 (MIN: 6.44)
  smt a    : 7.60566 (MIN: 6.54)
  no smt b : 1.93126 (MIN: 1.77)
  no smt a : 2.04003 (MIN: 1.78)
  c        : 1.88686 (MIN: 1.68)
  b        : 1.92340 (MIN: 1.72)
  a        : 2.01581 (MIN: 1.81)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0, Harness: Recurrent Neural Network Inference, Data Type: u8s8f32, Engine: CPU (ms, fewer is better):
  smt d    : 3171.15 (MIN: 3148.29)
  smt c    : 3153.22 (MIN: 3059.88)
  smt b    : 3249.28 (MIN: 3226.78)
  smt a    : 3269.05 (MIN: 3243.11)
  no smt b : 903.36 (MIN: 874.93)
  no smt a : 913.20 (MIN: 884.92)
  c        : 671.23 (MIN: 662.9)
  b        : 670.20 (MIN: 663.54)
  a        : 671.69 (MIN: 664.46)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0, Harness: Recurrent Neural Network Inference, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better):
  smt d    : 3172.19 (MIN: 3155.85)
  smt c    : 3185.64 (MIN: 2949.58)
  smt b    : 3222.93 (MIN: 2927.41)
  smt a    : 3133.65 (MIN: 3037.06)
  no smt b : 939.55 (MIN: 904.78)
  no smt a : 973.32 (MIN: 937.3)
  c        : 667.95 (MIN: 660.93)
  b        : 668.95 (MIN: 662.29)
  a        : 675.92 (MIN: 668.99)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0, Harness: IP Shapes 1D, Data Type: u8s8f32, Engine: CPU (ms, fewer is better):
  smt d    : 12.15710 (MIN: 8.52)
  smt c    : 12.01300 (MIN: 8.02)
  smt b    : 7.30147 (MIN: 5.77)
  smt a    : 11.43260 (MIN: 7.59)
  no smt b : 4.79302 (MIN: 3.4)
  no smt a : 4.87883 (MIN: 3.53)
  c        : 4.39720 (MIN: 3.21)
  b        : 4.27858 (MIN: 3.37)
  a        : 3.11497 (MIN: 2.47)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04, Test: Context Switching (Bogo Ops/s, more is better):
  smt d    : 12728401.45
  smt c    : 12328933.65
  smt b    : 12221058.34
  smt a    : 12895047.73
  no smt b : 44683906.68
  no smt a : 47222624.49
  c        : 16862185.54
  b        : 16313126.86
  a        : 18941003.97
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04, Test: NUMA (Bogo Ops/s, more is better):
  smt d    : 24.71
  smt c    : 24.79
  smt b    : 24.77
  smt a    : 24.82
  no smt b : 19.78
  no smt a : 20.51
  c        : 478.05
  b        : 498.67
  a        : 483.90
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN


oneDNN 3.0, Harness: Deconvolution Batch shapes_1d, Data Type: f32, Engine: CPU (ms, fewer is better):
  smt d    : 20.97260 (MIN: 18.21)
  smt c    : 21.03010 (MIN: 17.96)
  smt b    : 20.57200 (MIN: 18.25)
  smt a    : 20.26950 (MIN: 17.68)
  no smt b : 9.31751 (MIN: 8.07)
  no smt a : 9.25292 (MIN: 7.79)
  c        : 7.38116 (MIN: 6.77)
  b        : 7.28243 (MIN: 6.58)
  a        : 7.35746 (MIN: 4.85)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural-network workloads, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Vehicle Detection FP16, Device: CPU (ms, fewer is better):
  smt d    : 6.00 (MIN: 5.21 / MAX: 25.51)
  smt c    : 5.99 (MIN: 5.13 / MAX: 31.29)
  smt b    : 6.03 (MIN: 5.17 / MAX: 38.35)
  smt a    : 6.02 (MIN: 5.27 / MAX: 25.12)
  no smt b : 6.01 (MIN: 5.02 / MAX: 36.88)
  no smt a : 6.01 (MIN: 5.2 / MAX: 37.8)
  c        : 15.10 (MIN: 6.94 / MAX: 60.59)
  b        : 15.54 (MIN: 8.28 / MAX: 57.72)
  a        : 16.30 (MIN: 7.91 / MAX: 51.62)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3, Model: Vehicle Detection FP16, Device: CPU (FPS, more is better):
  smt d    : 7990.47
  smt c    : 7993.18
  smt b    : 7949.05
  smt a    : 7953.21
  no smt b : 7979.82
  no smt a : 7979.05
  c        : 3174.04
  b        : 3085.53
  a        : 2941.47
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
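The two Vehicle Detection FP16 results are two views of the same run: with N inference requests in flight, throughput is approximately N divided by the average latency (Little's law). A quick sanity check on run "a" (the in-flight request count is inferred here, not a value reported in this file):

```python
# Run "a" values from the two Vehicle Detection FP16 results above.
latency_ms = 16.30   # average latency per inference (ms)
fps = 2941.47        # throughput (frames per second)

# Little's law: concurrency = throughput * latency.
inferred_inflight = fps * latency_ms / 1000.0   # about 48 requests in flight
```

An inferred concurrency of roughly 48 is plausible for a many-core CPU device, which is why high throughput can coexist with per-request latencies well above 1000/FPS.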

Stress-NG

Stress-NG 0.15.04, Test: Pthread (Bogo Ops/s, more is better)
  smt d: 91735.75; smt c: 77834.39; smt b: 180407.59; smt a: 74978.51; no smt b: 67451.30; no smt a: 68076.03; c: 109609.15; b: 109356.78; a: 109397.78
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2, Test: Random Read (Op/s, more is better)
  smt d: 1231916197; smt c: 1234570512; smt b: 1231168093; smt a: 1225662852; no smt b: 1213540299; no smt a: 1209611055; c: 468069792; b: 466888888; a: 468231434
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti
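The viewer's "Show Perf Per Core/Thread Calculation Graphs" option normalizes results like this one by the core count. A minimal sketch of that normalization for the dual EPYC 9654 system (2 x 96 physical cores):

```python
# Random read throughput for the "smt d" run divided across 2 x 96 = 192 cores
ops = 1231916197
cores = 2 * 96

per_core = ops / cores
print(int(per_core))  # roughly 6.4 million random reads per second per core
```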

Stress-NG

Stress-NG 0.15.04, Test: Matrix Math (Bogo Ops/s, more is better)
  smt d: 951700.17; smt c: 952461.48; smt b: 946660.88; smt a: 946032.88; no smt b: 925984.24; no smt a: 932248.18; c: 382328.50; b: 382304.04; a: 382305.69
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04, Test: CPU Cache (Bogo Ops/s, more is better)
  smt d: 40.56; smt c: 42.55; smt b: 44.38; smt a: 47.58; no smt b: 55.84; no smt a: 40.94; c: 97.04; b: 67.21; a: 77.51
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04, Test: CPU Stress (Bogo Ops/s, more is better)
  smt d: 489998.75; smt c: 490300.48; smt b: 487987.32; smt a: 487359.39; no smt b: 326819.72; no smt a: 328297.04; c: 217304.19; b: 217072.27; a: 205134.87
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04, Test: Vector Math (Bogo Ops/s, more is better)
  smt d: 1300693.93; smt c: 1295970.28; smt b: 1291704.57; smt a: 1291689.33; no smt b: 920642.34; no smt a: 920216.27; c: 556833.50; b: 556797.27; a: 556875.59
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04, Test: Mutex (Bogo Ops/s, more is better)
  smt d: 137579874.56; smt c: 138939958.50; smt b: 135855921.28; smt a: 136339636.91; no smt b: 65500297.73; no smt a: 63581395.67; c: 59929401.40; b: 60031543.97; a: 59479783.64
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04, Test: Crypto (Bogo Ops/s, more is better)
  smt d: 468006.00; smt c: 468159.21; smt b: 466609.22; smt a: 466292.12; no smt b: 437065.75; no smt a: 435615.35; c: 203095.44; b: 203147.15; a: 203073.15
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread
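The "Show Overall Geometric Mean" option aggregates "more is better" results across unrelated tests; the geometric mean is used because it is scale-invariant, so no single high-magnitude test dominates. A minimal sketch, using three of the Stress-NG results above normalized as smt d over a ratios (the choice of baseline is an assumption for illustration):

```python
import math

def geomean(values):
    # geometric mean: nth root of the product, computed via logs for stability
    return math.exp(sum(math.log(v) for v in values) / len(values))

# smt d / a ratios from three Stress-NG graphs above (more is better)
ratios = [489998.75 / 217304.19,   # CPU Stress (vs c run)
          1300693.93 / 556833.50,  # Vector Math (vs c run)
          468006.00 / 203095.44]   # Crypto (vs c run)
print(round(geomean(ratios), 2))
```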

oneDNN

oneDNN 3.0, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  smt d: 3.50563 (min 3.19); smt c: 3.60194 (min 3.08); smt b: 3.49453 (min 3.11); smt a: 3.39151 (min 2.93); no smt b: 2.12379 (min 1.9); no smt a: 2.19645 (min 2); c: 1.56742 (min 1.39); b: 1.56257 (min 1.41); a: 1.56461 (min 1.41)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04, Test: SENDFILE (Bogo Ops/s, more is better)
  smt d: 4351920.42; smt c: 4351871.12; smt b: 4329824.38; smt a: 4329963.26; no smt b: 3282827.50; no smt a: 3284433.20; c: 1891202.37; b: 1913590.77; a: 1950323.96
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04, Test: Function Call (Bogo Ops/s, more is better)
  smt d: 1425621.44; smt c: 1422505.51; smt b: 1413555.44; smt a: 1414423.65; no smt b: 829445.38; no smt a: 829106.63; c: 621041.93; b: 621003.92; a: 621015.34
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN

oneDNN 3.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  smt d: 0.602831 (min 0.48); smt c: 0.593299 (min 0.39); smt b: 0.600199 (min 0.38); smt a: 0.591738 (min 0.43); no smt b: 0.293564 (min 0.22); no smt a: 0.316665 (min 0.24); c: 0.262920 (min 0.2); b: 0.263373 (min 0.2); a: 0.263997 (min 0.18)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04, Test: Atomic (Bogo Ops/s, more is better)
  smt d: 182.80; smt c: 184.24; smt b: 186.64; smt a: 184.34; no smt b: 395.95; no smt a: 400.03; c: 183.33; b: 223.29; a: 174.72
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  smt d: 97.89; smt c: 96.91; smt b: 97.35; smt a: 97.29; no smt b: 97.34; no smt a: 97.40; c: 42.92; b: 42.83; a: 42.96

Stress-NG

Stress-NG 0.15.04, Test: Glibc Qsort Data Sorting (Bogo Ops/s, more is better)
  smt d: 2516.09; smt c: 2517.26; smt b: 2520.38; smt a: 2564.48; no smt b: 1963.75; no smt a: 1978.80; c: 1125.86; b: 1132.35; a: 1122.39
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  smt d: 96.72; smt c: 97.00; smt b: 96.91; smt a: 97.02; no smt b: 97.38; no smt a: 97.44; c: 42.74; b: 42.90; a: 42.99

oneDNN

oneDNN 3.0, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  smt d: 1.233400 (min 1.08); smt c: 1.201460 (min 1.08); smt b: 1.181350 (min 1.08); smt a: 1.200490 (min 1.04); no smt b: 0.670574 (min 0.55); no smt a: 0.664359 (min 0.56); c: 0.556851 (min 0.53); b: 0.551966 (min 0.49); a: 0.555277 (min 0.53)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04, Test: Glibc C String Functions (Bogo Ops/s, more is better)
  smt d: 36111781.82; smt c: 34602827.77; smt b: 35996976.13; smt a: 35687958.37; no smt b: 28009942.56; no smt a: 26755146.69; c: 16537965.14; b: 16168655.17; a: 16257100.25
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  smt d: 2255.42; smt c: 2253.14; smt b: 2245.80; smt a: 2230.31; no smt b: 2203.16; no smt a: 2204.75; c: 1013.74; b: 1012.35; a: 1009.85

oneDNN

oneDNN 3.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  smt d: 0.627133 (min 0.4); smt c: 0.634992 (min 0.55); smt b: 0.620907 (min 0.41); smt a: 0.621776 (min 0.45); no smt b: 0.340498 (min 0.28); no smt a: 0.316779 (min 0.25); c: 0.285432 (min 0.24); b: 0.291387 (min 0.23); a: 0.290200 (min 0.25)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04, Test: Hash (Bogo Ops/s, more is better)
  smt d: 41965286.06; smt c: 41972765.93; smt b: 41989522.68; smt a: 41966139.67; no smt b: 27422305.53; no smt a: 27408413.10; c: 18961773.18; b: 18955118.10; a: 18954936.00
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

OpenVINO

OpenVINO 2022.3, Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better)
  smt d: 209.24; smt c: 210.19; smt b: 208.19; smt a: 208.13; no smt b: 210.14; no smt a: 208.82; c: 95.59; b: 95.48; a: 95.76
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3, Model: Face Detection FP16 - Device: CPU (ms, fewer is better)
  smt d: 437.68 (min 394.89, max 478.58); smt c: 437.51 (min 400.05, max 473.91); smt b: 439.50 (min 424.22, max 477.66); smt a: 439.41 (min 410.81, max 465.4); no smt b: 438.99 (min 427.57, max 484.31); no smt a: 438.29 (min 416.93, max 496.86); c: 962.15 (min 888.7, max 1017.92); b: 962.63 (min 893.43, max 1015.71); a: 962.10 (min 879.24, max 1018.81)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3, Model: Face Detection FP16 - Device: CPU (FPS, more is better)
  smt d: 109.23; smt c: 109.36; smt b: 108.89; smt a: 108.93; no smt b: 109.04; no smt a: 109.08; c: 49.77; b: 49.72; a: 49.72
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3, Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better)
  smt d: 20679.68; smt c: 20684.29; smt b: 20610.62; smt a: 20610.23; no smt b: 20849.75; no smt a: 20836.10; c: 9485.65; b: 9496.95; a: 9494.22
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3, Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better)
  smt d: 228.94 (min 214.92, max 248.29); smt c: 227.93 (min 210.46, max 252.96); smt b: 230.06 (min 210.58, max 251.42); smt a: 230.21 (min 211.56, max 253.58); no smt b: 228.01 (min 212.99, max 267.17); no smt a: 229.46 (min 217.93, max 265.81); c: 500.30 (min 410.04, max 531.93); b: 500.83 (min 418.76, max 546.62); a: 499.32 (min 264.54, max 537.26)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

oneDNN 3.0, Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  smt d: 2.97427 (min 2.3); smt c: 3.36535 (min 2.7); smt b: 3.59643 (min 2.7); smt a: 3.38833 (min 2.76); no smt b: 2.02833 (min 1.43); no smt a: 2.02286 (min 1.56); c: 1.63860 (min 1.2); b: 1.72975 (min 1.27); a: 1.87777 (min 1.23)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  smt d: 1957.77 (min 1929.47); smt c: 1888.21 (min 1860.29); smt b: 1890.28 (min 1864.29); smt a: 1912.16 (min 1891.03); no smt b: 1124.75 (min 1091.36); no smt a: 1148.26 (min 1108.42); c: 906.14 (min 898.41); b: 903.27 (min 894.76); a: 907.49 (min 897.95)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

OpenVINO 2022.3, Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better)
  smt d: 10587.30; smt c: 10603.84; smt b: 10549.15; smt a: 10544.03; no smt b: 10549.24; no smt a: 10556.93; c: 4910.13; b: 4915.42; a: 4914.17
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3, Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better)
  smt d: 4.52 (min 4.03, max 45.52); smt c: 4.52 (min 4.12, max 33.17); smt b: 4.54 (min 4.11, max 42.14); smt a: 4.54 (min 4.07, max 35.16); no smt b: 4.54 (min 4.09, max 55.78); no smt a: 4.54 (min 4.12, max 27.91); c: 9.76 (min 4.98, max 36.12); b: 9.75 (min 5.25, max 35.53); a: 9.76 (min 5.03, max 28.1)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  smt d: 931.82; smt c: 937.99; smt b: 927.68; smt a: 929.02; no smt b: 938.91; no smt a: 941.24; c: 438.44; b: 439.04; a: 440.85

oneDNN

oneDNN 3.0, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  smt d: 1883.76 (min 1850.75); smt c: 1926.15 (min 1901.09); smt b: 1937.65 (min 1908.65); smt a: 1830.41 (min 1807.97); no smt b: 1119.17 (min 1086.08); no smt a: 1151.25 (min 1070.35); c: 915.08 (min 906.26); b: 912.11 (min 902.43); a: 911.25 (min 903.01)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  smt d: 1912.74 (min 1890.93); smt c: 1925.68 (min 1901.2); smt b: 1894.07 (min 1870.66); smt a: 1888.10 (min 1865.76); no smt b: 1139.63 (min 1100.83); no smt a: 1147.34 (min 1109.85); c: 911.76 (min 901.88); b: 917.08 (min 909.14); a: 909.69 (min 901.1)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  smt d: 316.03; smt c: 315.33; smt b: 314.25; smt a: 314.62; no smt b: 308.30; no smt a: 308.43; c: 149.69; b: 149.82; a: 149.74

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  smt d: 1301.18; smt c: 1305.71; smt b: 1306.58; smt a: 1306.67; no smt b: 1292.80; no smt a: 1291.97; c: 625.55; b: 619.94; a: 619.75

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  smt d: 660.21; smt c: 662.26; smt b: 658.34; smt a: 658.21; no smt b: 647.76; no smt a: 649.30; c: 316.63; b: 316.29; a: 314.84

OpenVINO

OpenVINO 2022.3, Model: Person Detection FP32 - Device: CPU (FPS, more is better)
  smt d: 57.62; smt c: 57.71; smt b: 57.31; smt a: 57.29; no smt b: 57.19; no smt a: 57.18; c: 27.89; b: 28.23; a: 27.97
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3, Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better)
  smt d: 3.93 (min 3.61, max 23.62); smt c: 3.93 (min 3.61, max 42.82); smt b: 3.96 (min 3.66, max 32.83); smt a: 3.95 (min 3.61, max 34.38); no smt b: 3.95 (min 3.68, max 42.53); no smt a: 3.95 (min 3.61, max 38); c: 8.13 (min 3.83, max 59.87); b: 8.13 (min 5.35, max 55.84); a: 8.11 (min 5.39, max 69.87)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3, Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better)
  smt d: 12172.92; smt c: 12168.05; smt b: 12109.64; smt a: 12117.16; no smt b: 12128.38; no smt a: 12119.66; c: 5898.79; b: 5894.27; a: 5908.89
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3, Model: Person Detection FP32 - Device: CPU (ms, fewer is better)
  smt d: 828.01 (min 716.19, max 1018.34); smt c: 826.67 (min 724.1, max 1006.75); smt b: 832.70 (min 723.65, max 1031.69); smt a: 833.15 (min 725.84, max 1017.38); no smt b: 834.17 (min 732.01, max 1006.46); no smt a: 833.99 (min 723.91, max 1011.94); c: 1704.02 (min 828.99, max 1969.02); b: 1685.26 (min 891.16, max 1979.37); a: 1701.70 (min 1395.71, max 2063.97)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  smt d: 816.76; smt c: 818.60; smt b: 822.16; smt a: 817.75; no smt b: 817.63; no smt a: 813.98; c: 400.04; b: 401.21; a: 401.78

Stress-NG

Stress-NG 0.15.04, Test: Malloc (Bogo Ops/s, more is better)
  smt d: 640853365.54; smt c: 639757070.52; smt b: 634995429.66; smt a: 634718750.41; no smt b: 456651508.04; no smt a: 456657338.84; c: 313768771.26; b: 314418461.02; a: 312709034.05
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

OpenVINO

OpenVINO 2022.3, Model: Person Detection FP16 - Device: CPU (FPS, more is better)
  smt d: 58.00; smt c: 58.05; smt b: 57.67; smt a: 57.72; no smt b: 57.49; no smt a: 57.58; c: 28.33; b: 28.42; a: 28.47
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3, Model: Person Detection FP16 - Device: CPU (ms, fewer is better)
  smt d: 822.50 (min 717.65, max 997.43); smt c: 821.93 (min 725.25, max 1010.56); smt b: 827.03 (min 723.77, max 1003.25); smt a: 827.07 (min 724.84, max 1037.91); no smt b: 830.04 (min 722.6, max 1015.67); no smt a: 828.67 (min 730.29, max 1036.1); c: 1679.03 (min 865.58, max 1995.12); b: 1675.96 (min 1231.52, max 1967.75); a: 1671.46 (min 924.15, max 1977.46)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better)
  smt d: 149647.68; smt c: 148047.13; smt b: 147582.87; smt a: 148316.88; no smt b: 127770.39; no smt a: 127833.52; c: 74486.08; b: 74353.41; a: 74495.17
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

RocksDB

RocksDB 7.9.2, Test: Read While Writing (Op/s, more is better)
  smt d: 13098097; smt c: 13948914; smt b: 13830689; smt a: 15317250; no smt b: 7913568; no smt a: 7643831; c: 8316379; b: 8620352; a: 9296185
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  smt d: 3116.09; smt c: 3132.38; smt b: 3114.43; smt a: 3114.68; no smt b: 3086.36; no smt a: 3107.30; c: 1571.83; b: 1573.75; a: 1566.56

oneDNN

oneDNN 3.0, Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  smt d: 0.457554 (min 0.34); smt c: 0.447820 (min 0.37); smt b: 0.450261 (min 0.37); smt a: 0.451472 (min 0.4); no smt b: 0.291229 (min 0.27); no smt a: 0.291687 (min 0.27); c: 0.417188 (min 0.4); b: 0.578466 (min 0.4); a: 0.415446 (min 0.4)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

OpenVINO 2022.3, Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better)
  smt d: 1057.74; smt c: 1058.57; smt b: 1050.37; smt a: 1049.16; no smt b: 1048.74; no smt a: 1045.81; c: 535.78; b: 541.03; a: 540.41
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3, Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better)
  smt d: 45.34 (min 39.4, max 73.83); smt c: 45.31 (min 39.35, max 73.19); smt b: 45.66 (min 38.82, max 75.65); smt a: 45.71 (min 39.78, max 73.33); no smt b: 45.73 (min 39.68, max 86.83); no smt a: 45.86 (min 39.29, max 91.13); c: 89.46 (min 42.16, max 123.4); b: 88.61 (min 47.57, max 124.67); a: 88.71 (min 44.2, max 132.86)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

oneDNN 3.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
  smt d: 0.672115 (min 0.49); smt c: 0.676124 (min 0.49); smt b: 0.675664 (min 0.55); smt a: 0.674455 (min 0.49); no smt b: 0.347231 (min 0.3); no smt a: 0.342836 (min 0.28); c: 0.353175 (min 0.31); b: 0.356875 (min 0.31); a: 0.357017 (min 0.31)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  smt d: 0.975091 (min 0.91); smt c: 0.995710 (min 0.87); smt b: 0.971704 (min 0.82); smt a: 0.975357 (min 0.92); no smt b: 0.629026 (min 0.51); no smt a: 0.556069 (min 0.46); c: 0.546991 (min 0.5); b: 0.513068 (min 0.47); a: 0.534450 (min 0.49)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  smt d: 0.247825 (min 0.23); smt c: 0.275917 (min 0.23); smt b: 0.291087 (min 0.23); smt a: 0.279638 (min 0.23); no smt b: 0.164845 (min 0.15); no smt a: 0.160361 (min 0.15); c: 0.305599 (min 0.28); b: 0.305731 (min 0.28); a: 0.304972 (min 0.28)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04, Test: MEMFD (Bogo Ops/s, more is better)
  smt d: 394.49; smt c: 413.34; smt b: 564.05; smt a: 464.70; no smt b: 308.67; no smt a: 303.56; c: 507.97; b: 507.74; a: 518.46
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04, Test: Memory Copying (Bogo Ops/s, more is better)
  smt d: 13370.33; smt c: 15077.69; smt b: 15430.18; smt a: 15914.10; no smt b: 10949.90; no smt a: 11342.69; c: 20297.91; b: 20340.40; a: 20106.51
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04, Test: Forking (Bogo Ops/s, more is better)
  smt d: 34685.85; smt c: 34917.39; smt b: 36020.16; smt a: 36266.02; no smt b: 45685.28; no smt a: 43094.96; c: 64299.62; b: 58664.97; a: 58156.36
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04, Test: Futex (Bogo Ops/s, more is better)
  smt d: 2077119.76; smt c: 2067499.58; smt b: 2396325.93; smt a: 2333781.39; no smt b: 3802292.60; no smt a: 3746361.52; c: 2794473.75; b: 2805836.52; a: 2794694.37
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread
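The viewer's "Show Wins / Losses Counts" option reduces a comparison to how often one configuration beats another. A minimal sketch of that tally for SMT-on versus SMT-off over the four Stress-NG results above (comparing the "smt a" and "no smt a" runs; the pairing choice is an assumption for illustration):

```python
# (smt a, no smt a) Bogo Ops/s pairs from the Stress-NG graphs above; more is better
pairs = {
    "MEMFD":          (464.70, 303.56),
    "Memory Copying": (15914.10, 11342.69),
    "Forking":        (36266.02, 43094.96),
    "Futex":          (2333781.39, 3746361.52),
}

wins = sum(1 for smt, no_smt in pairs.values() if smt > no_smt)
print(f"SMT wins {wins} of {len(pairs)}")  # prints "SMT wins 2 of 4"
```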

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package using the water_GMX50 input data. This test profile allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2023, Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better)
  smt d: 19.08; smt c: 19.05; smt b: 19.02; smt a: 18.82; no smt b: 18.84; no smt a: 19.18; c: 10.59; b: 10.61; a: 10.57
  1. (CXX) g++ options: -O3
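The Ns Per Day figure can be inverted to a more intuitive wall-clock cost per simulated nanosecond; a small sketch using the fastest run above:

```python
ns_per_day = 19.18  # "no smt a", the fastest run above
hours_per_ns = 24 / ns_per_day
print(round(hours_per_ns, 2))  # about 1.25 hours of wall time per simulated ns
```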

RocksDB

RocksDB 7.9.2, Test: Read Random Write Random (Op/s, more is better)
  smt d: 1752904; smt c: 1613913; smt b: 1761416; smt a: 1787682; no smt b: 2063831; no smt a: 2079804; c: 2910023; b: 2891962; a: 2926458
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

oneDNN

oneDNN 3.0, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  smt d: 0.973984 (min 0.92); smt c: 0.972863 (min 0.92); smt b: 0.973555 (min 0.92); smt a: 0.978590 (min 0.93); no smt b: 0.651301 (min 0.62); no smt a: 0.664295 (min 0.63); c: 1.169190 (min 1.07); b: 1.166960 (min 1.07); a: 1.170040 (min 1.07)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  smt d: 0.630630 (min 0.48); smt c: 0.545118 (min 0.45); smt b: 0.551783 (min 0.49); smt a: 0.538539 (min 0.48); no smt b: 0.400325 (min 0.38); no smt a: 0.403590 (min 0.38); c: 0.692667 (min 0.65); b: 0.689064 (min 0.64); a: 0.693367 (min 0.64)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

OpenVINO 2022.3, Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better)
  smt d: 9767.99; smt c: 9773.26; smt b: 9680.44; smt a: 9746.08; no smt b: 9758.74; no smt a: 9784.09; c: 5724.81; b: 5720.90; a: 5750.74
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3, Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better)
  smt d: 4.91 (min 4.51, max 27.27); smt c: 4.90 (min 4.5, max 29.95); smt b: 4.95 (min 4.54, max 24.23); smt a: 4.92 (min 4.52, max 34.8); no smt b: 4.91 (min 4.49, max 34.72); no smt a: 4.90 (min 4.46, max 56.52); c: 8.37 (min 6.83, max 50.24); b: 8.38 (min 6.4, max 32.33); a: 8.34 (min 6.61, max 55.54)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.18, Set To Get Ratio: 1:5 (Ops/sec, more is better)
  smt d: 2527084.87; smt c: 2517750.47; smt b: 2507331.29; smt a: 2520253.00; no smt b: 2460416.33; no smt a: 2444886.82; c: 4155273.46; b: 4162812.47; a: 4143044.21
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Stress-NG

Stress-NG 0.15.04, Test: System V Message Passing (Bogo Ops/s, more is better)
  smt d: 12418451.71; smt c: 8609357.68; smt b: 8586858.51; smt a: 10103952.13; no smt b: 7372780.72; no smt a: 7402514.74; c: 10475486.39; b: 10471889.97; a: 10473084.09
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN

oneDNN 3.0, Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  smt d: 2.67303 (min 2.09); smt c: 2.70235 (min 2.27); smt b: 2.76140 (min 2.46); smt a: 2.70807 (min 2.09); no smt b: 1.70057 (min 1.5); no smt a: 1.65041 (min 1.41); c: 1.82264 (min 1.71); b: 1.84172 (min 1.76); a: 1.83147 (min 1.73)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, more is better)
  smt d: 66.08; smt c: 65.49; smt b: 66.22; smt a: 64.63; no smt b: 47.43; no smt a: 47.13; c: 40.76; b: 40.86; a: 40.63
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
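An encode-rate figure translates directly into encode time for a clip of known length. As a sketch, assuming a hypothetical 600-frame clip (the actual Bosphorus sequence length is not stated in this result file):

```python
frames = 600  # hypothetical clip length; not taken from this result file
fps = 66.08   # "smt d" run, Bosphorus 4K / Slow preset

encode_seconds = frames / fps
print(round(encode_seconds, 1))  # about 9.1 seconds under this assumption
```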

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.18 - Set To Get Ratio: 1:10 (Ops/sec; more is better)
  smt d: 4569476.86 | smt c: 4549313.42 | smt b: 4392934.75 | smt a: 4530948.62
  no smt b: 4220522.49 | no smt a: 4244908.83
  c: 6839975.87 | b: 6792746.34 | a: 6861088.89
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
  smt d: 0.389223 | smt c: 0.394377 | smt b: 0.408074 | smt a: 0.407430
  no smt b: 0.255796 | no smt a: 0.254845
  c: 0.312289 | b: 0.312721 | a: 0.311405
  MIN (same run order): 0.29 | 0.29 | 0.27 | 0.27 | 0.18 | 0.18 | 0.28 | 0.28 | 0.3
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in C and optimized in assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second; more is better)
  smt d: 65.49 | smt c: 65.59 | smt b: 65.69 | smt a: 65.56
  no smt b: 47.93 | no smt a: 47.97
  c: 41.47 | b: 41.39 | a: 41.40
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second; more is better)
  smt d: 46.25 | smt c: 46.34 | smt b: 46.25 | smt a: 45.56
  no smt b: 34.68 | no smt a: 34.59
  c: 29.41 | b: 29.29 | a: 29.37

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS; more is better)
  smt d: 172228.34 | smt c: 173620.31 | smt b: 175754.35 | smt a: 173926.92
  no smt b: 162294.47 | no smt a: 160545.41
  c: 112186.25 | b: 111378.39 | a: 112346.36
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
  smt d: 0.680395 | smt c: 0.676798 | smt b: 0.672768 | smt a: 0.672585
  no smt b: 0.461174 | no smt a: 0.462386
  c: 0.708563 | b: 0.712495 | a: 0.711588
  MIN (same run order): 0.53 | 0.53 | 0.53 | 0.52 | 0.43 | 0.42 | 0.67 | 0.68 | 0.68
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.18 - Set To Get Ratio: 1:100 (Ops/sec; more is better)
  smt d: 4403267.42 | smt c: 4394194.49 | smt b: 4357453.76 | smt a: 4383314.65
  no smt b: 3216133.91 | no smt a: 3192061.68
  c: 4852421.67 | b: 4876951.36 | a: 4878860.00
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Stress-NG

Stress-NG 0.15.04 - Test: Semaphores (Bogo Ops/s; more is better)
  smt d: 19842969.28 | smt c: 19866440.86 | smt b: 19927313.52 | smt a: 20047519.05
  no smt b: 13192391.85 | no smt a: 13141129.68
  c: 18088584.67 | b: 18100474.36 | a: 18128283.29
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build of all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds; fewer is better)
  smt d: 17.08 | smt c: 17.05 | smt b: 17.02 | smt a: 17.38
  no smt b: 17.59 | no smt a: 17.61
  c: 25.68 | b: 25.75 | a: 25.77
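For time-based results like this one, the relative speedup between two runs is simply the ratio of their build times. A minimal sketch using values from the kernel defconfig results above (run labels and times are taken directly from this page):

```python
# Kernel defconfig build times in seconds (fewer is better), from the results above.
build_times = {"smt d": 17.08, "no smt b": 17.59, "a": 25.77}

# Relative speedup of one run over another: t_slow / t_fast.
speedup = build_times["a"] / build_times["smt d"]
print(f"smt d finishes the build {speedup:.2f}x faster than run a")  # 1.51x
```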

Stress-NG

Stress-NG 0.15.04 - Test: Poll (Bogo Ops/s; more is better)
  smt d: 15403597.24 | smt c: 15228320.07 | smt b: 15341564.16 | smt a: 15359471.73
  no smt b: 10393402.01 | no smt a: 10458275.24
  c: 12676101.41 | b: 12661687.46 | a: 12653709.64
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark Scalar (Items / Sec; more is better)
  smt d: 781 | smt c: 793 | smt b: 810 | smt a: 764
  no smt b: 652 | no smt a: 647
  c: 549 | b: 557 | a: 556
  MIN / MAX (same run order): 139/3776 | 139/3583 | 138/3650 | 139/3808 | 102/3990 | 101/4024 | 60/5475 | 61/6019 | 61/5994

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
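For the synchronous single-stream results below, the ms/batch and items/sec figures are approximately reciprocals of one another, which makes for a quick consistency check. A minimal sketch, assuming one item per batch (the latency value is the first DistilBERT result below; small deviations from the reported throughput are expected from measurement overhead):

```python
# Single-stream latency in ms per batch; with one item per batch the
# throughput is roughly 1000 / latency items per second.
latency_ms = 7.0640  # smt d, NLP Text Classification DistilBERT mnli

throughput = 1000.0 / latency_ms
print(f"{throughput:.2f} items/sec")  # close to the 141.47 items/sec reported
```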

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
  smt d: 7.0640 | smt c: 7.3550 | smt b: 6.8844 | smt a: 7.5225
  no smt b: 6.4174 | no smt a: 7.5856
  c: 5.1696 | b: 5.1633 | a: 5.1812

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec; more is better)
  smt d: 141.47 | smt c: 135.89 | smt b: 145.17 | smt a: 132.87
  no smt b: 155.74 | no smt a: 131.76
  c: 193.31 | b: 193.53 | a: 192.88

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second; more is better)
  smt d: 46.21 | smt c: 46.57 | smt b: 46.62 | smt a: 47.56
  no smt b: 38.65 | no smt a: 38.38
  c: 33.10 | b: 33.13 | a: 33.03

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec; more is better)
  smt d: 75.52 | smt c: 70.14 | smt b: 77.41 | smt a: 77.88
  no smt b: 71.89 | no smt a: 72.13
  c: 98.59 | b: 98.62 | a: 99.15

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
  smt d: 13.23 | smt c: 14.25 | smt b: 12.91 | smt a: 12.83
  no smt b: 13.90 | no smt a: 13.86
  c: 10.14 | b: 10.14 | a: 10.08

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
  smt d: 7.1490 | smt c: 6.1057 | smt b: 7.3054 | smt a: 7.1865
  no smt b: 6.0650 | no smt a: 6.0118
  c: 5.5007 | b: 5.4980 | a: 5.3018

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec; more is better)
  smt d: 139.82 | smt c: 163.71 | smt b: 136.83 | smt a: 139.10
  no smt b: 164.81 | no smt a: 166.27
  c: 181.72 | b: 181.81 | a: 188.54

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
  smt d: 15.13 | smt c: 14.20 | smt b: 14.64 | smt a: 13.64
  no smt b: 14.42 | no smt a: 14.46
  c: 11.16 | b: 11.19 | a: 11.36

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec; more is better)
  smt d: 66.08 | smt c: 70.38 | smt b: 68.29 | smt a: 73.29
  no smt b: 69.33 | no smt a: 69.11
  c: 89.55 | b: 89.34 | a: 87.96

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Super Fast (Frames Per Second; more is better)
  smt d: 218.91 | smt c: 178.78 | smt b: 220.79 | smt a: 178.49
  no smt b: 216.15 | no smt a: 181.67
  c: 237.96 | b: 239.92 | a: 238.68

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second; more is better)
  smt d: 224.31 | smt c: 218.28 | smt b: 179.56 | smt a: 216.15
  no smt b: 209.85 | no smt a: 186.24
  c: 238.77 | b: 240.91 | a: 240.98

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
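The MB/s figures below are throughput: uncompressed bytes processed divided by wall-clock time. A minimal sketch of that calculation, using zlib as a stdlib-only stand-in compressor (the buffer and level are illustrative, not from this run):

```python
import os
import time
import zlib

# Hypothetical stand-in for the benchmark: time a compressor over a sample
# buffer and report MB/s, the same metric the Zstd results below use.
data = os.urandom(1 << 20) + b"A" * (1 << 20)  # 2 MiB: incompressible + compressible halves

start = time.perf_counter()
compressed = zlib.compress(data, 6)  # level 6, loosely analogous to a mid-level setting
elapsed = time.perf_counter() - start

mb_per_s = (len(data) / 1e6) / elapsed
print(f"compressed {len(data)} -> {len(compressed)} bytes at {mb_per_s:.1f} MB/s")
```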

Zstd Compression 1.5.4 - Compression Level: 12 - Compression Speed (MB/s; more is better)
  smt d: 256.2 | smt c: 249.8 | smt b: 259.1 | smt a: 254.1
  no smt b: 278.2 | no smt a: 279.9
  c: 330.1 | b: 332.0 | a: 330.8
  1. (CC) gcc options: -O3 -pthread -lz

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Update Random (Op/s; more is better)
  smt d: 420514 | smt c: 419985 | smt b: 411513 | smt a: 413199
  no smt b: 452228 | no smt a: 462018
  c: 543572 | b: 545556 | a: 544384
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

RocksDB 7.9.2 - Test: Sequential Fill (Op/s; more is better)
  smt d: 414047 | smt c: 414282 | smt b: 413708 | smt a: 414168
  no smt b: 464044 | no smt a: 465700
  c: 544565 | b: 545396 | a: 542256
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
  smt d: 21.36 | smt c: 20.49 | smt b: 21.74 | smt a: 21.03
  no smt b: 20.42 | no smt a: 20.74
  c: 16.52 | b: 16.67 | a: 16.58

Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec; more is better)
  smt d: 46.78 | smt c: 48.77 | smt b: 45.96 | smt a: 47.51
  no smt b: 48.93 | no smt a: 48.18
  c: 60.49 | b: 59.92 | a: 60.25

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second; more is better)
  smt d: 181.83 | smt c: 192.25 | smt b: 209.55 | smt a: 183.99
  no smt b: 183.75 | no smt a: 196.25
  c: 237.44 | b: 234.68 | a: 234.95

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ and https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million row web analytics dataset. The reported value aggregates the processing times of all separate queries via their geometric mean. Learn more via the OpenBenchmarking.org test page.
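The "Queries Per Minute, Geo Mean" metric is a geometric mean across the individual queries. A minimal sketch of that aggregation (the per-query times are illustrative, not from this run):

```python
import math

# Hypothetical per-query processing times in seconds.
query_times = [0.08, 0.45, 1.20, 0.30]

# Geometric mean of the times, computed via logs for numerical stability,
# then expressed as queries per minute.
geo_mean_s = math.exp(sum(math.log(t) for t in query_times) / len(query_times))
qpm = 60.0 / geo_mean_s
print(f"geometric mean {geo_mean_s:.3f}s -> {qpm:.1f} queries/minute")
```

The geometric mean keeps one pathologically slow query from dominating the aggregate the way an arithmetic mean would.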

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean; more is better)
  smt d: 515.11 | smt c: 524.75 | smt b: 527.88 | smt a: 524.04
  no smt b: 649.52 | no smt a: 666.43
  c: 621.88 | b: 627.20 | a: 625.91
  MIN / MAX (same run order): 88.5/6000 | 90.23/6000 | 90.63/6666.67 | 90.5/6666.67 | 86.46/6000 | 85.11/6666.67 | 58.54/6666.67 | 58.54/6000 | 56.13/7500

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Random Fill (Op/s; more is better)
  smt d: 415428 | smt c: 416006 | smt b: 417882 | smt a: 423027
  no smt b: 468210 | no smt a: 478629
  c: 536551 | b: 534681 | a: 533927
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ and https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million row web analytics dataset. The reported value aggregates the processing times of all separate queries via their geometric mean. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean; more is better)
  smt d: 500.12 | smt c: 495.85 | smt b: 516.09 | smt a: 500.13
  no smt b: 622.73 | no smt a: 635.48
  c: 614.24 | b: 610.79 | a: 600.70
  MIN / MAX (same run order): 69.61/6000 | 51.06/6000 | 89.55/6000 | 65.15/6000 | 83.22/5454.55 | 85.35/6000 | 57.14/6666.67 | 58.14/6666.67 | 57.75/6000

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
  smt d: 9.3656 | smt c: 10.3852 | smt b: 8.3506 | smt a: 8.1914
  no smt b: 9.9395 | no smt a: 9.8899
  c: 8.8004 | b: 9.5793 | a: 9.5962

Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec; more is better)
  smt d: 106.65 | smt c: 96.17 | smt b: 119.60 | smt a: 121.92
  no smt b: 100.51 | no smt a: 101.01
  c: 113.51 | b: 104.29 | a: 104.12

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ and https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million row web analytics dataset. The reported value aggregates the processing times of all separate queries via their geometric mean. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean; more is better)
  smt d: 527.80 | smt c: 530.47 | smt b: 538.63 | smt a: 534.82
  no smt b: 662.01 | no smt a: 665.00
  c: 636.18 | b: 628.37 | a: 625.16
  MIN / MAX (same run order): 81.52/6000 | 92.02/5454.55 | 89.82/6000 | 91.32/6000 | 86.33/6000 | 85.84/6000 | 58.03/6666.67 | 58.37/6666.67 | 57.69/6000

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second; more is better)
  smt d: 15.11 | smt c: 15.38 | smt b: 14.27 | smt a: 13.90
  no smt b: 14.50 | no smt a: 14.48
  c: 17.46 | b: 17.37 | a: 17.36
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms; fewer is better)
  smt d: 0.53 | smt c: 0.53 | smt b: 0.53 | smt a: 0.53
  no smt b: 0.58 | no smt a: 0.57
  c: 0.66 | b: 0.66 | a: 0.65
  MIN / MAX (same run order): 0.5/8.86 | 0.5/9.15 | 0.5/9.61 | 0.5/24.2 | 0.5/12.8 | 0.5/13.02 | 0.31/22.67 | 0.32/19.99 | 0.32/20.68
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second; more is better)
  smt d: 58.76 | smt c: 57.33 | smt b: 57.51 | smt a: 57.96
  no smt b: 57.23 | no smt a: 57.78
  c: 71.13 | b: 70.68 | a: 70.56

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec; more is better)
  smt d: 1238 | smt c: 1235 | smt b: 1243 | smt a: 1318
  no smt b: 1108 | no smt a: 1098
  c: 1066 | b: 1075 | a: 1089
  MIN / MAX (same run order): 392/3738 | 392/4332 | 390/3501 | 391/3855 | 297/4208 | 297/4284 | 180/6427 | 179/6442 | 179/7194

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 8 - Compression Speed (MB/s; more is better)
  smt d: 1024.8 | smt c: 1005.8 | smt b: 1122.4 | smt a: 1023.9
  no smt b: 1234.2 | no smt a: 1227.5
  c: 1241.1 | b: 1239.3 | a: 1233.8
  1. (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4 - Compression Level: 3 - Compression Speed (MB/s; more is better)
  smt d: 2528.7 | smt c: 2604.6 | smt b: 2513.4 | smt a: 2795.1
  no smt b: 2865.9 | no smt a: 2804.2
  c: 3049.4 | b: 3095.0 | a: 3033.9
  1. (CC) gcc options: -O3 -pthread -lz

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 6.0 - Time To Compile (Seconds; fewer is better)
  smt d: 10.37 | smt c: 10.41 | smt b: 10.26 | smt a: 10.38
  no smt b: 10.66 | no smt a: 10.60
  c: 12.47 | b: 12.43 | a: 12.57

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in C and optimized in assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second; more is better)
  smt d: 270.76 | smt c: 243.36 | smt b: 274.63 | smt a: 250.71
  no smt b: 268.77 | no smt a: 269.62
  c: 291.03 | b: 290.59 | a: 296.93
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second; more is better)
  smt d: 57.52 | smt c: 57.44 | smt b: 57.86 | smt a: 59.30
  no smt b: 58.09 | no smt a: 57.13
  c: 69.04 | b: 68.79 | a: 68.82

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in C and optimized in assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second; more is better)
  smt d: 135.56 | smt c: 135.80 | smt b: 136.33 | smt a: 132.67
  no smt b: 159.79 | no smt a: 155.56
  c: 140.10 | b: 139.07 | a: 139.92
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Super Fast (Frames Per Second; more is better)
  smt d: 256.45 | smt c: 288.09 | smt b: 267.04 | smt a: 296.72
  no smt b: 280.18 | no smt a: 288.41
  c: 301.40 | b: 303.99 | a: 307.49
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second; more is better)
  smt d: 58.66 | smt c: 58.34 | smt b: 59.20 | smt a: 59.06
  no smt b: 58.33 | no smt a: 59.83
  c: 69.92 | b: 69.00 | a: 69.33

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
  smt d: 32.18 | smt c: 33.99 | smt b: 32.24 | smt a: 32.30
  no smt b: 32.18 | no smt a: 31.92
  c: 28.57 | b: 28.71 | a: 28.62

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec; more is better)
  smt d: 31.07 | smt c: 29.42 | smt b: 31.01 | smt a: 30.96
  no smt b: 31.07 | no smt a: 31.32
  c: 34.99 | b: 34.82 | a: 34.93

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 3, Long Mode - Compression Speed (MB/s; more is better)
  smt d: 1059.0 | smt c: 1046.6 | smt b: 1038.9 | smt a: 1051.4
  no smt b: 1032.3 | no smt a: 955.6
  c: 916.9 | b: 909.4 | a: 892.7
  1. (CC) gcc options: -O3 -pthread -lz

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in C and optimized in assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second; more is better)
  smt d: 136.42 | smt c: 140.89 | smt b: 138.33 | smt a: 140.78
  no smt b: 161.40 | no smt a: 159.63
  c: 143.81 | b: 144.21 | a: 143.31
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec; more is better)
  smt d: 30.25 | smt c: 31.93 | smt b: 31.78 | smt a: 31.76
  no smt b: 29.75 | no smt a: 31.91
  c: 35.12 | b: 35.10 | a: 34.83

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
  smt d: 33.05 | smt c: 31.31 | smt b: 31.46 | smt a: 31.48
  no smt b: 33.60 | no smt a: 31.33
  c: 28.47 | b: 28.49 | a: 28.71

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Random Fill Sync (Op/s; more is better)
  smt d: 404463 | smt c: 394852 | smt b: 401295 | smt a: 388052
  no smt b: 344040 | no smt a: 350673
  c: 356922 | b: 373002 | a: 376601
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better)
  smt d: 974.99 | smt c: 971.90 | smt b: 975.50 | smt a: 972.52
  no smt b: 955.24 | no smt a: 955.16
  c: 1122.06 | b: 1118.44 | a: 1115.76

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better)
  smt d: 966.76 | smt c: 971.67 | smt b: 969.21 | smt a: 970.22
  no smt b: 955.34 | no smt a: 956.01
  c: 1117.73 | b: 1116.29 | a: 1116.43

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in C and optimized in assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second; more is better)
  smt d: 73.24 | smt c: 73.57 | smt b: 74.11 | smt a: 75.91
  no smt b: 69.94 | no smt a: 70.02
  c: 80.31 | b: 81.61 | a: 80.90
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second; more is better)
  smt d: 303.63 | smt c: 303.43 | smt b: 295.18 | smt a: 302.55
  no smt b: 271.45 | no smt a: 278.62
  c: 309.64 | b: 305.52 | a: 310.73
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second; more is better)
  smt d: 75.50 | smt c: 78.72 | smt b: 76.70 | smt a: 76.64
  no smt b: 74.03 | no smt a: 75.50
  c: 84.55 | b: 80.68 | a: 81.90
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second; more is better)
  smt d: 78.85 | smt c: 76.14 | smt b: 77.00 | smt a: 77.96
  no smt b: 74.37 | no smt a: 76.05
  c: 82.70 | b: 84.77 | a: 83.45
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms; fewer is better)
  smt d: 0.97 | smt c: 0.96 | smt b: 0.97 | smt a: 0.97
  no smt b: 1.01 | no smt a: 1.01
  c: 1.09 | b: 1.09 | a: 1.09
  MIN / MAX (same run order): 0.86/13.31 | 0.87/9.72 | 0.87/10.96 | 0.87/9.78 | 0.86/8.72 | 0.86/8.97 | 0.49/22.06 | 0.48/25.82 | 0.49/19.01
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (items/sec; more is better)
  smt d: 183.95 | smt c: 189.41 | smt b: 188.41 | smt a: 188.58
  no smt b: 175.32 | no smt a: 191.26
  c: 197.66 | b: 196.54 | a: 197.10

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
  smt d: 5.4341 | smt c: 5.2773 | smt b: 5.3053 | smt a: 5.3004
  no smt b: 5.7015 | no smt a: 5.2262
  c: 5.0573 | b: 5.0859 | a: 5.0715

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better)
  smt d: 42.48 | smt c: 42.55 | smt b: 42.65 | smt a: 42.95
  no smt b: 42.84 | no smt a: 42.81
  c: 47.29 | b: 47.35 | a: 47.46

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second; more is better)
  smt d: 7.15 | smt c: 6.92 | smt b: 6.97 | smt a: 7.24
  no smt b: 7.16 | no smt a: 7.04
  c: 7.70 | b: 7.71 | a: 7.68
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second; more is better)
  smt d: 80.00 | smt c: 79.63 | smt b: 79.95 | smt a: 80.17
  no smt b: 88.13 | no smt a: 87.15
  c: 81.10 | b: 81.41 | a: 81.16

uvg266 0.4.1 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second; more is better)
  smt d: 88.91 | smt c: 89.16 | smt b: 89.70 | smt a: 89.40
  no smt b: 96.67 | no smt a: 98.21
  c: 91.29 | b: 91.47 | a: 91.37

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms; fewer is better)
  smt d: 9.26 | smt c: 9.25 | smt b: 9.28 | smt a: 9.28
  no smt b: 9.18 | no smt a: 9.18
  c: 10.11 | b: 10.10 | a: 10.10
  MIN / MAX (same run order): 8.12/30.29 | 8.12/19.04 | 8.13/24.28 | 8.1/22.87 | 8.21/24.31 | 8.17/31.1 | 5.43/36.27 | 5.47/28.16 | 5.46/35.41
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Compression Speed (MB/s; more is better)
  smt d: 879.6 | smt c: 860.8 | smt b: 852.7 | smt a: 853.8
  no smt b: 892.0 | no smt a: 859.9
  c: 926.9 | b: 910.8 | a: 938.5
  1. (CC) gcc options: -O3 -pthread -lz

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 (ms/batch, Fewer Is Better)
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
smt d: 102.76 | smt c: 102.10 | smt b: 103.19 | smt a: 103.09 | no smt b: 100.42 | no smt a: 100.24 | c: 109.25 | b: 109.11 | a: 108.68
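DeepSparse's asynchronous multi-stream results boil down to two related figures: mean latency per batch (ms/batch) and aggregate throughput (items/sec). A rough illustration of that relationship with a thread pool and a dummy workload; this is not the deepsparse.benchmark utility itself, and the stream/batch counts are arbitrary:

```python
import time
from concurrent.futures import ThreadPoolExecutor

STREAMS, BATCHES, BATCH_SIZE = 4, 32, 16  # arbitrary illustration values

def run_batch(_: int) -> float:
    """Dummy inference stand-in; returns its own latency in seconds."""
    start = time.perf_counter()
    sum(i * i for i in range(20_000))  # placeholder for model execution
    return time.perf_counter() - start

wall_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=STREAMS) as pool:
    latencies = list(pool.map(run_batch, range(BATCHES)))
wall = time.perf_counter() - wall_start

ms_per_batch = 1000 * sum(latencies) / len(latencies)  # mean per-batch latency
items_per_sec = BATCHES * BATCH_SIZE / wall            # aggregate throughput
print(f"{ms_per_batch:.2f} ms/batch, {items_per_sec:.1f} items/sec")
```

With several streams in flight, per-batch latency rises even as aggregate throughput grows, which is why the asynchronous ms/batch figures sit well above single-stream latencies for the same model.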

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 (Frames Per Second, More Is Better)
Speed: Speed 0 - Input: Bosphorus 1080p
smt d: 13.68 | smt c: 13.97 | smt b: 13.57 | smt a: 14.08 | no smt b: 13.72 | no smt a: 14.28 | c: 14.76 | b: 14.67 | a: 14.69
(CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 (MB/s, More Is Better)
Compression Level: 19, Long Mode - Compression Speed
smt d: 9.81 | smt c: 9.82 | smt b: 9.87 | smt a: 9.78 | no smt b: 9.76 | no smt a: 9.39 | c: 9.30 | b: 9.29 | a: 9.22
(CC) gcc options: -O3 -pthread -lz

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 (ms/batch, Fewer Is Better)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
smt d: 301.92 | smt c: 303.25 | smt b: 304.08 | smt a: 303.33 | no smt b: 303.74 | no smt a: 304.21 | c: 320.05 | b: 319.96 | a: 319.98

Neural Magic DeepSparse 1.3.2 (ms/batch, Fewer Is Better)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
smt d: 73.62 | smt c: 73.34 | smt b: 73.35 | smt a: 73.31 | no smt b: 73.21 | no smt a: 73.20 | c: 76.62 | b: 77.31 | a: 77.34

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe) and is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7 (Frames Per Second, More Is Better)
Video Input: Bosphorus 1080p - Video Preset: Faster
no smt b: 28.52 | no smt a: 28.69 | c: 30.08 | b: 29.85 | a: 30.04
(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 1080p - Video Preset: Faster

smt a: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt b: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt c: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 (MB/s, More Is Better)
Compression Level: 19 - Compression Speed
smt d: 19.6 | smt c: 19.2 | smt b: 19.5 | smt a: 18.8 | no smt b: 19.8 | no smt a: 19.8 | c: 19.1 | b: 19.1 | a: 19.1
(CC) gcc options: -O3 -pthread -lz

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 (ms/batch, Fewer Is Better)
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
smt d: 144.91 | smt c: 144.47 | smt b: 145.27 | smt a: 145.40 | no smt b: 145.36 | no smt a: 145.35 | c: 151.39 | b: 151.60 | a: 152.14

Neural Magic DeepSparse 1.3.2 (ms/batch, Fewer Is Better)
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
smt d: 117.22 | smt c: 116.97 | smt b: 116.48 | smt a: 117.12 | no smt b: 115.64 | no smt a: 116.15 | c: 119.70 | b: 119.38 | a: 119.21

Stress-NG

Stress-NG 0.15.04 (Bogo Ops/s, More Is Better)
Test: Socket Activity
smt d: 8864.78 | smt c: 8747.83 | smt b: 8750.61 | smt a: 8748.99 | no smt b: 8924.10 | no smt a: 8968.28 | c: 8876.10 | b: 8851.65 | a: 8873.58
(CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread
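Stress-NG's "bogo ops" metric simply counts completed iterations of a stressor's work loop per second. A toy version of the idea over a local socket pair; the operation defined here (one send plus the matching receive) is an assumption for illustration, not Stress-NG's actual socket worker:

```python
import socket
import time

DURATION = 0.2           # seconds to hammer the socket pair
PAYLOAD = b"x" * 1024    # 1 KiB per operation

a, b = socket.socketpair()
ops = 0
deadline = time.perf_counter() + DURATION
while time.perf_counter() < deadline:
    a.sendall(PAYLOAD)                 # one "bogo op": a send...
    got = 0
    while got < len(PAYLOAD):          # ...and the matching receive
        got += len(b.recv(len(PAYLOAD) - got))
    ops += 1
a.close()
b.close()

print(f"{ops / DURATION:.0f} bogo ops/s")
```

Because the unit of work is stressor-defined, bogo ops/s is only comparable between runs of the same test, never across different stressors.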

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 (MB/s, More Is Better)
Compression Level: 19 - Decompression Speed
smt d: 1479.1 | smt c: 1475.2 | smt b: 1483.5 | smt a: 1495.2 | no smt b: 1483.8 | no smt a: 1483.1 | c: 1470.5 | b: 1467.8 | a: 1472.6
(CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4 (MB/s, More Is Better)
Compression Level: 12 - Decompression Speed
smt d: 1731.2 | smt c: 1732.3 | smt b: 1726.3 | smt a: 1723.7 | no smt b: 1727.8 | no smt a: 1728.0 | c: 1715.6 | b: 1716.7 | a: 1704.8
(CC) gcc options: -O3 -pthread -lz

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe) and is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7 (Frames Per Second, More Is Better)
Video Input: Bosphorus 4K - Video Preset: Fast
no smt b: 5.895 | no smt a: 5.890 | c: 5.808 | b: 5.809 | a: 5.820
(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 4K - Video Preset: Fast

smt a: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt b: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt c: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 (MB/s, More Is Better)
Compression Level: 19, Long Mode - Decompression Speed
smt d: 1391.7 | smt c: 1389.0 | smt b: 1392.4 | smt a: 1393.5 | no smt b: 1397.6 | no smt a: 1395.8 | c: 1384.0 | b: 1378.2 | a: 1383.7
(CC) gcc options: -O3 -pthread -lz

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe) and is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7 (Frames Per Second, More Is Better)
Video Input: Bosphorus 4K - Video Preset: Faster
no smt b: 12.31 | no smt a: 12.48 | c: 12.40 | b: 12.39 | a: 12.34
(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 4K - Video Preset: Faster

smt a: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt b: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt c: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 (MB/s, More Is Better)
Compression Level: 3 - Decompression Speed
smt d: 1513.5 | smt c: 1516.2 | smt b: 1519.1 | smt a: 1515.6 | no smt b: 1511.5 | no smt a: 1500.4 | c: 1517.4 | b: 1516.6 | a: 1514.8
(CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4 (MB/s, More Is Better)
Compression Level: 8 - Decompression Speed
smt d: 1664.2 | smt c: 1664.0 | smt b: 1671.0 | smt a: 1664.3 | no smt b: 1667.6 | no smt a: 1669.7 | c: 1661.9 | b: 1651.0 | a: 1669.3
(CC) gcc options: -O3 -pthread -lz

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 (Frames Per Second, More Is Better)
Speed: Speed 5 - Input: Bosphorus 1080p
smt d: 29.66 | smt c: 29.54 | smt b: 29.60 | smt a: 29.54 | no smt b: 29.71 | no smt a: 29.62 | c: 29.46 | b: 29.50 | a: 29.37
(CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 (ms/batch, Fewer Is Better)
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
smt d: 30.72 | smt c: 30.58 | smt b: 30.75 | smt a: 30.75 | no smt b: 30.64 | no smt a: 30.42 | c: 30.49 | b: 30.46 | a: 30.59

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe) and is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7 (Frames Per Second, More Is Better)
Video Input: Bosphorus 1080p - Video Preset: Fast
no smt b: 12.37 | no smt a: 12.36 | c: 12.44 | b: 12.40 | a: 12.44
(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 1080p - Video Preset: Fast

smt a: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt b: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt c: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 (MB/s, More Is Better)
Compression Level: 8, Long Mode - Decompression Speed
smt d: 1677.6 | smt c: 1683.7 | smt b: 1676.5 | smt a: 1682.5 | no smt b: 1682.8 | no smt a: 1684.8 | c: 1675.4 | b: 1673.7 | a: 1677.9
(CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4 (MB/s, More Is Better)
Compression Level: 3, Long Mode - Decompression Speed
smt d: 1543.6 | smt c: 1540.9 | smt b: 1540.4 | smt a: 1539.5 | no smt b: 1538.8 | no smt a: 1542.5 | c: 1540.9 | b: 1537.5 | a: 1536.6
(CC) gcc options: -O3 -pthread -lz

Stress-NG

Test: x86_64 RdRand

a: The test run did not produce a result. E: stress-ng: error: [1836509] No stress workers invoked (one or more were unsupported)

b: The test run did not produce a result. E: stress-ng: error: [3544257] No stress workers invoked (one or more were unsupported)

c: The test run did not produce a result. E: stress-ng: error: [1223131] No stress workers invoked (one or more were unsupported)

no smt a: The test run did not produce a result. E: stress-ng: error: [4177065] No stress workers invoked (one or more were unsupported)

no smt b: The test run did not produce a result. E: stress-ng: error: [19683] No stress workers invoked (one or more were unsupported)

smt a: The test run did not produce a result. E: stress-ng: error: [55524] No stress workers invoked (one or more were unsupported)

smt b: The test run did not produce a result. E: stress-ng: error: [3903578] No stress workers invoked (one or more were unsupported)

smt c: The test run did not produce a result. E: stress-ng: error: [4007271] No stress workers invoked (one or more were unsupported)

smt d: The test run did not produce a result. E: stress-ng: error: [381360] No stress workers invoked (one or more were unsupported)

Test: IO_uring

a: The test run did not produce a result.

b: The test run did not produce a result.

c: The test run did not produce a result.

no smt a: The test run did not produce a result.

no smt b: The test run did not produce a result.

smt a: The test run did not produce a result.

smt b: The test run did not produce a result.

smt c: The test run did not produce a result.

smt d: The test run did not produce a result.

Test: Zlib

a: The test run did not produce a result.

b: The test run did not produce a result.

c: The test run did not produce a result.

no smt a: The test run did not produce a result.

no smt b: The test run did not produce a result.

smt a: The test run did not produce a result.

smt b: The test run did not produce a result.

smt c: The test run did not produce a result.

smt d: The test run did not produce a result.

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel, either in the default configuration (defconfig) for the architecture being tested or with an allmodconfig build of all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Build: allmodconfig

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

c: The test quit with a non-zero exit status.

no smt a: The test quit with a non-zero exit status.

no smt b: The test quit with a non-zero exit status.

smt a: The test quit with a non-zero exit status.

smt b: The test quit with a non-zero exit status.

smt c: The test quit with a non-zero exit status.

smt d: The test quit with a non-zero exit status.

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt d: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer ISPC - Model: Asian Dragon

a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt d: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer - Model: Asian Dragon Obj

a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt d: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer - Model: Asian Dragon

a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt d: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer ISPC - Model: Crown

a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt d: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer - Model: Crown

a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

no smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt a: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt b: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt c: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

smt d: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

169 Results Shown

Stress-NG
oneDNN:
  IP Shapes 1D - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  IP Shapes 1D - f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  IP Shapes 1D - u8s8f32 - CPU
Stress-NG:
  Context Switching
  NUMA
oneDNN
OpenVINO:
  Vehicle Detection FP16 - CPU:
    ms
    FPS
Stress-NG
RocksDB
Stress-NG:
  Matrix Math
  CPU Cache
  CPU Stress
  Vector Math
  Mutex
  Crypto
oneDNN
Stress-NG:
  SENDFILE
  Function Call
oneDNN
Stress-NG
Neural Magic DeepSparse
Stress-NG
Neural Magic DeepSparse
oneDNN
Stress-NG
Neural Magic DeepSparse
oneDNN
Stress-NG
OpenVINO:
  Face Detection FP16-INT8 - CPU
  Face Detection FP16 - CPU
  Face Detection FP16 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
  Face Detection FP16-INT8 - CPU
oneDNN:
  IP Shapes 3D - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - f32 - CPU
OpenVINO:
  Weld Porosity Detection FP16 - CPU:
    FPS
    ms
Neural Magic DeepSparse
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
OpenVINO:
  Person Detection FP32 - CPU
  Vehicle Detection FP16-INT8 - CPU
  Vehicle Detection FP16-INT8 - CPU
  Person Detection FP32 - CPU
Neural Magic DeepSparse
Stress-NG
OpenVINO:
  Person Detection FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    FPS
RocksDB
Neural Magic DeepSparse
oneDNN
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
oneDNN:
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
Stress-NG:
  MEMFD
  Memory Copying
  Forking
  Futex
GROMACS
RocksDB
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
Memcached
Stress-NG
oneDNN
Kvazaar
Memcached
oneDNN
Kvazaar
uvg266
OpenVINO
oneDNN
Memcached
Stress-NG
Timed Linux Kernel Compilation
Stress-NG
OpenVKL
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
uvg266
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
uvg266:
  Bosphorus 1080p - Super Fast
  Bosphorus 1080p - Ultra Fast
Zstd Compression
RocksDB:
  Update Rand
  Seq Fill
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    ms/batch
    items/sec
uvg266
ClickHouse
RocksDB
ClickHouse
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
ClickHouse
VP9 libvpx Encoding
OpenVINO
uvg266
OpenVKL
Zstd Compression:
  8 - Compression Speed
  3 - Compression Speed
Timed FFmpeg Compilation
Kvazaar
uvg266
Kvazaar:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Super Fast
uvg266
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
Zstd Compression
Kvazaar
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
RocksDB
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
Kvazaar:
  Bosphorus 4K - Very Fast
  Bosphorus 1080p - Ultra Fast
  Bosphorus 4K - Super Fast
  Bosphorus 4K - Ultra Fast
OpenVINO
Neural Magic DeepSparse:
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
VP9 libvpx Encoding
uvg266:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Medium
OpenVINO
Zstd Compression
Neural Magic DeepSparse
VP9 libvpx Encoding
Zstd Compression
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
VVenC
Zstd Compression
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
Stress-NG
Zstd Compression:
  19 - Decompression Speed
  12 - Decompression Speed
VVenC
Zstd Compression
VVenC
Zstd Compression:
  3 - Decompression Speed
  8 - Decompression Speed
VP9 libvpx Encoding
Neural Magic DeepSparse
VVenC
Zstd Compression:
  8, Long Mode - Decompression Speed
  3, Long Mode - Decompression Speed