9654 new

2 x AMD EPYC 9654 96-Core testing with an AMD Titanite_4G motherboard (RTI1004D BIOS) and llvmpipe graphics on Red Hat Enterprise Linux 9.1 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2303114-NE-9654NEW5019

This comparison draws on tests from the following categories:

Timed Code Compilation: 2 tests
C/C++ Compiler Tests: 5 tests
CPU Massive: 5 tests
Creator Workloads: 8 tests
Database Test Suite: 3 tests
Encoding: 4 tests
HPC - High Performance Computing: 4 tests
Common Kernel Benchmarks: 2 tests
Machine Learning: 3 tests
Multi-Core: 12 tests
Intel oneAPI: 4 tests
Programmer / Developer System Benchmarks: 3 tests
Python Tests: 2 tests
Server: 3 tests
Server CPU Tests: 4 tests
Video Encoding: 4 tests


Test Runs

a:        March 09 2023, 2 Hours, 22 Minutes
b:        March 10 2023, 2 Hours, 23 Minutes
c:        March 10 2023, 2 Hours, 22 Minutes
no smt a: March 10 2023, 2 Hours, 24 Minutes
no smt b: March 10 2023, 2 Hours, 24 Minutes
smt a:    March 11 2023, 2 Hours, 30 Minutes
smt b:    March 11 2023, 2 Hours, 30 Minutes
smt c:    March 11 2023, 2 Hours, 30 Minutes
smt d:    March 11 2023, 2 Hours, 30 Minutes

Average run duration: 2 Hours, 26 Minutes



System Details

Runs a / b / c:
  Processor: AMD EPYC 9654 96-Core @ 2.40GHz (96 Cores / 192 Threads)
  Memory: 768GB
  Graphics: ASPEED
  Monitor: VGA HDMI
  Screen Resolution: 1600x1200

Runs no smt a / no smt b:
  Processor: 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores)
  Memory: 1520GB
  Graphics: llvmpipe
  OpenGL: 4.5 Mesa 22.1.5 (LLVM 14.0.6 256 bits)
  Screen Resolution: 1024x768

Runs smt a / smt b / smt c / smt d:
  Processor: 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores / 384 Threads)
  Memory, graphics, OpenGL, and screen resolution as in the no smt runs

Common to all runs:
  Motherboard: AMD Titanite_4G (RTI1004D BIOS)
  Chipset: AMD Device 14a4
  Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007
  Network: Broadcom NetXtreme BCM5720 PCIe
  OS: Red Hat Enterprise Linux 9.1
  Kernel: 5.14.0-162.6.1.el9_1.x86_64 (x86_64)
  Desktop: GNOME Shell 40.10
  Display Server: X Server 1.20.11
  Compiler: GCC 11.3.1 20220421
  File-System: xfs

Kernel Details: Transparent Huge Pages: always

Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl

Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa101111

Python Details: Python 3.9.14

Security Details: SELinux enabled on all runs. Every run reports itlb_multihit, l1tf, mds, meltdown, mmio_stale_data, retbleed, srbds, and tsx_async_abort as Not affected; spec_store_bypass is mitigated via SSB disabled through prctl; spectre_v1 is mitigated via usercopy/swapgs barriers and __user pointer sanitization; spectre_v2 is mitigated via Retpolines with conditional IBPB, IBRS_FW, and RSB filling (PBRSB-eIBRS: Not affected). STIBP is always-on for the a / b / c and smt runs and disabled for the no smt runs.

Result Overview (Phoronix Test Suite): relative performance across the nine runs (a, b, c, no smt a/b, smt a-d) spans roughly 100% to 180% over the following test suites: oneDNN, OpenVINO, GROMACS, Memcached, Timed Linux Kernel Compilation, OpenVKL, Stress-NG, ClickHouse, Timed FFmpeg Compilation, Neural Magic DeepSparse, VP9 libvpx Encoding, uvg266, Kvazaar, RocksDB, Zstd Compression.
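Overview percentages of this kind are aggregate relative-performance figures, and such overall scores are conventionally computed as a geometric mean of per-test ratios. A minimal sketch of that aggregation, with illustrative sample ratios that are not taken from this result file:

```python
# Sketch: geometric mean of normalized per-test scores, the aggregation
# conventionally used for overall benchmark summaries.
# The sample ratios below are illustrative, not taken from this result file.
from math import prod

def geometric_mean(ratios):
    """Geometric mean of relative performance ratios (1.0 = baseline)."""
    return prod(ratios) ** (1.0 / len(ratios))

# A run that is 1.27x, 1.53x, and 1.80x the baseline on three tests:
print(round(geometric_mean([1.27, 1.53, 1.80]), 3))
```

Unlike an arithmetic mean, the geometric mean is not dominated by a single test with an outsized ratio, which is why benchmark aggregators typically prefer it for overall scores.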

The full side-by-side results table for all nine runs (covering the Stress-NG, oneDNN, OpenVINO, Neural Magic DeepSparse, RocksDB, Zstd, ClickHouse, Memcached, GROMACS, OpenVKL, Embree, Kvazaar, uvg266, vvenc, libvpx, and timed compilation tests) is available in the interactive result file 2303114-NE-9654NEW5019 on OpenBenchmarking.org; selected per-test results follow below.

Stress-NG

Stress-NG 0.15.04 - Test: MMAP (Bogo Ops/s, more is better):

a: 1663.08
b: 1664.75
c: 1668.92
no smt b: 3591.71
no smt a: 4520.48
smt d: 7273.19
smt c: 7633.16
smt a: 8360.76
smt b: 9017.12

1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and, before that, MKL-DNN, prior to being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):

smt b: 19.54900 (MIN: 10.67)
smt c: 18.65940 (MIN: 11.23)
smt a: 15.42100 (MIN: 10.4)
smt d: 11.75730 (MIN: 8.17)
b: 9.19570 (MIN: 4.13)
a: 8.74576 (MIN: 3.65)
no smt b: 7.98861 (MIN: 3.88)
no smt a: 6.19257 (MIN: 3.65)
c: 3.83017 (MIN: 2.72)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better):

smt d: 3349.36 (MIN: 3325.75)
smt a: 3187.00 (MIN: 3034.82)
smt b: 3034.62 (MIN: 2821.2)
smt c: 3011.21 (MIN: 2853.26)
no smt a: 930.26 (MIN: 898.89)
no smt b: 901.04 (MIN: 864.28)
a: 674.09 (MIN: 667.01)
b: 671.65 (MIN: 664.19)
c: 669.15 (MIN: 661.71)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better):

smt c: 9.34981 (MIN: 7.73)
smt d: 8.26078 (MIN: 7.09)
smt a: 7.60566 (MIN: 6.54)
smt b: 7.42270 (MIN: 6.44)
no smt a: 2.04003 (MIN: 1.78)
a: 2.01581 (MIN: 1.81)
no smt b: 1.93126 (MIN: 1.77)
b: 1.92340 (MIN: 1.72)
c: 1.88686 (MIN: 1.68)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):

smt a: 3269.05 (MIN: 3243.11)
smt b: 3249.28 (MIN: 3226.78)
smt d: 3171.15 (MIN: 3148.29)
smt c: 3153.22 (MIN: 3059.88)
no smt a: 913.20 (MIN: 884.92)
no smt b: 903.36 (MIN: 874.93)
a: 671.69 (MIN: 664.46)
c: 671.23 (MIN: 662.9)
b: 670.20 (MIN: 663.54)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):

smt b: 3222.93 (MIN: 2927.41)
smt c: 3185.64 (MIN: 2949.58)
smt d: 3172.19 (MIN: 3155.85)
smt a: 3133.65 (MIN: 3037.06)
no smt a: 973.32 (MIN: 937.3)
no smt b: 939.55 (MIN: 904.78)
a: 675.92 (MIN: 668.99)
b: 668.95 (MIN: 662.29)
c: 667.95 (MIN: 660.93)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):

smt d: 12.15710 (MIN: 8.52)
smt c: 12.01300 (MIN: 8.02)
smt a: 11.43260 (MIN: 7.59)
smt b: 7.30147 (MIN: 5.77)
no smt a: 4.87883 (MIN: 3.53)
no smt b: 4.79302 (MIN: 3.4)
c: 4.39720 (MIN: 3.21)
b: 4.27858 (MIN: 3.37)
a: 3.11497 (MIN: 2.47)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 - Test: Context Switching (Bogo Ops/s, more is better):

smt b: 12221058.34
smt c: 12328933.65
smt d: 12728401.45
smt a: 12895047.73
b: 16313126.86
c: 16862185.54
a: 18941003.97
no smt b: 44683906.68
no smt a: 47222624.49

1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: NUMA (Bogo Ops/s, more is better):

no smt b: 19.78
no smt a: 20.51
smt d: 24.71
smt b: 24.77
smt c: 24.79
smt a: 24.82
c: 478.05
a: 483.90
b: 498.67

1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN


oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better):

smt c: 21.03010 (MIN: 17.96)
smt d: 20.97260 (MIN: 18.21)
smt b: 20.57200 (MIN: 18.25)
smt a: 20.26950 (MIN: 17.68)
no smt b: 9.31751 (MIN: 8.07)
no smt a: 9.25292 (MIN: 7.79)
c: 7.38116 (MIN: 6.77)
a: 7.35746 (MIN: 4.85)
b: 7.28243 (MIN: 6.58)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better):

a: 16.30 (MIN: 7.91 / MAX: 51.62)
b: 15.54 (MIN: 8.28 / MAX: 57.72)
c: 15.10 (MIN: 6.94 / MAX: 60.59)
smt b: 6.03 (MIN: 5.17 / MAX: 38.35)
smt a: 6.02 (MIN: 5.27 / MAX: 25.12)
no smt b: 6.01 (MIN: 5.02 / MAX: 36.88)
no smt a: 6.01 (MIN: 5.2 / MAX: 37.8)
smt d: 6.00 (MIN: 5.21 / MAX: 25.51)
smt c: 5.99 (MIN: 5.13 / MAX: 31.29)

1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better):

a: 2941.47
b: 3085.53
c: 3174.04
smt b: 7949.05
smt a: 7953.21
no smt a: 7979.05
no smt b: 7979.82
smt d: 7990.47
smt c: 7993.18

1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Stress-NG

Stress-NG 0.15.04 — Test: Pthread (Bogo Ops/s, more is better)
no smt b: 67451.30 | no smt a: 68076.03 | smt a: 74978.51 | smt c: 77834.39 | smt d: 91735.75 | b: 109356.78 | a: 109397.78 | c: 109609.15 | smt b: 180407.59
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
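
The RocksDB test names below correspond to workloads of its bundled db_bench tool; a rough sketch of equivalent invocations (key counts and thread counts are illustrative assumptions, not the values used in this result file):

```shell
# Hypothetical db_bench runs matching the test names in this section.
db_bench --benchmarks=readrandom --num=100000000 --threads=192
db_bench --benchmarks=readwhilewriting --num=100000000 --threads=192
db_bench --benchmarks=readrandomwriterandom --num=100000000 --threads=192
```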

RocksDB 7.9.2 — Test: Random Read (Op/s, more is better)
b: 466888888 | c: 468069792 | a: 468231434 | no smt a: 1209611055 | no smt b: 1213540299 | smt a: 1225662852 | smt b: 1231168093 | smt d: 1231916197 | smt c: 1234570512
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Stress-NG

Stress-NG 0.15.04 — Test: Matrix Math (Bogo Ops/s, more is better)
b: 382304.04 | a: 382305.69 | c: 382328.50 | no smt b: 925984.24 | no smt a: 932248.18 | smt a: 946032.88 | smt b: 946660.88 | smt d: 951700.17 | smt c: 952461.48
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 — Test: CPU Cache (Bogo Ops/s, more is better)
smt d: 40.56 | no smt a: 40.94 | smt c: 42.55 | smt b: 44.38 | smt a: 47.58 | no smt b: 55.84 | b: 67.21 | a: 77.51 | c: 97.04
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 — Test: CPU Stress (Bogo Ops/s, more is better)
a: 205134.87 | b: 217072.27 | c: 217304.19 | no smt b: 326819.72 | no smt a: 328297.04 | smt a: 487359.39 | smt b: 487987.32 | smt d: 489998.75 | smt c: 490300.48
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 — Test: Vector Math (Bogo Ops/s, more is better)
b: 556797.27 | c: 556833.50 | a: 556875.59 | no smt a: 920216.27 | no smt b: 920642.34 | smt a: 1291689.33 | smt b: 1291704.57 | smt c: 1295970.28 | smt d: 1300693.93
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 — Test: Mutex (Bogo Ops/s, more is better)
a: 59479783.64 | c: 59929401.40 | b: 60031543.97 | no smt a: 63581395.67 | no smt b: 65500297.73 | smt b: 135855921.28 | smt a: 136339636.91 | smt d: 137579874.56 | smt c: 138939958.50
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 — Test: Crypto (Bogo Ops/s, more is better)
a: 203073.15 | c: 203095.44 | b: 203147.15 | no smt a: 435615.35 | no smt b: 437065.75 | smt a: 466292.12 | smt b: 466609.22 | smt d: 468006.00 | smt c: 468159.21
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN

oneDNN 3.0 — Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
smt c: 3.60194 (min 3.08) | smt d: 3.50563 (min 3.19) | smt b: 3.49453 (min 3.11) | smt a: 3.39151 (min 2.93) | no smt a: 2.19645 (min 2) | no smt b: 2.12379 (min 1.9) | c: 1.56742 (min 1.39) | a: 1.56461 (min 1.41) | b: 1.56257 (min 1.41)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 — Test: SENDFILE (Bogo Ops/s, more is better)
c: 1891202.37 | b: 1913590.77 | a: 1950323.96 | no smt b: 3282827.50 | no smt a: 3284433.20 | smt b: 4329824.38 | smt a: 4329963.26 | smt c: 4351871.12 | smt d: 4351920.42
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 — Test: Function Call (Bogo Ops/s, more is better)
b: 621003.92 | a: 621015.34 | c: 621041.93 | no smt a: 829106.63 | no smt b: 829445.38 | smt b: 1413555.44 | smt a: 1414423.65 | smt c: 1422505.51 | smt d: 1425621.44
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN

oneDNN 3.0 — Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
smt d: 0.602831 (min 0.48) | smt b: 0.600199 (min 0.38) | smt c: 0.593299 (min 0.39) | smt a: 0.591738 (min 0.43) | no smt a: 0.316665 (min 0.24) | no smt b: 0.293564 (min 0.22) | a: 0.263997 (min 0.18) | b: 0.263373 (min 0.2) | c: 0.262920 (min 0.2)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 — Test: Atomic (Bogo Ops/s, more is better)
a: 174.72 | smt d: 182.80 | c: 183.33 | smt c: 184.24 | smt a: 184.34 | smt b: 186.64 | b: 223.29 | no smt b: 395.95 | no smt a: 400.03
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
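
A sketch of how such a run is launched with the deepsparse.benchmark utility; the SparseZoo model stub below is an illustrative assumption rather than one taken from this result file:

```shell
# Hypothetical deepsparse.benchmark run in the asynchronous multi-stream
# scenario used throughout this section; reports items/sec.
deepsparse.benchmark \
    "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none" \
    --scenario async --time 60
```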

Neural Magic DeepSparse 1.3.2 — Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
b: 42.83 | c: 42.92 | a: 42.96 | smt c: 96.91 | smt a: 97.29 | no smt b: 97.34 | smt b: 97.35 | no smt a: 97.40 | smt d: 97.89

Stress-NG

Stress-NG 0.15.04 — Test: Glibc Qsort Data Sorting (Bogo Ops/s, more is better)
a: 1122.39 | c: 1125.86 | b: 1132.35 | no smt b: 1963.75 | no smt a: 1978.80 | smt d: 2516.09 | smt c: 2517.26 | smt b: 2520.38 | smt a: 2564.48
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 — Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
c: 42.74 | b: 42.90 | a: 42.99 | smt d: 96.72 | smt b: 96.91 | smt c: 97.00 | smt a: 97.02 | no smt b: 97.38 | no smt a: 97.44

oneDNN

oneDNN 3.0 — Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
smt d: 1.23340 (min 1.08) | smt c: 1.20146 (min 1.08) | smt a: 1.20049 (min 1.04) | smt b: 1.18135 (min 1.08) | no smt b: 0.670574 (min 0.55) | no smt a: 0.664359 (min 0.56) | c: 0.556851 (min 0.53) | a: 0.555277 (min 0.53) | b: 0.551966 (min 0.49)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 — Test: Glibc C String Functions (Bogo Ops/s, more is better)
b: 16168655.17 | a: 16257100.25 | c: 16537965.14 | no smt a: 26755146.69 | no smt b: 28009942.56 | smt c: 34602827.77 | smt a: 35687958.37 | smt b: 35996976.13 | smt d: 36111781.82
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 — Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
a: 1009.85 | b: 1012.35 | c: 1013.74 | no smt b: 2203.16 | no smt a: 2204.75 | smt a: 2230.31 | smt b: 2245.80 | smt c: 2253.14 | smt d: 2255.42

oneDNN

oneDNN 3.0 — Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
smt c: 0.634992 (min 0.55) | smt d: 0.627133 (min 0.4) | smt a: 0.621776 (min 0.45) | smt b: 0.620907 (min 0.41) | no smt b: 0.340498 (min 0.28) | no smt a: 0.316779 (min 0.25) | b: 0.291387 (min 0.23) | a: 0.290200 (min 0.25) | c: 0.285432 (min 0.24)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 — Test: Hash (Bogo Ops/s, more is better)
a: 18954936.00 | b: 18955118.10 | c: 18961773.18 | no smt a: 27408413.10 | no smt b: 27422305.53 | smt d: 41965286.06 | smt a: 41966139.67 | smt c: 41972765.93 | smt b: 41989522.68
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

OpenVINO

OpenVINO 2022.3 — Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better)
b: 95.48 | c: 95.59 | a: 95.76 | smt a: 208.13 | smt b: 208.19 | no smt a: 208.82 | smt d: 209.24 | no smt b: 210.14 | smt c: 210.19
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 — Model: Face Detection FP16 - Device: CPU (ms, fewer is better)
b: 962.63 (min 893.43 / max 1015.71) | c: 962.15 (min 888.7 / max 1017.92) | a: 962.10 (min 879.24 / max 1018.81) | smt b: 439.50 (min 424.22 / max 477.66) | smt a: 439.41 (min 410.81 / max 465.4) | no smt b: 438.99 (min 427.57 / max 484.31) | no smt a: 438.29 (min 416.93 / max 496.86) | smt d: 437.68 (min 394.89 / max 478.58) | smt c: 437.51 (min 400.05 / max 473.91)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 — Model: Face Detection FP16 - Device: CPU (FPS, more is better)
a: 49.72 | b: 49.72 | c: 49.77 | smt b: 108.89 | smt a: 108.93 | no smt b: 109.04 | no smt a: 109.08 | smt d: 109.23 | smt c: 109.36
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 — Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better)
c: 9485.65 | a: 9494.22 | b: 9496.95 | smt a: 20610.23 | smt b: 20610.62 | smt d: 20679.68 | smt c: 20684.29 | no smt a: 20836.10 | no smt b: 20849.75
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 — Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better)
b: 500.83 (min 418.76 / max 546.62) | c: 500.30 (min 410.04 / max 531.93) | a: 499.32 (min 264.54 / max 537.26) | smt a: 230.21 (min 211.56 / max 253.58) | smt b: 230.06 (min 210.58 / max 251.42) | no smt a: 229.46 (min 217.93 / max 265.81) | smt d: 228.94 (min 214.92 / max 248.29) | no smt b: 228.01 (min 212.99 / max 267.17) | smt c: 227.93 (min 210.46 / max 252.96)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

oneDNN 3.0 — Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
smt b: 3.59643 (min 2.7) | smt a: 3.38833 (min 2.76) | smt c: 3.36535 (min 2.7) | smt d: 2.97427 (min 2.3) | no smt b: 2.02833 (min 1.43) | no smt a: 2.02286 (min 1.56) | a: 1.87777 (min 1.23) | b: 1.72975 (min 1.27) | c: 1.63860 (min 1.2)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 — Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
smt d: 1957.77 (min 1929.47) | smt a: 1912.16 (min 1891.03) | smt b: 1890.28 (min 1864.29) | smt c: 1888.21 (min 1860.29) | no smt a: 1148.26 (min 1108.42) | no smt b: 1124.75 (min 1091.36) | a: 907.49 (min 897.95) | c: 906.14 (min 898.41) | b: 903.27 (min 894.76)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

OpenVINO 2022.3 — Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better)
c: 4910.13 | a: 4914.17 | b: 4915.42 | smt a: 10544.03 | smt b: 10549.15 | no smt b: 10549.24 | no smt a: 10556.93 | smt d: 10587.30 | smt c: 10603.84
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 — Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better)
c: 9.76 (min 4.98 / max 36.12) | a: 9.76 (min 5.03 / max 28.1) | b: 9.75 (min 5.25 / max 35.53) | smt b: 4.54 (min 4.11 / max 42.14) | smt a: 4.54 (min 4.07 / max 35.16) | no smt b: 4.54 (min 4.09 / max 55.78) | no smt a: 4.54 (min 4.12 / max 27.91) | smt d: 4.52 (min 4.03 / max 45.52) | smt c: 4.52 (min 4.12 / max 33.17)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 — Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
c: 438.44 | b: 439.04 | a: 440.85 | smt b: 927.68 | smt a: 929.02 | smt d: 931.82 | smt c: 937.99 | no smt b: 938.91 | no smt a: 941.24

oneDNN

oneDNN 3.0 — Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
smt b: 1937.65 (min 1908.65) | smt c: 1926.15 (min 1901.09) | smt d: 1883.76 (min 1850.75) | smt a: 1830.41 (min 1807.97) | no smt a: 1151.25 (min 1070.35) | no smt b: 1119.17 (min 1086.08) | c: 915.08 (min 906.26) | b: 912.11 (min 902.43) | a: 911.25 (min 903.01)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 — Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
smt c: 1925.68 (min 1901.2) | smt d: 1912.74 (min 1890.93) | smt b: 1894.07 (min 1870.66) | smt a: 1888.10 (min 1865.76) | no smt a: 1147.34 (min 1109.85) | no smt b: 1139.63 (min 1100.83) | b: 917.08 (min 909.14) | c: 911.76 (min 901.88) | a: 909.69 (min 901.1)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 — Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
c: 149.69 | a: 149.74 | b: 149.82 | no smt b: 308.30 | no smt a: 308.43 | smt b: 314.25 | smt a: 314.62 | smt c: 315.33 | smt d: 316.03

Neural Magic DeepSparse 1.3.2 — Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
a: 619.75 | b: 619.94 | c: 625.55 | no smt a: 1291.97 | no smt b: 1292.80 | smt d: 1301.18 | smt c: 1305.71 | smt b: 1306.58 | smt a: 1306.67

Neural Magic DeepSparse 1.3.2 — Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
a: 314.84 | b: 316.29 | c: 316.63 | no smt b: 647.76 | no smt a: 649.30 | smt a: 658.21 | smt b: 658.34 | smt d: 660.21 | smt c: 662.26

OpenVINO

OpenVINO 2022.3 — Model: Person Detection FP32 - Device: CPU (FPS, more is better)
c: 27.89 | a: 27.97 | b: 28.23 | no smt a: 57.18 | no smt b: 57.19 | smt a: 57.29 | smt b: 57.31 | smt d: 57.62 | smt c: 57.71
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 — Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better)
c: 8.13 (min 3.83 / max 59.87) | b: 8.13 (min 5.35 / max 55.84) | a: 8.11 (min 5.39 / max 69.87) | smt b: 3.96 (min 3.66 / max 32.83) | smt a: 3.95 (min 3.61 / max 34.38) | no smt b: 3.95 (min 3.68 / max 42.53) | no smt a: 3.95 (min 3.61 / max 38) | smt d: 3.93 (min 3.61 / max 23.62) | smt c: 3.93 (min 3.61 / max 42.82)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 — Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better)
b: 5894.27 | c: 5898.79 | a: 5908.89 | smt b: 12109.64 | smt a: 12117.16 | no smt a: 12119.66 | no smt b: 12128.38 | smt c: 12168.05 | smt d: 12172.92
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 — Model: Person Detection FP32 - Device: CPU (ms, fewer is better)
c: 1704.02 (min 828.99 / max 1969.02) | a: 1701.70 (min 1395.71 / max 2063.97) | b: 1685.26 (min 891.16 / max 1979.37) | no smt b: 834.17 (min 732.01 / max 1006.46) | no smt a: 833.99 (min 723.91 / max 1011.94) | smt a: 833.15 (min 725.84 / max 1017.38) | smt b: 832.70 (min 723.65 / max 1031.69) | smt d: 828.01 (min 716.19 / max 1018.34) | smt c: 826.67 (min 724.1 / max 1006.75)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 — Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
c: 400.04 | b: 401.21 | a: 401.78 | no smt a: 813.98 | smt d: 816.76 | no smt b: 817.63 | smt a: 817.75 | smt c: 818.60 | smt b: 822.16

Stress-NG

Stress-NG 0.15.04 — Test: Malloc (Bogo Ops/s, more is better)
a: 312709034.05 | c: 313768771.26 | b: 314418461.02 | no smt b: 456651508.04 | no smt a: 456657338.84 | smt a: 634718750.41 | smt b: 634995429.66 | smt c: 639757070.52 | smt d: 640853365.54
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

OpenVINO

OpenVINO 2022.3 — Model: Person Detection FP16 - Device: CPU (FPS, more is better)
c: 28.33 | b: 28.42 | a: 28.47 | no smt b: 57.49 | no smt a: 57.58 | smt b: 57.67 | smt a: 57.72 | smt d: 58.00 | smt c: 58.05
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 — Model: Person Detection FP16 - Device: CPU (ms, fewer is better)
c: 1679.03 (min 865.58 / max 1995.12) | b: 1675.96 (min 1231.52 / max 1967.75) | a: 1671.46 (min 924.15 / max 1977.46) | no smt b: 830.04 (min 722.6 / max 1015.67) | no smt a: 828.67 (min 730.29 / max 1036.1) | smt a: 827.07 (min 724.84 / max 1037.91) | smt b: 827.03 (min 723.77 / max 1003.25) | smt d: 822.50 (min 717.65 / max 997.43) | smt c: 821.93 (min 725.25 / max 1010.56)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 — Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better)
b: 74353.41 | c: 74486.08 | a: 74495.17 | no smt b: 127770.39 | no smt a: 127833.52 | smt b: 147582.87 | smt c: 148047.13 | smt a: 148316.88 | smt d: 149647.68
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

RocksDB

RocksDB 7.9.2 — Test: Read While Writing (Op/s, more is better)
no smt a: 7643831 | no smt b: 7913568 | c: 8316379 | b: 8620352 | a: 9296185 | smt d: 13098097 | smt b: 13830689 | smt c: 13948914 | smt a: 15317250
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 — Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
a: 1566.56 | c: 1571.83 | b: 1573.75 | no smt b: 3086.36 | no smt a: 3107.30 | smt b: 3114.43 | smt a: 3114.68 | smt d: 3116.09 | smt c: 3132.38

oneDNN

oneDNN 3.0 — Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
b: 0.578466 (min 0.4) | smt d: 0.457554 (min 0.34) | smt a: 0.451472 (min 0.4) | smt b: 0.450261 (min 0.37) | smt c: 0.447820 (min 0.37) | c: 0.417188 (min 0.4) | a: 0.415446 (min 0.4) | no smt a: 0.291687 (min 0.27) | no smt b: 0.291229 (min 0.27)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

OpenVINO 2022.3 — Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better)
c: 535.78 | a: 540.41 | b: 541.03 | no smt a: 1045.81 | no smt b: 1048.74 | smt a: 1049.16 | smt b: 1050.37 | smt d: 1057.74 | smt c: 1058.57
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 — Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better)
c: 89.46 (min 42.16 / max 123.4) | a: 88.71 (min 44.2 / max 132.86) | b: 88.61 (min 47.57 / max 124.67) | no smt a: 45.86 (min 39.29 / max 91.13) | no smt b: 45.73 (min 39.68 / max 86.83) | smt a: 45.71 (min 39.78 / max 73.33) | smt b: 45.66 (min 38.82 / max 75.65) | smt d: 45.34 (min 39.4 / max 73.83) | smt c: 45.31 (min 39.35 / max 73.19)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

oneDNN 3.0 — Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
smt c: 0.676124 (min 0.49) | smt b: 0.675664 (min 0.55) | smt a: 0.674455 (min 0.49) | smt d: 0.672115 (min 0.49) | a: 0.357017 (min 0.31) | b: 0.356875 (min 0.31) | c: 0.353175 (min 0.31) | no smt b: 0.347231 (min 0.3) | no smt a: 0.342836 (min 0.28)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 — Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
smt c: 0.995710 (min 0.87) | smt a: 0.975357 (min 0.92) | smt d: 0.975091 (min 0.91) | smt b: 0.971704 (min 0.82) | no smt b: 0.629026 (min 0.51) | no smt a: 0.556069 (min 0.46) | c: 0.546991 (min 0.5) | a: 0.534450 (min 0.49) | b: 0.513068 (min 0.47)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 — Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
b: 0.305731 (min 0.28) | c: 0.305599 (min 0.28) | a: 0.304972 (min 0.28) | smt b: 0.291087 (min 0.23) | smt a: 0.279638 (min 0.23) | smt c: 0.275917 (min 0.23) | smt d: 0.247825 (min 0.23) | no smt b: 0.164845 (min 0.15) | no smt a: 0.160361 (min 0.15)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 — Test: MEMFD (Bogo Ops/s, more is better)
no smt a: 303.56 | no smt b: 308.67 | smt d: 394.49 | smt c: 413.34 | smt a: 464.70 | b: 507.74 | c: 507.97 | a: 518.46 | smt b: 564.05
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 — Test: Memory Copying (Bogo Ops/s, more is better)
no smt b: 10949.90 | no smt a: 11342.69 | smt d: 13370.33 | smt c: 15077.69 | smt b: 15430.18 | smt a: 15914.10 | a: 20106.51 | c: 20297.91 | b: 20340.40
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 — Test: Forking (Bogo Ops/s, more is better)
smt d: 34685.85 | smt c: 34917.39 | smt b: 36020.16 | smt a: 36266.02 | no smt a: 43094.96 | no smt b: 45685.28 | a: 58156.36 | b: 58664.97 | c: 64299.62
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 — Test: Futex (Bogo Ops/s, more is better)
smt c: 2067499.58 | smt d: 2077119.76 | smt a: 2333781.39 | smt b: 2396325.93 | c: 2794473.75 | a: 2794694.37 | b: 2805836.52 | no smt a: 3746361.52 | no smt b: 3802292.60
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package using the water_GMX50 data set. This test profile allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
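
A rough sketch of running such a GROMACS CPU benchmark; the .mdp/.gro/.top input file names are illustrative assumptions, not the exact files used by this test profile:

```shell
# Hypothetical GROMACS run: preprocess the water_GMX50 inputs into a .tpr,
# then run mdrun on all CPU cores; GROMACS reports the ns/day figure.
gmx grompp -f pme.mdp -c conf.gro -p topol.top -o bench.tpr
gmx mdrun -s bench.tpr -nsteps 1000 -ntomp "$(nproc)"
```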

GROMACS 2023 — Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better)
a: 10.57 | c: 10.59 | b: 10.61 | smt a: 18.82 | no smt b: 18.84 | smt b: 19.02 | smt c: 19.05 | smt d: 19.08 | no smt a: 19.18
1. (CXX) g++ options: -O3

RocksDB

RocksDB 7.9.2 — Test: Read Random Write Random (Op/s, more is better)
smt c: 1613913 | smt d: 1752904 | smt b: 1761416 | smt a: 1787682 | no smt b: 2063831 | no smt a: 2079804 | b: 2891962 | c: 2910023 | a: 2926458
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

oneDNN

oneDNN 3.0 — Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
a: 1.17004 (min 1.07) | c: 1.16919 (min 1.07) | b: 1.16696 (min 1.07) | smt a: 0.978590 (min 0.93) | smt d: 0.973984 (min 0.92) | smt b: 0.973555 (min 0.92) | smt c: 0.972863 (min 0.92) | no smt a: 0.664295 (min 0.63) | no smt b: 0.651301 (min 0.62)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 — Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
a: 0.693367 (min 0.64) | c: 0.692667 (min 0.65) | b: 0.689064 (min 0.64) | smt d: 0.630630 (min 0.48) | smt b: 0.551783 (min 0.49) | smt c: 0.545118 (min 0.45) | smt a: 0.538539 (min 0.48) | no smt a: 0.403590 (min 0.38) | no smt b: 0.400325 (min 0.38)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

OpenVINO 2022.3 — Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better)
b: 5720.90 | c: 5724.81 | a: 5750.74 | smt b: 9680.44 | smt a: 9746.08 | no smt b: 9758.74 | smt d: 9767.99 | smt c: 9773.26 | no smt a: 9784.09
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 — Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better)
b: 8.38 (min 6.4 / max 32.33) | c: 8.37 (min 6.83 / max 50.24) | a: 8.34 (min 6.61 / max 55.54) | smt b: 4.95 (min 4.54 / max 24.23) | smt a: 4.92 (min 4.52 / max 34.8) | smt d: 4.91 (min 4.51 / max 27.27) | no smt b: 4.91 (min 4.49 / max 34.72) | smt c: 4.90 (min 4.5 / max 29.95) | no smt a: 4.90 (min 4.46 / max 56.52)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
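
A sketch of such a memtier_benchmark run against a local memcached instance, with the 1:5 set:get ratio shown below; the thread/client counts are illustrative assumptions:

```shell
# Hypothetical memtier_benchmark run: start memcached, then drive it with a
# 1:5 set:get ratio for 60 seconds; reports aggregate Ops/sec.
memcached -d -p 11211
memtier_benchmark --protocol=memcache_binary --port=11211 --ratio=1:5 \
    --threads=16 --clients=32 --test-time=60
```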

Memcached 1.6.18 — Set To Get Ratio: 1:5 (Ops/sec, more is better)
no smt a: 2444886.82 | no smt b: 2460416.33 | smt b: 2507331.29 | smt c: 2517750.47 | smt a: 2520253.00 | smt d: 2527084.87 | a: 4143044.21 | c: 4155273.46 | b: 4162812.47
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Stress-NG

Stress-NG 0.15.04 — Test: System V Message Passing (Bogo Ops/s, more is better)
no smt b: 7372780.72 | no smt a: 7402514.74 | smt b: 8586858.51 | smt c: 8609357.68 | smt a: 10103952.13 | b: 10471889.97 | a: 10473084.09 | c: 10475486.39 | smt d: 12418451.71
1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN

oneDNN 3.0 — Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
smt b: 2.76140 (min 2.46) | smt a: 2.70807 (min 2.09) | smt c: 2.70235 (min 2.27) | smt d: 2.67303 (min 2.09) | b: 1.84172 (min 1.76) | a: 1.83147 (min 1.73) | c: 1.82264 (min 1.71) | no smt b: 1.70057 (min 1.5) | no smt a: 1.65041 (min 1.41)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
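
A sketch of an equivalent Kvazaar encode of the Bosphorus 4K clip used below; the input file name is an illustrative assumption:

```shell
# Hypothetical Kvazaar encode: raw 4K input, preset as in the graphs below;
# Kvazaar reports the encoding FPS on completion.
kvazaar -i Bosphorus_3840x2160.yuv --input-res 3840x2160 \
    --preset slow -o bosphorus.hevc
```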

Kvazaar 2.2 — Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, more is better)
a: 40.63 | c: 40.76 | b: 40.86 | no smt a: 47.13 | no smt b: 47.43 | smt a: 64.63 | smt c: 65.49 | smt d: 66.08 | smt b: 66.22
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
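The set-to-get ratios used by this test profile (1:5, 1:10, 1:100) describe the write/read mix that memtier_benchmark issues against the server. A sketch of how such a mix breaks down over a fixed operation budget (this helper is purely illustrative; memtier_benchmark handles the mix internally):

```python
# Split a total operation count according to a set:get ratio such as 1:10.
# Illustrative only -- memtier_benchmark applies the ratio itself.
def split_ops(total_ops: int, sets: int, gets: int) -> tuple:
    unit = total_ops / (sets + gets)
    return round(unit * sets), round(unit * gets)

# With a 1:10 ratio, one in every eleven operations is a set.
print(split_ops(1_100_000, 1, 10))  # prints "(100000, 1000000)"
```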

Memcached 1.6.18, Set To Get Ratio: 1:10 (Ops/sec, more is better):
  no smt b: 4220522.49 | no smt a: 4244908.83 | smt b: 4392934.75 | smt a: 4530948.62 | smt c: 4549313.42 | smt d: 4569476.86 | b: 6792746.34 | c: 6839975.87 | a: 6861088.89
  (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

oneDNN

oneDNN 3.0, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  smt b: 0.408074 (min 0.27) | smt a: 0.407430 (min 0.27) | smt c: 0.394377 (min 0.29) | smt d: 0.389223 (min 0.29) | b: 0.312721 (min 0.28) | c: 0.312289 (min 0.28) | a: 0.311405 (min 0.3) | no smt b: 0.255796 (min 0.18) | no smt a: 0.254845 (min 0.18)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Kvazaar

Kvazaar 2.2, Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, more is better):
  b: 41.39 | a: 41.40 | c: 41.47 | no smt b: 47.93 | no smt a: 47.97 | smt d: 65.49 | smt a: 65.56 | smt c: 65.59 | smt b: 65.69
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar, developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, more is better):
  b: 29.29 | a: 29.37 | c: 29.41 | no smt a: 34.59 | no smt b: 34.68 | smt a: 45.56 | smt b: 46.25 | smt d: 46.25 | smt c: 46.34

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better):
  b: 111378.39 | c: 112186.25 | a: 112346.36 | no smt a: 160545.41 | no smt b: 162294.47 | smt d: 172228.34 | smt c: 173620.31 | smt a: 173926.92 | smt b: 175754.35
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

oneDNN 3.0, Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  b: 0.712495 (min 0.68) | a: 0.711588 (min 0.68) | c: 0.708563 (min 0.67) | smt d: 0.680395 (min 0.53) | smt c: 0.676798 (min 0.53) | smt b: 0.672768 (min 0.53) | smt a: 0.672585 (min 0.52) | no smt a: 0.462386 (min 0.42) | no smt b: 0.461174 (min 0.43)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Memcached

Memcached 1.6.18, Set To Get Ratio: 1:100 (Ops/sec, more is better):
  no smt a: 3192061.68 | no smt b: 3216133.91 | smt b: 4357453.76 | smt a: 4383314.65 | smt c: 4394194.49 | smt d: 4403267.42 | c: 4852421.67 | b: 4876951.36 | a: 4878860.00
  (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Stress-NG

Stress-NG 0.15.04, Test: Semaphores (Bogo Ops/s, more is better):
  no smt a: 13141129.68 | no smt b: 13192391.85 | c: 18088584.67 | b: 18100474.36 | a: 18128283.29 | smt d: 19842969.28 | smt c: 19866440.86 | smt b: 19927313.52 | smt a: 20047519.05
  (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig for building all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1, Build: defconfig (Seconds, fewer is better):
  a: 25.77 | b: 25.75 | c: 25.68 | no smt a: 17.61 | no smt b: 17.59 | smt a: 17.38 | smt d: 17.08 | smt c: 17.05 | smt b: 17.02

Stress-NG

Stress-NG 0.15.04, Test: Poll (Bogo Ops/s, more is better):
  no smt b: 10393402.01 | no smt a: 10458275.24 | a: 12653709.64 | b: 12661687.46 | c: 12676101.41 | smt c: 15228320.07 | smt b: 15341564.16 | smt a: 15359471.73 | smt d: 15403597.24
  (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1, Benchmark: vklBenchmark Scalar (Items / Sec, more is better):
  c: 549 (min 60 / max 5475) | a: 556 (min 61 / max 5994) | b: 557 (min 61 / max 6019) | no smt a: 647 (min 101 / max 4024) | no smt b: 652 (min 102 / max 3990) | smt a: 764 (min 139 / max 3808) | smt d: 781 (min 139 / max 3776) | smt c: 793 (min 139 / max 3583) | smt b: 810 (min 138 / max 3650)

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  no smt a: 7.5856 | smt a: 7.5225 | smt c: 7.3550 | smt d: 7.0640 | smt b: 6.8844 | no smt b: 6.4174 | a: 5.1812 | c: 5.1696 | b: 5.1633

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, more is better):
  no smt a: 131.76 | smt a: 132.87 | smt c: 135.89 | smt d: 141.47 | smt b: 145.17 | no smt b: 155.74 | a: 192.88 | c: 193.31 | b: 193.53
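The paired DeepSparse graphs report the same single-stream runs two ways: ms/batch latency and items/sec throughput are approximately reciprocals at batch size 1. A quick check against the DistilBERT mnli figures for run "b":

```python
# Single-stream latency vs. throughput from the DistilBERT mnli results
# above (run "b"): 5.1633 ms/batch should correspond closely to the
# reported 193.53 items/sec.
latency_ms = 5.1633
throughput = 1000.0 / latency_ms  # items per second at batch size 1
print(f"{throughput:.2f}")        # prints "193.67"
```

The small gap (193.67 computed vs. 193.53 reported) reflects per-batch overhead outside the measured latency.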

uvg266

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, more is better):
  a: 33.03 | c: 33.10 | b: 33.13 | no smt a: 38.38 | no smt b: 38.65 | smt d: 46.21 | smt c: 46.57 | smt b: 46.62 | smt a: 47.56

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, more is better):
  smt c: 70.14 | no smt b: 71.89 | no smt a: 72.13 | smt d: 75.52 | smt b: 77.41 | smt a: 77.88 | c: 98.59 | b: 98.62 | a: 99.15

Neural Magic DeepSparse 1.3.2, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  smt c: 14.25 | no smt b: 13.90 | no smt a: 13.86 | smt d: 13.23 | smt b: 12.91 | smt a: 12.83 | c: 10.14 | b: 10.14 | a: 10.08

Neural Magic DeepSparse 1.3.2, Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  smt b: 7.3054 | smt a: 7.1865 | smt d: 7.1490 | smt c: 6.1057 | no smt b: 6.0650 | no smt a: 6.0118 | c: 5.5007 | b: 5.4980 | a: 5.3018

Neural Magic DeepSparse 1.3.2, Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better):
  smt b: 136.83 | smt a: 139.10 | smt d: 139.82 | smt c: 163.71 | no smt b: 164.81 | no smt a: 166.27 | c: 181.72 | b: 181.81 | a: 188.54

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  smt d: 15.13 | smt b: 14.64 | no smt a: 14.46 | no smt b: 14.42 | smt c: 14.20 | smt a: 13.64 | a: 11.36 | b: 11.19 | c: 11.16

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better):
  smt d: 66.08 | smt b: 68.29 | no smt a: 69.11 | no smt b: 69.33 | smt c: 70.38 | smt a: 73.29 | a: 87.96 | b: 89.34 | c: 89.55

uvg266

uvg266 0.4.1, Video Input: Bosphorus 1080p - Video Preset: Super Fast (Frames Per Second, more is better):
  smt a: 178.49 | smt c: 178.78 | no smt a: 181.67 | no smt b: 216.15 | smt d: 218.91 | smt b: 220.79 | c: 237.96 | a: 238.68 | b: 239.92

uvg266 0.4.1, Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, more is better):
  smt b: 179.56 | no smt a: 186.24 | no smt b: 209.85 | smt a: 216.15 | smt c: 218.28 | smt d: 224.31 | c: 238.77 | b: 240.91 | a: 240.98

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
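The level/speed tradeoff that the graphs below illustrate for zstd exists in most compressors: higher levels spend more CPU time searching for matches in exchange for a smaller output. Since zstd bindings are not in the Python standard library, a sketch with zlib (an assumption standing in for zstd here) shows the same effect:

```python
import time
import zlib

# Higher compression levels trade speed for ratio; demonstrated with zlib
# since zstd itself is not in the Python standard library.
data = b"the quick brown fox jumps over the lazy dog " * 20000

for level in (1, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(compressed)} bytes in {elapsed * 1000:.1f} ms")
```

The same shape shows up in the zstd results: level 19 compresses at single-digit MB/s while level 3 runs at several GB/s on this hardware.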

Zstd Compression 1.5.4, Compression Level: 12 - Compression Speed (MB/s, more is better):
  smt c: 249.8 | smt a: 254.1 | smt d: 256.2 | smt b: 259.1 | no smt b: 278.2 | no smt a: 279.9 | c: 330.1 | a: 330.8 | b: 332.0
  (CC) gcc options: -O3 -pthread -lz

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
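RocksDB is embeddable rather than client/server: the database lives in a local directory and is read and written directly in-process. For readers unfamiliar with that model, a rough analogue using Python's stdlib dbm module (a far simpler store, used here purely as an illustration, not RocksDB itself) looks like:

```python
import dbm
import os
import tempfile

# Embedded key-value store usage pattern: open a local database file and
# read/write directly in-process -- no server round trips.
# (dbm is a stand-in; RocksDB offers the same shape via its own bindings.)
path = os.path.join(tempfile.mkdtemp(), "kv")
with dbm.open(path, "c") as db:   # "c": create if missing
    db[b"user:1"] = b"alice"
    db[b"user:2"] = b"bob"
    print(db[b"user:1"].decode())  # prints "alice"
```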

RocksDB 7.9.2, Test: Update Random (Op/s, more is better):
  smt b: 411513 | smt a: 413199 | smt c: 419985 | smt d: 420514 | no smt b: 452228 | no smt a: 462018 | c: 543572 | a: 544384 | b: 545556
  (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

RocksDB 7.9.2, Test: Sequential Fill (Op/s, more is better):
  smt b: 413708 | smt d: 414047 | smt a: 414168 | smt c: 414282 | no smt b: 464044 | no smt a: 465700 | a: 542256 | c: 544565 | b: 545396
  (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  smt b: 21.74 | smt d: 21.36 | smt a: 21.03 | no smt a: 20.74 | smt c: 20.49 | no smt b: 20.42 | b: 16.67 | a: 16.58 | c: 16.52

Neural Magic DeepSparse 1.3.2, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, more is better):
  smt b: 45.96 | smt d: 46.78 | smt a: 47.51 | no smt a: 48.18 | smt c: 48.77 | no smt b: 48.93 | b: 59.92 | a: 60.25 | c: 60.49

uvg266

uvg266 0.4.1, Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, more is better):
  smt d: 181.83 | no smt b: 183.75 | smt a: 183.99 | smt c: 192.25 | no smt a: 196.25 | smt b: 209.55 | b: 234.68 | a: 234.95 | c: 237.44

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million row web analytics dataset. The reported value is the geometric mean of the query processing rates across all of the separate queries performed. Learn more via the OpenBenchmarking.org test page.
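The "Geo Mean" figure ClickHouse reports aggregates the per-query rates with a geometric rather than arithmetic mean, so one very fast or very slow query cannot dominate the result. A minimal sketch of that aggregation (the example numbers are illustrative, not actual query results):

```python
import math

# Geometric mean, as used to aggregate per-query rates into one figure.
def geo_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Illustrative per-query rates (queries per minute), not measured data.
rates = [400.0, 625.0, 640.0]
print(round(geo_mean(rates), 2))
```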

ClickHouse 22.12.3.5, 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean, more is better):
  smt d: 515.11 (min 88.5 / max 6000) | smt a: 524.04 (min 90.5 / max 6666.67) | smt c: 524.75 (min 90.23 / max 6000) | smt b: 527.88 (min 90.63 / max 6666.67) | c: 621.88 (min 58.54 / max 6666.67) | a: 625.91 (min 56.13 / max 7500) | b: 627.20 (min 58.54 / max 6000) | no smt b: 649.52 (min 86.46 / max 6000) | no smt a: 666.43 (min 85.11 / max 6666.67)

RocksDB

RocksDB 7.9.2, Test: Random Fill (Op/s, more is better):
  smt d: 415428 | smt c: 416006 | smt b: 417882 | smt a: 423027 | no smt b: 468210 | no smt a: 478629 | a: 533927 | b: 534681 | c: 536551
  (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

ClickHouse

ClickHouse 22.12.3.5, 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, more is better):
  smt c: 495.85 (min 51.06 / max 6000) | smt d: 500.12 (min 69.61 / max 6000) | smt a: 500.13 (min 65.15 / max 6000) | smt b: 516.09 (min 89.55 / max 6000) | a: 600.70 (min 57.75 / max 6000) | b: 610.79 (min 58.14 / max 6666.67) | c: 614.24 (min 57.14 / max 6666.67) | no smt b: 622.73 (min 83.22 / max 5454.55) | no smt a: 635.48 (min 85.35 / max 6000)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  smt c: 10.3852 | no smt b: 9.9395 | no smt a: 9.8899 | a: 9.5962 | b: 9.5793 | smt d: 9.3656 | c: 8.8004 | smt b: 8.3506 | smt a: 8.1914

Neural Magic DeepSparse 1.3.2, Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better):
  smt c: 96.17 | no smt b: 100.51 | no smt a: 101.01 | a: 104.12 | b: 104.29 | smt d: 106.65 | c: 113.51 | smt b: 119.60 | smt a: 121.92

ClickHouse

ClickHouse 22.12.3.5, 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean, more is better):
  smt d: 527.80 (min 81.52 / max 6000) | smt c: 530.47 (min 92.02 / max 5454.55) | smt a: 534.82 (min 91.32 / max 6000) | smt b: 538.63 (min 89.82 / max 6000) | a: 625.16 (min 57.69 / max 6000) | b: 628.37 (min 58.37 / max 6666.67) | c: 636.18 (min 58.03 / max 6666.67) | no smt b: 662.01 (min 86.33 / max 6000) | no smt a: 665.00 (min 85.84 / max 6000)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13, Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second, more is better):
  smt a: 13.90 | smt b: 14.27 | no smt a: 14.48 | no smt b: 14.50 | smt d: 15.11 | smt c: 15.38 | a: 17.36 | b: 17.37 | c: 17.46
  (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

OpenVINO

OpenVINO 2022.3, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better):
  c: 0.66 (min 0.31 / max 22.67) | b: 0.66 (min 0.32 / max 19.99) | a: 0.65 (min 0.32 / max 20.68) | no smt b: 0.58 (min 0.5 / max 12.8) | no smt a: 0.57 (min 0.5 / max 13.02) | smt d: 0.53 (min 0.5 / max 8.86) | smt c: 0.53 (min 0.5 / max 9.15) | smt b: 0.53 (min 0.5 / max 9.61) | smt a: 0.53 (min 0.5 / max 24.2)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

uvg266

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, more is better):
  no smt b: 57.23 | smt c: 57.33 | smt b: 57.51 | no smt a: 57.78 | smt a: 57.96 | smt d: 58.76 | a: 70.56 | b: 70.68 | c: 71.13

OpenVKL

OpenVKL 1.3.1, Benchmark: vklBenchmark ISPC (Items / Sec, more is better):
  c: 1066 (min 180 / max 6427) | b: 1075 (min 179 / max 6442) | a: 1089 (min 179 / max 7194) | no smt a: 1098 (min 297 / max 4284) | no smt b: 1108 (min 297 / max 4208) | smt c: 1235 (min 392 / max 4332) | smt d: 1238 (min 392 / max 3738) | smt b: 1243 (min 390 / max 3501) | smt a: 1318 (min 391 / max 3855)

Zstd Compression

Zstd Compression 1.5.4, Compression Level: 8 - Compression Speed (MB/s, more is better):
  smt c: 1005.8 | smt a: 1023.9 | smt d: 1024.8 | smt b: 1122.4 | no smt a: 1227.5 | a: 1233.8 | no smt b: 1234.2 | b: 1239.3 | c: 1241.1
  (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4, Compression Level: 3 - Compression Speed (MB/s, more is better):
  smt b: 2513.4 | smt d: 2528.7 | smt c: 2604.6 | smt a: 2795.1 | no smt a: 2804.2 | no smt b: 2865.9 | a: 3033.9 | c: 3049.4 | b: 3095.0
  (CC) gcc options: -O3 -pthread -lz

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 6.0, Time To Compile (Seconds, fewer is better):
  a: 12.57 | c: 12.47 | b: 12.43 | no smt b: 10.66 | no smt a: 10.60 | smt c: 10.41 | smt a: 10.38 | smt d: 10.37 | smt b: 10.26

Kvazaar

Kvazaar 2.2, Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, more is better):
  smt c: 243.36 | smt a: 250.71 | no smt b: 268.77 | no smt a: 269.62 | smt d: 270.76 | smt b: 274.63 | b: 290.59 | c: 291.03 | a: 296.93
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, more is better):
  no smt a: 57.13 | smt c: 57.44 | smt d: 57.52 | smt b: 57.86 | no smt b: 58.09 | smt a: 59.30 | b: 68.79 | a: 68.82 | c: 69.04

Kvazaar

Kvazaar 2.2, Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second, more is better):
  smt a: 132.67 | smt d: 135.56 | smt c: 135.80 | smt b: 136.33 | b: 139.07 | a: 139.92 | c: 140.10 | no smt a: 155.56 | no smt b: 159.79
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2, Video Input: Bosphorus 1080p - Video Preset: Super Fast (Frames Per Second, more is better):
  smt d: 256.45 | smt b: 267.04 | no smt b: 280.18 | smt c: 288.09 | no smt a: 288.41 | smt a: 296.72 | c: 301.40 | b: 303.99 | a: 307.49
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 0.4.1, Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, more is better):
  no smt b: 58.33 | smt c: 58.34 | smt d: 58.66 | smt a: 59.06 | smt b: 59.20 | no smt a: 59.83 | b: 69.00 | a: 69.33 | c: 69.92

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  smt c: 33.99 | smt a: 32.30 | smt b: 32.24 | no smt b: 32.18 | smt d: 32.18 | no smt a: 31.92 | b: 28.71 | a: 28.62 | c: 28.57

Neural Magic DeepSparse 1.3.2, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better):
  smt c: 29.42 | smt a: 30.96 | smt b: 31.01 | no smt b: 31.07 | smt d: 31.07 | no smt a: 31.32 | b: 34.82 | a: 34.93 | c: 34.99

Zstd Compression

Zstd Compression 1.5.4, Compression Level: 3, Long Mode - Compression Speed (MB/s, more is better):
  a: 892.7 | b: 909.4 | c: 916.9 | no smt a: 955.6 | no smt b: 1032.3 | smt b: 1038.9 | smt c: 1046.6 | smt a: 1051.4 | smt d: 1059.0
  (CC) gcc options: -O3 -pthread -lz

Kvazaar

Kvazaar 2.2, Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, more is better):
  smt d: 136.42 | smt b: 138.33 | smt a: 140.78 | smt c: 140.89 | a: 143.31 | c: 143.81 | b: 144.21 | no smt a: 159.63 | no smt b: 161.40
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better):
  no smt b: 29.75 | smt d: 30.25 | smt a: 31.76 | smt b: 31.78 | no smt a: 31.91 | smt c: 31.93 | a: 34.83 | b: 35.10 | c: 35.12

Neural Magic DeepSparse 1.3.2, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  no smt b: 33.60 | smt d: 33.05 | smt a: 31.48 | smt b: 31.46 | no smt a: 31.33 | smt c: 31.31 | a: 28.71 | b: 28.49 | c: 28.47

RocksDB

RocksDB 7.9.2, Test: Random Fill Sync (Op/s, more is better):
  no smt b: 344040 | no smt a: 350673 | c: 356922 | b: 373002 | a: 376601 | smt a: 388052 | smt c: 394852 | smt b: 401295 | smt d: 404463
  (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  c: 1122.06 | b: 1118.44 | a: 1115.76 | smt b: 975.50 | smt d: 974.99 | smt a: 972.52 | smt c: 971.90 | no smt b: 955.24 | no smt a: 955.16

Neural Magic DeepSparse 1.3.2, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  c: 1117.73 | a: 1116.43 | b: 1116.29 | smt c: 971.67 | smt a: 970.22 | smt b: 969.21 | smt d: 966.76 | no smt a: 956.01 | no smt b: 955.34

Kvazaar

Kvazaar 2.2, Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, more is better):
  no smt b: 69.94 | no smt a: 70.02 | smt d: 73.24 | smt c: 73.57 | smt b: 74.11 | smt a: 75.91 | c: 80.31 | a: 80.90 | b: 81.61
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2, Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, more is better):
  no smt b: 271.45 | no smt a: 278.62 | smt b: 295.18 | smt a: 302.55 | smt c: 303.43 | smt d: 303.63 | b: 305.52 | c: 309.64 | a: 310.73
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2, Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, more is better):
  no smt b: 74.03 | no smt a: 75.50 | smt d: 75.50 | smt a: 76.64 | smt b: 76.70 | smt c: 78.72 | b: 80.68 | a: 81.90 | c: 84.55
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2, Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, more is better):
  no smt b: 74.37 | no smt a: 76.05 | smt c: 76.14 | smt b: 77.00 | smt a: 77.96 | smt d: 78.85 | c: 82.70 | a: 83.45 | b: 84.77
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenVINO

OpenVINO 2022.3, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better):
  c: 1.09 (min 0.49 / max 22.06) | b: 1.09 (min 0.48 / max 25.82) | a: 1.09 (min 0.49 / max 19.01) | no smt b: 1.01 (min 0.86 / max 8.72) | no smt a: 1.01 (min 0.86 / max 8.97) | smt d: 0.97 (min 0.86 / max 13.31) | smt b: 0.97 (min 0.87 / max 10.96) | smt a: 0.97 (min 0.87 / max 9.78) | smt c: 0.96 (min 0.87 / max 9.72)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (items/sec, more is better):
  no smt b: 175.32 | smt d: 183.95 | smt b: 188.41 | smt a: 188.58 | smt c: 189.41 | no smt a: 191.26 | b: 196.54 | a: 197.10 | c: 197.66

Neural Magic DeepSparse 1.3.2, Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  no smt b: 5.7015 | smt d: 5.4341 | smt b: 5.3053 | smt a: 5.3004 | smt c: 5.2773 | no smt a: 5.2262 | b: 5.0859 | a: 5.0715 | c: 5.0573

Neural Magic DeepSparse 1.3.2, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  a: 47.46 | b: 47.35 | c: 47.29 | smt a: 42.95 | no smt b: 42.84 | no smt a: 42.81 | smt b: 42.65 | smt c: 42.55 | smt d: 42.48

VP9 libvpx Encoding

VP9 libvpx Encoding 1.13, Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, more is better):
  smt c: 6.92 | smt b: 6.97 | no smt a: 7.04 | smt d: 7.15 | no smt b: 7.16 | smt a: 7.24 | a: 7.68 | c: 7.70 | b: 7.71
  (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

uvg266

uvg266 0.4.1, Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second, more is better):
  smt c: 79.63 | smt b: 79.95 | smt d: 80.00 | smt a: 80.17 | c: 81.10 | a: 81.16 | b: 81.41 | no smt a: 87.15 | no smt b: 88.13

uvg266 0.4.1, Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, more is better):
  smt d: 88.91 | smt c: 89.16 | smt a: 89.40 | smt b: 89.70 | c: 91.29 | a: 91.37 | b: 91.47 | no smt b: 96.67 | no smt a: 98.21

OpenVINO

OpenVINO 2022.3, Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better):
  c: 10.11 (min 5.43 / max 36.27) | b: 10.10 (min 5.47 / max 28.16) | a: 10.10 (min 5.46 / max 35.41) | smt b: 9.28 (min 8.13 / max 24.28) | smt a: 9.28 (min 8.1 / max 22.87) | smt d: 9.26 (min 8.12 / max 30.29) | smt c: 9.25 (min 8.12 / max 19.04) | no smt b: 9.18 (min 8.21 / max 24.31) | no smt a: 9.18 (min 8.17 / max 31.1)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Zstd Compression

Zstd Compression 1.5.4, Compression Level: 8, Long Mode - Compression Speed (MB/s, more is better):
  smt b: 852.7 | smt a: 853.8 | no smt a: 859.9 | smt c: 860.8 | smt d: 879.6 | no smt b: 892.0 | b: 910.8 | c: 926.9 | a: 938.5
  (CC) gcc options: -O3 -pthread -lz

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
c: 109.25, b: 109.11, a: 108.68, smt b: 103.19, smt a: 103.09, smt d: 102.76, smt c: 102.10, no smt b: 100.42, no smt a: 100.24

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.
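The exact vpxenc invocation behind this test is not reproduced here, but a minimal VP9 encode of the Bosphorus clip at the slowest speed setting (input/output file names are assumptions) would resemble:

```shell
# VP9 encode at speed 0 (slowest/highest quality); --row-mt=1 enables
# row-based multithreading so --threads can actually be used.
vpxenc --codec=vp9 --cpu-used=0 --threads=$(nproc) --row-mt=1 \
  -o bosphorus_1080p.webm Bosphorus_1920x1080_120fps.y4m
```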

VP9 libvpx Encoding 1.13 - Speed: Speed 0 - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
smt b: 13.57, smt d: 13.68, no smt b: 13.72, smt c: 13.97, smt a: 14.08, no smt a: 14.28, b: 14.67, a: 14.69, c: 14.76
Compiler: (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Zstd Compression


Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better):
a: 9.22, b: 9.29, c: 9.30, no smt a: 9.39, no smt b: 9.76, smt a: 9.78, smt d: 9.81, smt c: 9.82, smt b: 9.87
Compiler: (CC) gcc options: -O3 -pthread -lz

Neural Magic DeepSparse


Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
c: 320.05, a: 319.98, b: 319.96, no smt a: 304.21, smt b: 304.08, no smt b: 303.74, smt a: 303.33, smt c: 303.25, smt d: 301.92

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
a: 77.34, b: 77.31, c: 76.62, smt d: 73.62, smt b: 73.35, smt c: 73.34, smt a: 73.31, no smt b: 73.21, no smt a: 73.20

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.
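The SMT-run failures recorded below report a thread count of -1 being rejected; for reference, a plain vvencapp invocation with an explicit thread count (input/output names and the thread value are assumptions) looks roughly like:

```shell
# Encode with the "faster" preset; an explicit positive --threads value
# sidesteps the "Number of threads out of range (-1" failure seen in
# the SMT runs, which appears to stem from a -1 thread count.
vvencapp --preset faster -i Bosphorus_1920x1080_120fps.y4m -o out.266 --threads 64
```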

VVenC 1.7 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, More Is Better):
no smt b: 28.52, no smt a: 28.69, b: 29.85, a: 30.04, c: 30.08
Compiler: (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 1080p - Video Preset: Faster

smt a / smt b / smt c / smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression


Zstd Compression 1.5.4 - Compression Level: 19 - Compression Speed (MB/s, More Is Better):
smt a: 18.8, a: 19.1, b: 19.1, c: 19.1, smt c: 19.2, smt b: 19.5, smt d: 19.6, no smt a: 19.8, no smt b: 19.8
Compiler: (CC) gcc options: -O3 -pthread -lz

Neural Magic DeepSparse


Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
a: 152.14, b: 151.60, c: 151.39, smt a: 145.40, no smt b: 145.36, no smt a: 145.35, smt b: 145.27, smt d: 144.91, smt c: 144.47

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
c: 119.70, b: 119.38, a: 119.21, smt d: 117.22, smt a: 117.12, smt c: 116.97, smt b: 116.48, no smt a: 116.15, no smt b: 115.64

Stress-NG

Stress-NG 0.15.04 - Test: Socket Activity (Bogo Ops/s, More Is Better):
smt c: 8747.83, smt a: 8748.99, smt b: 8750.61, b: 8851.65, smt d: 8864.78, a: 8873.58, c: 8876.10, no smt b: 8924.10, no smt a: 8968.28
Compiler: (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread
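Stress-NG's socket stressor can be run stand-alone; an approximate equivalent of this test case (the duration is a guess, not the value PTS uses) is:

```shell
# --sock 0 spawns one socket-activity stressor per CPU;
# --metrics-brief prints the bogo-ops/s figures that PTS graphs.
stress-ng --sock 0 --metrics-brief -t 60
```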

Zstd Compression


Zstd Compression 1.5.4 - Compression Level: 19 - Decompression Speed (MB/s, More Is Better):
b: 1467.8, c: 1470.5, a: 1472.6, smt c: 1475.2, smt d: 1479.1, no smt a: 1483.1, smt b: 1483.5, no smt b: 1483.8, smt a: 1495.2
Compiler: (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4 - Compression Level: 12 - Decompression Speed (MB/s, More Is Better):
a: 1704.8, c: 1715.6, b: 1716.7, smt a: 1723.7, smt b: 1726.3, no smt b: 1727.8, no smt a: 1728.0, smt d: 1731.2, smt c: 1732.3
Compiler: (CC) gcc options: -O3 -pthread -lz

VVenC


VVenC 1.7 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better):
c: 5.808, b: 5.809, a: 5.820, no smt a: 5.890, no smt b: 5.895
Compiler: (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 4K - Video Preset: Fast

smt a / smt b / smt c / smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression


Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better):
b: 1378.2, a: 1383.7, c: 1384.0, smt c: 1389.0, smt d: 1391.7, smt b: 1392.4, smt a: 1393.5, no smt a: 1395.8, no smt b: 1397.6
Compiler: (CC) gcc options: -O3 -pthread -lz

VVenC


VVenC 1.7 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better):
no smt b: 12.31, a: 12.34, b: 12.39, c: 12.40, no smt a: 12.48
Compiler: (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 4K - Video Preset: Faster

smt a / smt b / smt c / smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression


Zstd Compression 1.5.4 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better):
no smt a: 1500.4, no smt b: 1511.5, smt d: 1513.5, a: 1514.8, smt a: 1515.6, smt c: 1516.2, b: 1516.6, c: 1517.4, smt b: 1519.1
Compiler: (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4 - Compression Level: 8 - Decompression Speed (MB/s, More Is Better):
b: 1651.0, c: 1661.9, smt c: 1664.0, smt d: 1664.2, smt a: 1664.3, no smt b: 1667.6, a: 1669.3, no smt a: 1669.7, smt b: 1671.0
Compiler: (CC) gcc options: -O3 -pthread -lz

VP9 libvpx Encoding


VP9 libvpx Encoding 1.13 - Speed: Speed 5 - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
a: 29.37, c: 29.46, b: 29.50, smt a: 29.54, smt c: 29.54, smt b: 29.60, no smt a: 29.62, smt d: 29.66, no smt b: 29.71
Compiler: (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Neural Magic DeepSparse


Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
smt b: 30.75, smt a: 30.75, smt d: 30.72, no smt b: 30.64, a: 30.59, smt c: 30.58, c: 30.49, b: 30.46, no smt a: 30.42

VVenC


VVenC 1.7 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, More Is Better):
no smt a: 12.36, no smt b: 12.37, b: 12.40, a: 12.44, c: 12.44
Compiler: (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 1080p - Video Preset: Fast

smt a / smt b / smt c / smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression


Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better):
b: 1673.7, c: 1675.4, smt b: 1676.5, smt d: 1677.6, a: 1677.9, smt a: 1682.5, no smt b: 1682.8, smt c: 1683.7, no smt a: 1684.8
Compiler: (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4 - Compression Level: 3, Long Mode - Decompression Speed (MB/s, More Is Better):
a: 1536.6, b: 1537.5, no smt b: 1538.8, smt a: 1539.5, smt b: 1540.4, c: 1540.9, smt c: 1540.9, no smt a: 1542.5, smt d: 1543.6
Compiler: (CC) gcc options: -O3 -pthread -lz

Stress-NG

Test: x86_64 RdRand

a: The test run did not produce a result. E: stress-ng: error: [1836509] No stress workers invoked (one or more were unsupported)

b: The test run did not produce a result. E: stress-ng: error: [3544257] No stress workers invoked (one or more were unsupported)

c: The test run did not produce a result. E: stress-ng: error: [1223131] No stress workers invoked (one or more were unsupported)

no smt a: The test run did not produce a result. E: stress-ng: error: [4177065] No stress workers invoked (one or more were unsupported)

no smt b: The test run did not produce a result. E: stress-ng: error: [19683] No stress workers invoked (one or more were unsupported)

smt a: The test run did not produce a result. E: stress-ng: error: [55524] No stress workers invoked (one or more were unsupported)

smt b: The test run did not produce a result. E: stress-ng: error: [3903578] No stress workers invoked (one or more were unsupported)

smt c: The test run did not produce a result. E: stress-ng: error: [4007271] No stress workers invoked (one or more were unsupported)

smt d: The test run did not produce a result. E: stress-ng: error: [381360] No stress workers invoked (one or more were unsupported)
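"No stress workers invoked" means stress-ng judged the stressor unsupported on this system; whether the hardware actually advertises RDRAND can be checked before retrying the stressor in isolation:

```shell
# Confirm the rdrand CPU flag is advertised, then try a single
# rdrand stressor alone for a short run.
grep -m1 -o rdrand /proc/cpuinfo
stress-ng --rdrand 1 -t 5 --metrics-brief
```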

Test: IO_uring

a / b / c / no smt a / no smt b / smt a / smt b / smt c / smt d: The test run did not produce a result.

Test: Zlib

a / b / c / no smt a / no smt b / smt a / smt b / smt c / smt d: The test run did not produce a result.

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build that enables all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Build: allmodconfig

a / b / c / no smt a / no smt b / smt a / smt b / smt c / smt d: The test quit with a non-zero exit status.
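PTS does not record the underlying compile error here; the allmodconfig case can be reproduced by hand from a kernel source tree to surface it:

```shell
# Build every possible kernel module; tee keeps the first failing
# compile line in build.log for inspection instead of discarding it.
make allmodconfig
time make -j"$(nproc)" 2>&1 | tee build.log
```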

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

All runs (a, b, c, no smt a, no smt b, smt a, smt b, smt c, smt d): The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer ISPC - Model: Asian Dragon

All runs (a, b, c, no smt a, no smt b, smt a, smt b, smt c, smt d): The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer - Model: Asian Dragon Obj

All runs (a, b, c, no smt a, no smt b, smt a, smt b, smt c, smt d): The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer - Model: Asian Dragon

All runs (a, b, c, no smt a, no smt b, smt a, smt b, smt c, smt d): The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer ISPC - Model: Crown

All runs (a, b, c, no smt a, no smt b, smt a, smt b, smt c, smt d): The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer - Model: Crown

All runs (a, b, c, no smt a, no smt b, smt a, smt b, smt c, smt d): The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory
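Every Embree failure above is the same missing-library error rather than a performance problem; on RHEL 9 the likely fix is installing the Mesa GLU runtime (package name per Red Hat's repositories):

```shell
# libGLU.so.1 is provided by the mesa-libGLU package on RHEL/Fedora.
sudo dnf install mesa-libGLU
```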

169 Results Shown

Stress-NG
oneDNN:
  IP Shapes 1D - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  IP Shapes 1D - f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  IP Shapes 1D - u8s8f32 - CPU
Stress-NG:
  Context Switching
  NUMA
oneDNN
OpenVINO:
  Vehicle Detection FP16 - CPU:
    ms
    FPS
Stress-NG
RocksDB
Stress-NG:
  Matrix Math
  CPU Cache
  CPU Stress
  Vector Math
  Mutex
  Crypto
oneDNN
Stress-NG:
  SENDFILE
  Function Call
oneDNN
Stress-NG
Neural Magic DeepSparse
Stress-NG
Neural Magic DeepSparse
oneDNN
Stress-NG
Neural Magic DeepSparse
oneDNN
Stress-NG
OpenVINO:
  Face Detection FP16-INT8 - CPU
  Face Detection FP16 - CPU
  Face Detection FP16 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
  Face Detection FP16-INT8 - CPU
oneDNN:
  IP Shapes 3D - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - f32 - CPU
OpenVINO:
  Weld Porosity Detection FP16 - CPU:
    FPS
    ms
Neural Magic DeepSparse
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
OpenVINO:
  Person Detection FP32 - CPU
  Vehicle Detection FP16-INT8 - CPU
  Vehicle Detection FP16-INT8 - CPU
  Person Detection FP32 - CPU
Neural Magic DeepSparse
Stress-NG
OpenVINO:
  Person Detection FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    FPS
RocksDB
Neural Magic DeepSparse
oneDNN
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
oneDNN:
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
Stress-NG:
  MEMFD
  Memory Copying
  Forking
  Futex
GROMACS
RocksDB
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
Memcached
Stress-NG
oneDNN
Kvazaar
Memcached
oneDNN
Kvazaar
uvg266
OpenVINO
oneDNN
Memcached
Stress-NG
Timed Linux Kernel Compilation
Stress-NG
OpenVKL
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
uvg266
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
uvg266:
  Bosphorus 1080p - Super Fast
  Bosphorus 1080p - Ultra Fast
Zstd Compression
RocksDB:
  Update Rand
  Seq Fill
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    ms/batch
    items/sec
uvg266
ClickHouse
RocksDB
ClickHouse
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
ClickHouse
VP9 libvpx Encoding
OpenVINO
uvg266
OpenVKL
Zstd Compression:
  8 - Compression Speed
  3 - Compression Speed
Timed FFmpeg Compilation
Kvazaar
uvg266
Kvazaar:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Super Fast
uvg266
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
Zstd Compression
Kvazaar
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
RocksDB
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
Kvazaar:
  Bosphorus 4K - Very Fast
  Bosphorus 1080p - Ultra Fast
  Bosphorus 4K - Super Fast
  Bosphorus 4K - Ultra Fast
OpenVINO
Neural Magic DeepSparse:
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
VP9 libvpx Encoding
uvg266:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Medium
OpenVINO
Zstd Compression
Neural Magic DeepSparse
VP9 libvpx Encoding
Zstd Compression
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
VVenC
Zstd Compression
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
Stress-NG
Zstd Compression:
  19 - Decompression Speed
  12 - Decompression Speed
VVenC
Zstd Compression
VVenC
Zstd Compression:
  3 - Decompression Speed
  8 - Decompression Speed
VP9 libvpx Encoding
Neural Magic DeepSparse
VVenC
Zstd Compression:
  8, Long Mode - Decompression Speed
  3, Long Mode - Decompression Speed