9654 new

2 x AMD EPYC 9654 96-Core testing with an AMD Titanite_4G (RTI1004D BIOS) motherboard and llvmpipe graphics on Red Hat Enterprise Linux 9.1 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2303114-NE-9654NEW5019
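If the Phoronix Test Suite is installed locally, that comparison is a one-liner. The sketch below simply prints the exact invocation using the result ID quoted above:

```shell
# Result ID from this page; passing it to `phoronix-test-suite benchmark`
# fetches the result file and benchmarks the local system against it.
RESULT_ID="2303114-NE-9654NEW5019"
echo "phoronix-test-suite benchmark ${RESULT_ID}"
```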
This result file contains tests within the following categories:

Timed Code Compilation 2 Tests
C/C++ Compiler Tests 5 Tests
CPU Massive 5 Tests
Creator Workloads 8 Tests
Database Test Suite 3 Tests
Encoding 4 Tests
HPC - High Performance Computing 4 Tests
Common Kernel Benchmarks 2 Tests
Machine Learning 3 Tests
Multi-Core 12 Tests
Intel oneAPI 4 Tests
Programmer / Developer System Benchmarks 3 Tests
Python Tests 2 Tests
Server 3 Tests
Server CPU Tests 4 Tests
Video Encoding 4 Tests

Test Runs

Identifier    Date             Test Duration
a             March 09 2023    2 Hours, 22 Minutes
b             March 10 2023    2 Hours, 23 Minutes
c             March 10 2023    2 Hours, 22 Minutes
no smt a      March 10 2023    2 Hours, 24 Minutes
no smt b      March 10 2023    2 Hours, 24 Minutes
smt a         March 11 2023    2 Hours, 30 Minutes
smt b         March 11 2023    2 Hours, 30 Minutes
smt c         March 11 2023    2 Hours, 30 Minutes
smt d         March 11 2023    2 Hours, 30 Minutes
Average                        2 Hours, 26 Minutes



System Details

Runs a, b, c:
  Processor: AMD EPYC 9654 96-Core @ 2.40GHz (96 Cores / 192 Threads)
  Memory: 768GB
  Graphics: ASPEED (VGA HDMI)
  Screen Resolution: 1600x1200

Runs no smt a, no smt b:
  Processor: 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores, SMT disabled)
  Memory: 1520GB
  Graphics: llvmpipe
  OpenGL: 4.5 Mesa 22.1.5 (LLVM 14.0.6 256 bits)
  Screen Resolution: 1024x768

Runs smt a, smt b, smt c, smt d:
  Processor: 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores / 384 Threads)
  Memory: 1520GB
  Graphics: llvmpipe
  OpenGL: 4.5 Mesa 22.1.5 (LLVM 14.0.6 256 bits)
  Screen Resolution: 1024x768

Common to all runs:
  Motherboard: AMD Titanite_4G (RTI1004D BIOS)
  Chipset: AMD Device 14a4
  Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007
  Network: Broadcom NetXtreme BCM5720 PCIe
  OS: Red Hat Enterprise Linux 9.1
  Kernel: 5.14.0-162.6.1.el9_1.x86_64 (x86_64)
  Desktop: GNOME Shell 40.10
  Display Server: X Server 1.20.11
  Compiler: GCC 11.3.1 20220421
  File-System: xfs

Kernel Details: Transparent Huge Pages: always

Compiler Details: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl

Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xa101111

Python Details: Python 3.9.14

Security Details: SELinux enabled on all runs. Every run reports: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected. The only per-run difference is spectre_v2 STIBP: always-on for the a/b/c and smt runs, disabled for the no smt runs.

[Result Overview chart omitted. Phoronix Test Suite, normalized scale 100% to 180%, across runs a, b, c, no smt a/b, and smt a-d. Test suites covered: oneDNN, OpenVINO, GROMACS, Memcached, Timed Linux Kernel Compilation, OpenVKL, Stress-NG, ClickHouse, Timed FFmpeg Compilation, Neural Magic DeepSparse, VP9 libvpx Encoding, uvg266, Kvazaar, RocksDB, and Zstd Compression.]

[Combined side-by-side results table omitted; individual per-test results follow below. The full result set spans Stress-NG, oneDNN, OpenVINO, Neural Magic DeepSparse, RocksDB, GROMACS, Memcached, ClickHouse, Kvazaar, uvg266, vvenc, VP9 libvpx, Zstd Compression, OpenVKL, Embree, and the timed Linux kernel and FFmpeg compilation tests, across all nine runs.]

Stress-NG

Stress-NG 0.15.04 - Test: MMAP - Bogo Ops/s, more is better

  a          1663.08
  b          1664.75
  c          1668.92
  no smt a   4520.48
  no smt b   3591.71
  smt a      8360.76
  smt b      9017.12
  smt c      7633.16
  smt d      7273.19

1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread
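The relative scaling between configurations can be computed directly from these values. A minimal sketch (values copied from the MMAP results above), comparing the average of the single-socket runs against the average of the dual-socket SMT runs:

```python
# Stress-NG MMAP results (Bogo Ops/s), copied from the table above.
results = {
    "a": 1663.08, "b": 1664.75, "c": 1668.92,   # 1P, SMT on
    "no smt a": 4520.48, "no smt b": 3591.71,   # 2P, SMT off
    "smt a": 8360.76, "smt b": 9017.12,         # 2P, SMT on
    "smt c": 7633.16, "smt d": 7273.19,
}

def mean(ids):
    """Arithmetic mean of the listed run identifiers."""
    return sum(results[i] for i in ids) / len(ids)

one_p = mean(["a", "b", "c"])
two_p_smt = mean(["smt a", "smt b", "smt c", "smt d"])
print(f"2P+SMT vs 1P: {two_p_smt / one_p:.2f}x")  # → 4.85x
```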

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU - ms, fewer is better

  a          8.74576    (MIN: 3.65)
  b          9.19570    (MIN: 4.13)
  c          3.83017    (MIN: 2.72)
  no smt a   6.19257    (MIN: 3.65)
  no smt b   7.98861    (MIN: 3.88)
  smt a      15.42100   (MIN: 10.4)
  smt b      19.54900   (MIN: 10.67)
  smt c      18.65940   (MIN: 11.23)
  smt d      11.75730   (MIN: 8.17)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - ms, fewer is better

  a          674.09     (MIN: 667.01)
  b          671.65     (MIN: 664.19)
  c          669.15     (MIN: 661.71)
  no smt a   930.26     (MIN: 898.89)
  no smt b   901.04     (MIN: 864.28)
  smt a      3187.00    (MIN: 3034.82)
  smt b      3034.62    (MIN: 2821.2)
  smt c      3011.21    (MIN: 2853.26)
  smt d      3349.36    (MIN: 3325.75)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU - ms, fewer is better

  a          2.01581    (MIN: 1.81)
  b          1.92340    (MIN: 1.72)
  c          1.88686    (MIN: 1.68)
  no smt a   2.04003    (MIN: 1.78)
  no smt b   1.93126    (MIN: 1.77)
  smt a      7.60566    (MIN: 6.54)
  smt b      7.42270    (MIN: 6.44)
  smt c      9.34981    (MIN: 7.73)
  smt d      8.26078    (MIN: 7.09)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU - ms, fewer is better

  a          671.69     (MIN: 664.46)
  b          670.20     (MIN: 663.54)
  c          671.23     (MIN: 662.9)
  no smt a   913.20     (MIN: 884.92)
  no smt b   903.36     (MIN: 874.93)
  smt a      3269.05    (MIN: 3243.11)
  smt b      3249.28    (MIN: 3226.78)
  smt c      3153.22    (MIN: 3059.88)
  smt d      3171.15    (MIN: 3148.29)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU - ms, fewer is better

  a          675.92     (MIN: 668.99)
  b          668.95     (MIN: 662.29)
  c          667.95     (MIN: 660.93)
  no smt a   973.32     (MIN: 937.3)
  no smt b   939.55     (MIN: 904.78)
  smt a      3133.65    (MIN: 3037.06)
  smt b      3222.93    (MIN: 2927.41)
  smt c      3185.64    (MIN: 2949.58)
  smt d      3172.19    (MIN: 3155.85)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU - ms, fewer is better

  a          3.11497    (MIN: 2.47)
  b          4.27858    (MIN: 3.37)
  c          4.39720    (MIN: 3.21)
  no smt a   4.87883    (MIN: 3.53)
  no smt b   4.79302    (MIN: 3.4)
  smt a      11.43260   (MIN: 7.59)
  smt b      7.30147    (MIN: 5.77)
  smt c      12.01300   (MIN: 8.02)
  smt d      12.15710   (MIN: 8.52)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
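The oneDNN results are reported in milliseconds, where fewer is better, so the dual-socket and SMT configurations regress on these harnesses rather than improve. A minimal sketch quantifying that, using the Recurrent Neural Network Inference f32 times from above:

```python
# oneDNN RNN Inference f32 times in ms (fewer is better),
# copied from the results above for three representative runs.
times = {"a": 674.09, "no smt a": 930.26, "smt a": 3187.00}

# Relative slowdown of each configuration versus run "a".
for name, t in times.items():
    print(f"{name}: {t / times['a']:.2f}x the time of run a")
# a: 1.00x, no smt a: 1.38x, smt a: 4.73x
```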

Stress-NG

Stress-NG 0.15.04 - Test: Context Switching - Bogo Ops/s, more is better

  a          18941003.97
  b          16313126.86
  c          16862185.54
  no smt a   47222624.49
  no smt b   44683906.68
  smt a      12895047.73
  smt b      12221058.34
  smt c      12328933.65
  smt d      12728401.45

1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: NUMA - Bogo Ops/s, more is better

  a          483.90
  b          498.67
  c          478.05
  no smt a   20.51
  no smt b   19.78
  smt a      24.82
  smt b      24.77
  smt c      24.79
  smt d      24.71

1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN


oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU - ms, fewer is better

  a          7.35746    (MIN: 4.85)
  b          7.28243    (MIN: 6.58)
  c          7.38116    (MIN: 6.77)
  no smt a   9.25292    (MIN: 7.79)
  no smt b   9.31751    (MIN: 8.07)
  smt a      20.26950   (MIN: 17.68)
  smt b      20.57200   (MIN: 18.25)
  smt c      21.03010   (MIN: 17.96)
  smt d      20.97260   (MIN: 18.21)

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU - ms, fewer is better

  a          16.30   (MIN: 7.91 / MAX: 51.62)
  b          15.54   (MIN: 8.28 / MAX: 57.72)
  c          15.10   (MIN: 6.94 / MAX: 60.59)
  no smt a   6.01    (MIN: 5.2 / MAX: 37.8)
  no smt b   6.01    (MIN: 5.02 / MAX: 36.88)
  smt a      6.02    (MIN: 5.27 / MAX: 25.12)
  smt b      6.03    (MIN: 5.17 / MAX: 38.35)
  smt c      5.99    (MIN: 5.13 / MAX: 31.29)
  smt d      6.00    (MIN: 5.21 / MAX: 25.51)

1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU - FPS, more is better

  a          2941.47
  b          3085.53
  c          3174.04
  no smt a   7979.05
  no smt b   7979.82
  smt a      7953.21
  smt b      7949.05
  smt c      7993.18
  smt d      7990.47

1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Stress-NG

Stress-NG 0.15.04 - Test: Pthread (Bogo Ops/s, more is better)
  a: 109397.78 | b: 109356.78 | c: 109609.15
  no smt a: 68076.03 | no smt b: 67451.30
  smt a: 74978.51 | smt b: 180407.59 | smt c: 77834.39 | smt d: 91735.75
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Random Read (Op/s, more is better)
  a: 468231434 | b: 466888888 | c: 468069792
  no smt a: 1209611055 | no smt b: 1213540299
  smt a: 1225662852 | smt b: 1231168093 | smt c: 1234570512 | smt d: 1231916197
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Stress-NG

Stress-NG 0.15.04 - Test: Matrix Math (Bogo Ops/s, more is better)
  a: 382305.69 | b: 382304.04 | c: 382328.50
  no smt a: 932248.18 | no smt b: 925984.24
  smt a: 946032.88 | smt b: 946660.88 | smt c: 952461.48 | smt d: 951700.17
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: CPU Cache (Bogo Ops/s, more is better)
  a: 77.51 | b: 67.21 | c: 97.04
  no smt a: 40.94 | no smt b: 55.84
  smt a: 47.58 | smt b: 44.38 | smt c: 42.55 | smt d: 40.56
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: CPU Stress (Bogo Ops/s, more is better)
  a: 205134.87 | b: 217072.27 | c: 217304.19
  no smt a: 328297.04 | no smt b: 326819.72
  smt a: 487359.39 | smt b: 487987.32 | smt c: 490300.48 | smt d: 489998.75
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: Vector Math (Bogo Ops/s, more is better)
  a: 556875.59 | b: 556797.27 | c: 556833.50
  no smt a: 920216.27 | no smt b: 920642.34
  smt a: 1291689.33 | smt b: 1291704.57 | smt c: 1295970.28 | smt d: 1300693.93
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: Mutex (Bogo Ops/s, more is better)
  a: 59479783.64 | b: 60031543.97 | c: 59929401.40
  no smt a: 63581395.67 | no smt b: 65500297.73
  smt a: 136339636.91 | smt b: 135855921.28 | smt c: 138939958.50 | smt d: 137579874.56
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: Crypto (Bogo Ops/s, more is better)
  a: 203073.15 | b: 203147.15 | c: 203095.44
  no smt a: 435615.35 | no smt b: 437065.75
  smt a: 466292.12 | smt b: 466609.22 | smt c: 468159.21 | smt d: 468006.00
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  a: 1.56461 (MIN: 1.41) | b: 1.56257 (MIN: 1.41) | c: 1.56742 (MIN: 1.39)
  no smt a: 2.19645 (MIN: 2) | no smt b: 2.12379 (MIN: 1.9)
  smt a: 3.39151 (MIN: 2.93) | smt b: 3.49453 (MIN: 3.11) | smt c: 3.60194 (MIN: 3.08) | smt d: 3.50563 (MIN: 3.19)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 - Test: SENDFILE (Bogo Ops/s, more is better)
  a: 1950323.96 | b: 1913590.77 | c: 1891202.37
  no smt a: 3284433.20 | no smt b: 3282827.50
  smt a: 4329963.26 | smt b: 4329824.38 | smt c: 4351871.12 | smt d: 4351920.42
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: Function Call (Bogo Ops/s, more is better)
  a: 621015.34 | b: 621003.92 | c: 621041.93
  no smt a: 829106.63 | no smt b: 829445.38
  smt a: 1414423.65 | smt b: 1413555.44 | smt c: 1422505.51 | smt d: 1425621.44
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  a: 0.263997 (MIN: 0.18) | b: 0.263373 (MIN: 0.2) | c: 0.262920 (MIN: 0.2)
  no smt a: 0.316665 (MIN: 0.24) | no smt b: 0.293564 (MIN: 0.22)
  smt a: 0.591738 (MIN: 0.43) | smt b: 0.600199 (MIN: 0.38) | smt c: 0.593299 (MIN: 0.39) | smt d: 0.602831 (MIN: 0.48)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 - Test: Atomic (Bogo Ops/s, more is better)
  a: 174.72 | b: 223.29 | c: 183.33
  no smt a: 400.03 | no smt b: 395.95
  smt a: 184.34 | smt b: 186.64 | smt c: 184.24 | smt d: 182.80
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.
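In the "Asynchronous Multi-Stream" scenario used throughout these results, several streams submit inferences concurrently and throughput is total items divided by wall time. A toy sketch of that accounting with a dummy compute loop standing in for a real model (all names here are ours, not part of DeepSparse):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_streams(n_streams: int, items_per_stream: int) -> float:
    """Run dummy 'inference' work on several concurrent streams; return items/sec."""
    def stream_work(n: int) -> int:
        done = 0
        for _ in range(n):
            sum(i * i for i in range(100))  # stand-in for one inference
            done += 1
        return done

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_streams) as pool:
        totals = list(pool.map(stream_work, [items_per_stream] * n_streams))
    elapsed = time.perf_counter() - start
    return sum(totals) / elapsed  # aggregate items/sec across all streams

print(run_streams(4, 50) > 0)  # absolute rate is machine-dependent
```

The metric rewards total system throughput rather than single-request latency, which is why the SMT-enabled runs roughly double the a/b/c figures below.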

Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  a: 42.96 | b: 42.83 | c: 42.92
  no smt a: 97.40 | no smt b: 97.34
  smt a: 97.29 | smt b: 97.35 | smt c: 96.91 | smt d: 97.89

Stress-NG

Stress-NG 0.15.04 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, more is better)
  a: 1122.39 | b: 1132.35 | c: 1125.86
  no smt a: 1978.80 | no smt b: 1963.75
  smt a: 2564.48 | smt b: 2520.38 | smt c: 2517.26 | smt d: 2516.09
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  a: 42.99 | b: 42.90 | c: 42.74
  no smt a: 97.44 | no smt b: 97.38
  smt a: 97.02 | smt b: 96.91 | smt c: 97.00 | smt d: 96.72

oneDNN

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  a: 0.555277 (MIN: 0.53) | b: 0.551966 (MIN: 0.49) | c: 0.556851 (MIN: 0.53)
  no smt a: 0.664359 (MIN: 0.56) | no smt b: 0.670574 (MIN: 0.55)
  smt a: 1.200490 (MIN: 1.04) | smt b: 1.181350 (MIN: 1.08) | smt c: 1.201460 (MIN: 1.08) | smt d: 1.233400 (MIN: 1.08)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 - Test: Glibc C String Functions (Bogo Ops/s, more is better)
  a: 16257100.25 | b: 16168655.17 | c: 16537965.14
  no smt a: 26755146.69 | no smt b: 28009942.56
  smt a: 35687958.37 | smt b: 35996976.13 | smt c: 34602827.77 | smt d: 36111781.82
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  a: 1009.85 | b: 1012.35 | c: 1013.74
  no smt a: 2204.75 | no smt b: 2203.16
  smt a: 2230.31 | smt b: 2245.80 | smt c: 2253.14 | smt d: 2255.42

oneDNN

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  a: 0.290200 (MIN: 0.25) | b: 0.291387 (MIN: 0.23) | c: 0.285432 (MIN: 0.24)
  no smt a: 0.316779 (MIN: 0.25) | no smt b: 0.340498 (MIN: 0.28)
  smt a: 0.621776 (MIN: 0.45) | smt b: 0.620907 (MIN: 0.41) | smt c: 0.634992 (MIN: 0.55) | smt d: 0.627133 (MIN: 0.4)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 - Test: Hash (Bogo Ops/s, more is better)
  a: 18954936.00 | b: 18955118.10 | c: 18961773.18
  no smt a: 27408413.10 | no smt b: 27422305.53
  smt a: 41966139.67 | smt b: 41989522.68 | smt c: 41972765.93 | smt d: 41965286.06
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

OpenVINO

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better)
  a: 95.76 | b: 95.48 | c: 95.59
  no smt a: 208.82 | no smt b: 210.14
  smt a: 208.13 | smt b: 208.19 | smt c: 210.19 | smt d: 209.24
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (ms, fewer is better)
  a: 962.10 (MIN: 879.24 / MAX: 1018.81) | b: 962.63 (MIN: 893.43 / MAX: 1015.71) | c: 962.15 (MIN: 888.7 / MAX: 1017.92)
  no smt a: 438.29 (MIN: 416.93 / MAX: 496.86) | no smt b: 438.99 (MIN: 427.57 / MAX: 484.31)
  smt a: 439.41 (MIN: 410.81 / MAX: 465.4) | smt b: 439.50 (MIN: 424.22 / MAX: 477.66) | smt c: 437.51 (MIN: 400.05 / MAX: 473.91) | smt d: 437.68 (MIN: 394.89 / MAX: 478.58)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (FPS, more is better)
  a: 49.72 | b: 49.72 | c: 49.77
  no smt a: 109.08 | no smt b: 109.04
  smt a: 108.93 | smt b: 108.89 | smt c: 109.36 | smt d: 109.23
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better)
  a: 9494.22 | b: 9496.95 | c: 9485.65
  no smt a: 20836.10 | no smt b: 20849.75
  smt a: 20610.23 | smt b: 20610.62 | smt c: 20684.29 | smt d: 20679.68
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better)
  a: 499.32 (MIN: 264.54 / MAX: 537.26) | b: 500.83 (MIN: 418.76 / MAX: 546.62) | c: 500.30 (MIN: 410.04 / MAX: 531.93)
  no smt a: 229.46 (MIN: 217.93 / MAX: 265.81) | no smt b: 228.01 (MIN: 212.99 / MAX: 267.17)
  smt a: 230.21 (MIN: 211.56 / MAX: 253.58) | smt b: 230.06 (MIN: 210.58 / MAX: 251.42) | smt c: 227.93 (MIN: 210.46 / MAX: 252.96) | smt d: 228.94 (MIN: 214.92 / MAX: 248.29)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  a: 1.87777 (MIN: 1.23) | b: 1.72975 (MIN: 1.27) | c: 1.63860 (MIN: 1.2)
  no smt a: 2.02286 (MIN: 1.56) | no smt b: 2.02833 (MIN: 1.43)
  smt a: 3.38833 (MIN: 2.76) | smt b: 3.59643 (MIN: 2.7) | smt c: 3.36535 (MIN: 2.7) | smt d: 2.97427 (MIN: 2.3)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  a: 907.49 (MIN: 897.95) | b: 903.27 (MIN: 894.76) | c: 906.14 (MIN: 898.41)
  no smt a: 1148.26 (MIN: 1108.42) | no smt b: 1124.75 (MIN: 1091.36)
  smt a: 1912.16 (MIN: 1891.03) | smt b: 1890.28 (MIN: 1864.29) | smt c: 1888.21 (MIN: 1860.29) | smt d: 1957.77 (MIN: 1929.47)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better)
  a: 4914.17 | b: 4915.42 | c: 4910.13
  no smt a: 10556.93 | no smt b: 10549.24
  smt a: 10544.03 | smt b: 10549.15 | smt c: 10603.84 | smt d: 10587.30
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better)
  a: 9.76 (MIN: 5.03 / MAX: 28.1) | b: 9.75 (MIN: 5.25 / MAX: 35.53) | c: 9.76 (MIN: 4.98 / MAX: 36.12)
  no smt a: 4.54 (MIN: 4.12 / MAX: 27.91) | no smt b: 4.54 (MIN: 4.09 / MAX: 55.78)
  smt a: 4.54 (MIN: 4.07 / MAX: 35.16) | smt b: 4.54 (MIN: 4.11 / MAX: 42.14) | smt c: 4.52 (MIN: 4.12 / MAX: 33.17) | smt d: 4.52 (MIN: 4.03 / MAX: 45.52)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  a: 440.85 | b: 439.04 | c: 438.44
  no smt a: 941.24 | no smt b: 938.91
  smt a: 929.02 | smt b: 927.68 | smt c: 937.99 | smt d: 931.82

oneDNN

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  a: 911.25 (MIN: 903.01) | b: 912.11 (MIN: 902.43) | c: 915.08 (MIN: 906.26)
  no smt a: 1151.25 (MIN: 1070.35) | no smt b: 1119.17 (MIN: 1086.08)
  smt a: 1830.41 (MIN: 1807.97) | smt b: 1937.65 (MIN: 1908.65) | smt c: 1926.15 (MIN: 1901.09) | smt d: 1883.76 (MIN: 1850.75)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  a: 909.69 (MIN: 901.1) | b: 917.08 (MIN: 909.14) | c: 911.76 (MIN: 901.88)
  no smt a: 1147.34 (MIN: 1109.85) | no smt b: 1139.63 (MIN: 1100.83)
  smt a: 1888.10 (MIN: 1865.76) | smt b: 1894.07 (MIN: 1870.66) | smt c: 1925.68 (MIN: 1901.2) | smt d: 1912.74 (MIN: 1890.93)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  a: 149.74 | b: 149.82 | c: 149.69
  no smt a: 308.43 | no smt b: 308.30
  smt a: 314.62 | smt b: 314.25 | smt c: 315.33 | smt d: 316.03

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  a: 619.75 | b: 619.94 | c: 625.55
  no smt a: 1291.97 | no smt b: 1292.80
  smt a: 1306.67 | smt b: 1306.58 | smt c: 1305.71 | smt d: 1301.18

Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  a: 314.84 | b: 316.29 | c: 316.63
  no smt a: 649.30 | no smt b: 647.76
  smt a: 658.21 | smt b: 658.34 | smt c: 662.26 | smt d: 660.21

OpenVINO

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (FPS, more is better)
  a: 27.97 | b: 28.23 | c: 27.89
  no smt a: 57.18 | no smt b: 57.19
  smt a: 57.29 | smt b: 57.31 | smt c: 57.71 | smt d: 57.62
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better)
  a: 8.11 (MIN: 5.39 / MAX: 69.87) | b: 8.13 (MIN: 5.35 / MAX: 55.84) | c: 8.13 (MIN: 3.83 / MAX: 59.87)
  no smt a: 3.95 (MIN: 3.61 / MAX: 38) | no smt b: 3.95 (MIN: 3.68 / MAX: 42.53)
  smt a: 3.95 (MIN: 3.61 / MAX: 34.38) | smt b: 3.96 (MIN: 3.66 / MAX: 32.83) | smt c: 3.93 (MIN: 3.61 / MAX: 42.82) | smt d: 3.93 (MIN: 3.61 / MAX: 23.62)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better)
  a: 5908.89 | b: 5894.27 | c: 5898.79
  no smt a: 12119.66 | no smt b: 12128.38
  smt a: 12117.16 | smt b: 12109.64 | smt c: 12168.05 | smt d: 12172.92
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (ms, fewer is better)
  a: 1701.70 (MIN: 1395.71 / MAX: 2063.97) | b: 1685.26 (MIN: 891.16 / MAX: 1979.37) | c: 1704.02 (MIN: 828.99 / MAX: 1969.02)
  no smt a: 833.99 (MIN: 723.91 / MAX: 1011.94) | no smt b: 834.17 (MIN: 732.01 / MAX: 1006.46)
  smt a: 833.15 (MIN: 725.84 / MAX: 1017.38) | smt b: 832.70 (MIN: 723.65 / MAX: 1031.69) | smt c: 826.67 (MIN: 724.1 / MAX: 1006.75) | smt d: 828.01 (MIN: 716.19 / MAX: 1018.34)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  a: 401.78 | b: 401.21 | c: 400.04
  no smt a: 813.98 | no smt b: 817.63
  smt a: 817.75 | smt b: 822.16 | smt c: 818.60 | smt d: 816.76

Stress-NG

Stress-NG 0.15.04 - Test: Malloc (Bogo Ops/s, more is better)
  a: 312709034.05 | b: 314418461.02 | c: 313768771.26
  no smt a: 456657338.84 | no smt b: 456651508.04
  smt a: 634718750.41 | smt b: 634995429.66 | smt c: 639757070.52 | smt d: 640853365.54
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

OpenVINO

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (FPS, more is better)
  a: 28.47 | b: 28.42 | c: 28.33
  no smt a: 57.58 | no smt b: 57.49
  smt a: 57.72 | smt b: 57.67 | smt c: 58.05 | smt d: 58.00
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (ms, fewer is better)
  a: 1671.46 (MIN: 924.15 / MAX: 1977.46) | b: 1675.96 (MIN: 1231.52 / MAX: 1967.75) | c: 1679.03 (MIN: 865.58 / MAX: 1995.12)
  no smt a: 828.67 (MIN: 730.29 / MAX: 1036.1) | no smt b: 830.04 (MIN: 722.6 / MAX: 1015.67)
  smt a: 827.07 (MIN: 724.84 / MAX: 1037.91) | smt b: 827.03 (MIN: 723.77 / MAX: 1003.25) | smt c: 821.93 (MIN: 725.25 / MAX: 1010.56) | smt d: 822.50 (MIN: 717.65 / MAX: 997.43)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better)
  a: 74495.17 | b: 74353.41 | c: 74486.08
  no smt a: 127833.52 | no smt b: 127770.39
  smt a: 148316.88 | smt b: 147582.87 | smt c: 148047.13 | smt d: 149647.68
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

RocksDB

RocksDB 7.9.2 - Test: Read While Writing (Op/s, more is better)
  a: 9296185 | b: 8620352 | c: 8316379
  no smt a: 7643831 | no smt b: 7913568
  smt a: 15317250 | smt b: 13830689 | smt c: 13948914 | smt d: 13098097
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  a: 1566.56 | b: 1573.75 | c: 1571.83
  no smt a: 3107.30 | no smt b: 3086.36
  smt a: 3114.68 | smt b: 3114.43 | smt c: 3132.38 | smt d: 3116.09

oneDNN

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  a: 0.415446 (MIN: 0.4) | b: 0.578466 (MIN: 0.4) | c: 0.417188 (MIN: 0.4)
  no smt a: 0.291687 (MIN: 0.27) | no smt b: 0.291229 (MIN: 0.27)
  smt a: 0.451472 (MIN: 0.4) | smt b: 0.450261 (MIN: 0.37) | smt c: 0.447820 (MIN: 0.37) | smt d: 0.457554 (MIN: 0.34)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better)
  a: 540.41 | b: 541.03 | c: 535.78
  no smt a: 1045.81 | no smt b: 1048.74
  smt a: 1049.16 | smt b: 1050.37 | smt c: 1058.57 | smt d: 1057.74
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better)
  a: 88.71 (MIN: 44.2 / MAX: 132.86) | b: 88.61 (MIN: 47.57 / MAX: 124.67) | c: 89.46 (MIN: 42.16 / MAX: 123.4)
  no smt a: 45.86 (MIN: 39.29 / MAX: 91.13) | no smt b: 45.73 (MIN: 39.68 / MAX: 86.83)
  smt a: 45.71 (MIN: 39.78 / MAX: 73.33) | smt b: 45.66 (MIN: 38.82 / MAX: 75.65) | smt c: 45.31 (MIN: 39.35 / MAX: 73.19) | smt d: 45.34 (MIN: 39.4 / MAX: 73.83)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
  a: 0.357017 (MIN: 0.31) | b: 0.356875 (MIN: 0.31) | c: 0.353175 (MIN: 0.31)
  no smt a: 0.342836 (MIN: 0.28) | no smt b: 0.347231 (MIN: 0.3)
  smt a: 0.674455 (MIN: 0.49) | smt b: 0.675664 (MIN: 0.55) | smt c: 0.676124 (MIN: 0.49) | smt d: 0.672115 (MIN: 0.49)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  a: 0.534450 (MIN: 0.49) | b: 0.513068 (MIN: 0.47) | c: 0.546991 (MIN: 0.5)
  no smt a: 0.556069 (MIN: 0.46) | no smt b: 0.629026 (MIN: 0.51)
  smt a: 0.975357 (MIN: 0.92) | smt b: 0.971704 (MIN: 0.82) | smt c: 0.995710 (MIN: 0.87) | smt d: 0.975091 (MIN: 0.91)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  a: 0.304972 (MIN: 0.28) | b: 0.305731 (MIN: 0.28) | c: 0.305599 (MIN: 0.28)
  no smt a: 0.160361 (MIN: 0.15) | no smt b: 0.164845 (MIN: 0.15)
  smt a: 0.279638 (MIN: 0.23) | smt b: 0.291087 (MIN: 0.23) | smt c: 0.275917 (MIN: 0.23) | smt d: 0.247825 (MIN: 0.23)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.04 - Test: MEMFD (Bogo Ops/s, more is better)
  a: 518.46 | b: 507.74 | c: 507.97
  no smt a: 303.56 | no smt b: 308.67
  smt a: 464.70 | smt b: 564.05 | smt c: 413.34 | smt d: 394.49
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: Memory Copying (Bogo Ops/s, more is better)
  a: 20106.51 | b: 20340.40 | c: 20297.91
  no smt a: 11342.69 | no smt b: 10949.90
  smt a: 15914.10 | smt b: 15430.18 | smt c: 15077.69 | smt d: 13370.33
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: Forking (Bogo Ops/s, more is better)
  a: 58156.36 | b: 58664.97 | c: 64299.62
  no smt a: 43094.96 | no smt b: 45685.28
  smt a: 36266.02 | smt b: 36020.16 | smt c: 34917.39 | smt d: 34685.85
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Stress-NG 0.15.04 - Test: Futex (Bogo Ops/s, more is better)
  a: 2794694.37 | b: 2805836.52 | c: 2794473.75
  no smt a: 3746361.52 | no smt b: 3802292.60
  smt a: 2333781.39 | smt b: 2396325.93 | smt c: 2067499.58 | smt d: 2077119.76
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package with the water_GMX50 data. This test profile allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
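GROMACS reports throughput as Ns Per Day: how many nanoseconds of simulated time the machine would get through in 24 hours of wall clock. A rough sketch of that arithmetic (the function and example inputs are ours, not part of GROMACS):

```python
def ns_per_day(simulated_ns: float, wall_seconds: float) -> float:
    """Extrapolate simulated nanoseconds per wall-clock day."""
    seconds_per_day = 86400.0
    return simulated_ns * seconds_per_day / wall_seconds

# e.g. simulating 0.05 ns in 226 s of wall time ~ 19.1 ns/day,
# in the same range as the SMT results below
print(round(ns_per_day(0.05, 226.0), 1))  # 19.1
```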

GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better)
  a: 10.57 | b: 10.61 | c: 10.59
  no smt a: 19.18 | no smt b: 18.84
  smt a: 18.82 | smt b: 19.02 | smt c: 19.05 | smt d: 19.08
  1. (CXX) g++ options: -O3

RocksDB

RocksDB 7.9.2 - Test: Read Random Write Random (Op/s, more is better)
  a: 2926458 | b: 2891962 | c: 2910023
  no smt a: 2079804 | no smt b: 2063831
  smt a: 1787682 | smt b: 1761416 | smt c: 1613913 | smt d: 1752904
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

oneDNN

oneDNN 3.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  a: 1.170040 (MIN: 1.07) | b: 1.166960 (MIN: 1.07) | c: 1.169190 (MIN: 1.07)
  no smt a: 0.664295 (MIN: 0.63) | no smt b: 0.651301 (MIN: 0.62)
  smt a: 0.978590 (MIN: 0.93) | smt b: 0.973555 (MIN: 0.92) | smt c: 0.972863 (MIN: 0.92) | smt d: 0.973984 (MIN: 0.92)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  a: 0.693367 (MIN: 0.64) | b: 0.689064 (MIN: 0.64) | c: 0.692667 (MIN: 0.65)
  no smt a: 0.403590 (MIN: 0.38) | no smt b: 0.400325 (MIN: 0.38)
  smt a: 0.538539 (MIN: 0.48) | smt b: 0.551783 (MIN: 0.49) | smt c: 0.545118 (MIN: 0.45) | smt d: 0.630630 (MIN: 0.48)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better)
  a: 5750.74 | b: 5720.90 | c: 5724.81
  no smt a: 9784.09 | no smt b: 9758.74
  smt a: 9746.08 | smt b: 9680.44 | smt c: 9773.26 | smt d: 9767.99
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better)
  a: 8.34 (MIN: 6.61 / MAX: 55.54) | b: 8.38 (MIN: 6.4 / MAX: 32.33) | c: 8.37 (MIN: 6.83 / MAX: 50.24)
  no smt a: 4.90 (MIN: 4.46 / MAX: 56.52) | no smt b: 4.91 (MIN: 4.49 / MAX: 34.72)
  smt a: 4.92 (MIN: 4.52 / MAX: 34.8) | smt b: 4.95 (MIN: 4.54 / MAX: 24.23) | smt c: 4.90 (MIN: 4.5 / MAX: 29.95) | smt d: 4.91 (MIN: 4.51 / MAX: 27.27)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
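The "Set To Get Ratio" parameter controls the write/read mix that memtier_benchmark issues: 1:5 means one SET for every five GETs. A minimal stand-in illustrating that mix, with a plain dict in place of a real Memcached server (all names are ours):

```python
def run_mixed_workload(n_cycles: int, set_to_get_ratio: tuple[int, int]) -> dict:
    """Issue SETs and GETs against a dict in the given ratio; return op counts."""
    sets_per_cycle, gets_per_cycle = set_to_get_ratio
    cache: dict[str, int] = {}  # stand-in for the Memcached server
    counts = {"set": 0, "get": 0}
    for cycle in range(n_cycles):
        for i in range(sets_per_cycle):
            cache[f"key-{cycle}-{i}"] = cycle  # SET
            counts["set"] += 1
        for i in range(gets_per_cycle):
            cache.get(f"key-{cycle}-{i % max(sets_per_cycle, 1)}")  # GET
            counts["get"] += 1
    return counts

print(run_mixed_workload(1000, (1, 5)))  # {'set': 1000, 'get': 5000}
```

A get-heavier mix (1:10 below) generally pushes higher Ops/sec, since reads are cheaper than writes, which matches the two Memcached results in this file.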

Memcached 1.6.18 - Set To Get Ratio: 1:5 (Ops/sec, more is better)
  a: 4143044.21 | b: 4162812.47 | c: 4155273.46
  no smt a: 2444886.82 | no smt b: 2460416.33
  smt a: 2520253.00 | smt b: 2507331.29 | smt c: 2517750.47 | smt d: 2527084.87
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Stress-NG

Stress-NG 0.15.04 - Test: System V Message Passing (Bogo Ops/s, more is better)
  a: 10473084.09 | b: 10471889.97 | c: 10475486.39
  no smt a: 7402514.74 | no smt b: 7372780.72
  smt a: 10103952.13 | smt b: 8586858.51 | smt c: 8609357.68 | smt d: 12418451.71
  1. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

oneDNN

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  a: 1.83147 (MIN: 1.73) | b: 1.84172 (MIN: 1.76) | c: 1.82264 (MIN: 1.71)
  no smt a: 1.65041 (MIN: 1.41) | no smt b: 1.70057 (MIN: 1.5)
  smt a: 2.70807 (MIN: 2.09) | smt b: 2.76140 (MIN: 2.46) | smt c: 2.70235 (MIN: 2.27) | smt d: 2.67303 (MIN: 2.09)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar was the winner of the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, more is better)
  a: 40.63 | b: 40.86 | c: 40.76
  no smt a: 47.13 | no smt b: 47.43
  smt a: 64.63 | smt b: 66.22 | smt c: 65.49 | smt d: 66.08
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.18, Set To Get Ratio: 1:10 (Ops/sec; more is better)
a: 6861088.89   b: 6792746.34   c: 6839975.87
no smt a: 4244908.83   no smt b: 4220522.49
smt a: 4530948.62   smt b: 4392934.75   smt c: 4549313.42   smt d: 4569476.86
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
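The "Set To Get Ratio" in these Memcached results describes the request mix memtier_benchmark issues: at 1:10, one SET per ten GETs. A toy sketch of that mix against a plain dict standing in for a memcached server (illustration only; it does not use the real memtier_benchmark tool):

```python
import random

# Simulate a 1:10 set:get request mix: 1 of every 11 operations is a SET.
random.seed(42)
cache = {}
sets = gets = 0
for i in range(11_000):
    if i % 11 == 0:
        cache[f"key:{i % 100}"] = f"value:{i}"   # SET
        sets += 1
    else:
        cache.get(f"key:{random.randrange(100)}")  # GET (may miss early on)
        gets += 1
print(sets, gets, gets // sets)  # -> 1000 10000 10
```

Higher GET ratios (1:100) stress read throughput, which is why the three ratio graphs scale differently with SMT.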

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0, Harness: Convolution Batch Shapes Auto, Data Type: u8s8f32, Engine: CPU (ms; fewer is better)
a: 0.311405   b: 0.312721   c: 0.312289
no smt a: 0.254845   no smt b: 0.255796
smt a: 0.407430   smt b: 0.408074   smt c: 0.394377   smt d: 0.389223
Per-run MIN (same order): 0.3, 0.28, 0.28, 0.18, 0.18, 0.27, 0.27, 0.29, 0.29
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 4K, Video Preset: Medium (Frames Per Second; more is better)
a: 41.40   b: 41.39   c: 41.47
no smt a: 47.97   no smt b: 47.93
smt a: 65.56   smt b: 65.69   smt c: 65.59   smt d: 65.49
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K, Video Preset: Slow (Frames Per Second; more is better)
a: 29.37   b: 29.29   c: 29.41
no smt a: 34.59   no smt b: 34.68
smt a: 45.56   smt b: 46.25   smt c: 46.34   smt d: 46.25

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Age Gender Recognition Retail 0013 FP16, Device: CPU (FPS; more is better)
a: 112346.36   b: 111378.39   c: 112186.25
no smt a: 160545.41   no smt b: 162294.47
smt a: 173926.92   smt b: 175754.35   smt c: 173620.31   smt d: 172228.34
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0, Harness: Deconvolution Batch shapes_3d, Data Type: bf16bf16bf16, Engine: CPU (ms; fewer is better)
a: 0.711588   b: 0.712495   c: 0.708563
no smt a: 0.462386   no smt b: 0.461174
smt a: 0.672585   smt b: 0.672768   smt c: 0.676798   smt d: 0.680395
Per-run MIN (same order): 0.68, 0.68, 0.67, 0.42, 0.43, 0.52, 0.53, 0.53, 0.53
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.18, Set To Get Ratio: 1:100 (Ops/sec; more is better)
a: 4878860.00   b: 4876951.36   c: 4852421.67
no smt a: 3192061.68   no smt b: 3216133.91
smt a: 4383314.65   smt b: 4357453.76   smt c: 4394194.49   smt d: 4403267.42
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Stress-NG

Stress-NG 0.15.04, Test: Semaphores (Bogo Ops/s; more is better)
a: 18128283.29   b: 18100474.36   c: 18088584.67
no smt a: 13141129.68   no smt b: 13192391.85
smt a: 20047519.05   smt b: 19927313.52   smt c: 19866440.86   smt d: 19842969.28
(CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig for building all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1, Build: defconfig (Seconds; fewer is better)
a: 25.77   b: 25.75   c: 25.68
no smt a: 17.61   no smt b: 17.59
smt a: 17.38   smt b: 17.02   smt c: 17.05   smt d: 17.08
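What this kind of timed-build test measures is simply the wall-clock duration of a build command run to completion. A hypothetical sketch of that measurement; the kernel tree path and make invocations in the comments are illustrative assumptions, not the exact commands the test profile runs:

```python
import os
import subprocess
import sys
import time

def timed(cmd, cwd="."):
    """Run a command to completion and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, cwd=cwd, check=True)
    return time.perf_counter() - start

# Against a real kernel tree this would be roughly (assumed, not verified):
#   timed(["make", "defconfig"], cwd="linux-6.1")
#   timed(["make", f"-j{os.cpu_count()}"], cwd="linux-6.1")
elapsed = timed([sys.executable, "-c", "pass"])  # smoke-test the timer with a no-op
print(f"{elapsed:.3f} s")
```

Because the build saturates all hardware threads, this is one of the tests where enabling SMT clearly helps (25.7 s down to about 17 s).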

Stress-NG

Stress-NG 0.15.04, Test: Poll (Bogo Ops/s; more is better)
a: 12653709.64   b: 12661687.46   c: 12676101.41
no smt a: 10458275.24   no smt b: 10393402.01
smt a: 15359471.73   smt b: 15341564.16   smt c: 15228320.07   smt d: 15403597.24
(CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1, Benchmark: vklBenchmark Scalar (Items / Sec; more is better)
a: 556   b: 557   c: 549
no smt a: 647   no smt b: 652
smt a: 764   smt b: 810   smt c: 793   smt d: 781
Per-run MIN/MAX (same order): 61/5994, 61/6019, 60/5475, 101/4024, 102/3990, 139/3808, 138/3650, 139/3583, 139/3776

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, DistilBERT mnli, Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
a: 5.1812   b: 5.1633   c: 5.1696
no smt a: 7.5856   no smt b: 6.4174
smt a: 7.5225   smt b: 6.8844   smt c: 7.3550   smt d: 7.0640

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, DistilBERT mnli, Scenario: Synchronous Single-Stream (items/sec; more is better)
a: 192.88   b: 193.53   c: 193.31
no smt a: 131.76   no smt b: 155.74
smt a: 132.87   smt b: 145.17   smt c: 135.89   smt d: 141.47
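The paired DeepSparse graphs report the same single-stream measurement in two units: with one item per batch, items/sec is approximately the reciprocal of ms/batch. A quick check against run "a" above:

```python
# Single-stream DeepSparse results process one item per batch, so the two
# reported metrics are (approximately) reciprocals of each other.
ms_per_batch = 5.1812          # run "a", ms/batch, from the graph above
items_per_sec = 1000.0 / ms_per_batch
print(f"{items_per_sec:.2f} items/sec")  # the items/sec graph reports 192.88
```

The small gap between the computed 193.0 and the reported 192.88 is per-request overhead outside the batch timing.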

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K, Video Preset: Medium (Frames Per Second; more is better)
a: 33.03   b: 33.13   c: 33.10
no smt a: 38.38   no smt b: 38.65
smt a: 47.56   smt b: 46.62   smt c: 46.57   smt d: 46.21

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90, Scenario: Synchronous Single-Stream (items/sec; more is better)
a: 99.15   b: 98.62   c: 98.59
no smt a: 72.13   no smt b: 71.89
smt a: 77.88   smt b: 77.41   smt c: 70.14   smt d: 75.52

Neural Magic DeepSparse 1.3.2, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90, Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
a: 10.08   b: 10.14   c: 10.14
no smt a: 13.86   no smt b: 13.90
smt a: 12.83   smt b: 12.91   smt c: 14.25   smt d: 13.23

Neural Magic DeepSparse 1.3.2, Model: CV Classification, ResNet-50 ImageNet, Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
a: 5.3018   b: 5.4980   c: 5.5007
no smt a: 6.0118   no smt b: 6.0650
smt a: 7.1865   smt b: 7.3054   smt c: 6.1057   smt d: 7.1490

Neural Magic DeepSparse 1.3.2, Model: CV Classification, ResNet-50 ImageNet, Scenario: Synchronous Single-Stream (items/sec; more is better)
a: 188.54   b: 181.81   c: 181.72
no smt a: 166.27   no smt b: 164.81
smt a: 139.10   smt b: 136.83   smt c: 163.71   smt d: 139.82

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, BERT base uncased SST2, Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
a: 11.36   b: 11.19   c: 11.16
no smt a: 14.46   no smt b: 14.42
smt a: 13.64   smt b: 14.64   smt c: 14.20   smt d: 15.13

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, BERT base uncased SST2, Scenario: Synchronous Single-Stream (items/sec; more is better)
a: 87.96   b: 89.34   c: 89.55
no smt a: 69.11   no smt b: 69.33
smt a: 73.29   smt b: 68.29   smt c: 70.38   smt d: 66.08

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 1080p, Video Preset: Super Fast (Frames Per Second; more is better)
a: 238.68   b: 239.92   c: 237.96
no smt a: 181.67   no smt b: 216.15
smt a: 178.49   smt b: 220.79   smt c: 178.78   smt d: 218.91

uvg266 0.4.1, Video Input: Bosphorus 1080p, Video Preset: Ultra Fast (Frames Per Second; more is better)
a: 240.98   b: 240.91   c: 238.77
no smt a: 186.24   no smt b: 209.85
smt a: 216.15   smt b: 179.56   smt c: 218.28   smt d: 224.31

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 12, Compression Speed (MB/s; more is better)
a: 330.8   b: 332.0   c: 330.1
no smt a: 279.9   no smt b: 278.2
smt a: 254.1   smt b: 259.1   smt c: 249.8   smt d: 256.2
(CC) gcc options: -O3 -pthread -lz
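The compression-level graphs trace the usual codec trade-off: higher levels spend more CPU time to shave bytes. Since the Python standard library has no zstd module, this sketch illustrates the trade-off with stdlib zlib as an analogous stand-in for the benchmark's actual codec (an assumption made only so the example is self-contained):

```python
import time
import zlib

# Repetitive sample data; the benchmark itself uses silesia.tar.
data = b"the quick brown fox jumps over the lazy dog " * 20000

for level in (1, 6, 9):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(out)} bytes in {elapsed * 1000:.1f} ms")
```

Higher levels should produce output no larger than lower levels, at the cost of time — the same shape the level 3 / 8 / 12 Zstd results show.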

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2, Test: Update Random (Op/s; more is better)
a: 544384   b: 545556   c: 543572
no smt a: 462018   no smt b: 452228
smt a: 413199   smt b: 411513   smt c: 419985   smt d: 420514
(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

RocksDB 7.9.2, Test: Sequential Fill (Op/s; more is better)
a: 542256   b: 545396   c: 544565
no smt a: 465700   no smt b: 464044
smt a: 414168   smt b: 413708   smt c: 414282   smt d: 414047
(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: CV Segmentation, 90% Pruned YOLACT Pruned, Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
a: 16.58   b: 16.67   c: 16.52
no smt a: 20.74   no smt b: 20.42
smt a: 21.03   smt b: 21.74   smt c: 20.49   smt d: 21.36

Neural Magic DeepSparse 1.3.2, Model: CV Segmentation, 90% Pruned YOLACT Pruned, Scenario: Synchronous Single-Stream (items/sec; more is better)
a: 60.25   b: 59.92   c: 60.49
no smt a: 48.18   no smt b: 48.93
smt a: 47.51   smt b: 45.96   smt c: 48.77   smt d: 46.78

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 1080p, Video Preset: Very Fast (Frames Per Second; more is better)
a: 234.95   b: 234.68   c: 237.44
no smt a: 196.25   no smt b: 183.75
smt a: 183.99   smt b: 209.55   smt c: 192.25   smt d: 181.83

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million row web analytics dataset. The reported value is the geometric mean of the query processing rates across all of the separate queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.12.3.5, 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean; more is better)
a: 625.91   b: 627.20   c: 621.88
no smt a: 666.43   no smt b: 649.52
smt a: 524.04   smt b: 527.88   smt c: 524.75   smt d: 515.11
Per-run MIN/MAX (same order): 56.13/7500, 58.54/6000, 58.54/6666.67, 85.11/6666.67, 86.46/6000, 90.5/6666.67, 90.63/6666.67, 90.23/6000, 88.5/6000
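The "Geo Mean" in this result's unit is worth spelling out: a geometric mean keeps one pathologically fast or slow query from dominating the composite score the way an arithmetic mean would. A short illustration with made-up per-query rates (the individual ClickBench query results are not included in this file):

```python
from statistics import geometric_mean

# Hypothetical per-query rates in queries per minute; note the 90 QPM outlier.
per_query_qpm = [6000.0, 1200.0, 90.0, 450.0]

print(round(geometric_mean(per_query_qpm), 2))  # ~734.85
```

For comparison, the arithmetic mean of the same values is 1935 — pulled far upward by the single 6000 QPM query.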

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2, Test: Random Fill (Op/s; more is better)
a: 533927   b: 534681   c: 536551
no smt a: 478629   no smt b: 468210
smt a: 423027   smt b: 417882   smt c: 416006   smt d: 415428
(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million row web analytics dataset. The reported value is the geometric mean of the query processing rates across all of the separate queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.12.3.5, 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean; more is better)
a: 600.70   b: 610.79   c: 614.24
no smt a: 635.48   no smt b: 622.73
smt a: 500.13   smt b: 516.09   smt c: 495.85   smt d: 500.12
Per-run MIN/MAX (same order): 57.75/6000, 58.14/6666.67, 57.14/6666.67, 85.35/6000, 83.22/5454.55, 65.15/6000, 89.55/6000, 51.06/6000, 69.61/6000

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: CV Detection, YOLOv5s COCO, Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
a: 9.5962   b: 9.5793   c: 8.8004
no smt a: 9.8899   no smt b: 9.9395
smt a: 8.1914   smt b: 8.3506   smt c: 10.3852   smt d: 9.3656

Neural Magic DeepSparse 1.3.2, Model: CV Detection, YOLOv5s COCO, Scenario: Synchronous Single-Stream (items/sec; more is better)
a: 104.12   b: 104.29   c: 113.51
no smt a: 101.01   no smt b: 100.51
smt a: 121.92   smt b: 119.60   smt c: 96.17   smt d: 106.65

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million row web analytics dataset. The reported value is the geometric mean of the query processing rates across all of the separate queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.12.3.5, 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean; more is better)
a: 625.16   b: 628.37   c: 636.18
no smt a: 665.00   no smt b: 662.01
smt a: 534.82   smt b: 538.63   smt c: 530.47   smt d: 527.80
Per-run MIN/MAX (same order): 57.69/6000, 58.37/6666.67, 58.03/6666.67, 85.84/6000, 86.33/6000, 91.32/6000, 89.82/6000, 92.02/5454.55, 81.52/6000

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13, Speed: Speed 5, Input: Bosphorus 4K (Frames Per Second; more is better)
a: 17.36   b: 17.37   c: 17.46
no smt a: 14.48   no smt b: 14.50
smt a: 13.90   smt b: 14.27   smt c: 15.38   smt d: 15.11
(CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Age Gender Recognition Retail 0013 FP16, Device: CPU (ms; fewer is better)
a: 0.65   b: 0.66   c: 0.66
no smt a: 0.57   no smt b: 0.58
smt a: 0.53   smt b: 0.53   smt c: 0.53   smt d: 0.53
Per-run MIN/MAX (same order): 0.32/20.68, 0.32/19.99, 0.31/22.67, 0.5/13.02, 0.5/12.8, 0.5/24.2, 0.5/9.61, 0.5/9.15, 0.5/8.86
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K, Video Preset: Ultra Fast (Frames Per Second; more is better)
a: 70.56   b: 70.68   c: 71.13
no smt a: 57.78   no smt b: 57.23
smt a: 57.96   smt b: 57.51   smt c: 57.33   smt d: 58.76

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1, Benchmark: vklBenchmark ISPC (Items / Sec; more is better)
a: 1089   b: 1075   c: 1066
no smt a: 1098   no smt b: 1108
smt a: 1318   smt b: 1243   smt c: 1235   smt d: 1238
Per-run MIN/MAX (same order): 179/7194, 179/6442, 180/6427, 297/4284, 297/4208, 391/3855, 390/3501, 392/4332, 392/3738

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 8, Compression Speed (MB/s; more is better)
a: 1233.8   b: 1239.3   c: 1241.1
no smt a: 1227.5   no smt b: 1234.2
smt a: 1023.9   smt b: 1122.4   smt c: 1005.8   smt d: 1024.8
(CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4, Compression Level: 3, Compression Speed (MB/s; more is better)
a: 3033.9   b: 3095.0   c: 3049.4
no smt a: 2804.2   no smt b: 2865.9
smt a: 2795.1   smt b: 2513.4   smt c: 2604.6   smt d: 2528.7
(CC) gcc options: -O3 -pthread -lz

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 6.0, Time To Compile (Seconds; fewer is better)
a: 12.57   b: 12.43   c: 12.47
no smt a: 10.60   no smt b: 10.66
smt a: 10.38   smt b: 10.26   smt c: 10.41   smt d: 10.37

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 1080p, Video Preset: Very Fast (Frames Per Second; more is better)
a: 296.93   b: 290.59   c: 291.03
no smt a: 269.62   no smt b: 268.77
smt a: 250.71   smt b: 274.63   smt c: 243.36   smt d: 270.76
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K, Video Preset: Very Fast (Frames Per Second; more is better)
a: 68.82   b: 68.79   c: 69.04
no smt a: 57.13   no smt b: 58.09
smt a: 59.30   smt b: 57.86   smt c: 57.44   smt d: 57.52

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 1080p, Video Preset: Slow (Frames Per Second; more is better)
a: 139.92   b: 139.07   c: 140.10
no smt a: 155.56   no smt b: 159.79
smt a: 132.67   smt b: 136.33   smt c: 135.80   smt d: 135.56
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2, Video Input: Bosphorus 1080p, Video Preset: Super Fast (Frames Per Second; more is better)
a: 307.49   b: 303.99   c: 301.40
no smt a: 288.41   no smt b: 280.18
smt a: 296.72   smt b: 267.04   smt c: 288.09   smt d: 256.45
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 4K, Video Preset: Super Fast (Frames Per Second; more is better)
a: 69.33   b: 69.00   c: 69.92
no smt a: 59.83   no smt b: 58.33
smt a: 59.06   smt b: 59.20   smt c: 58.34   smt d: 58.66

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: NLP Token Classification, BERT base uncased conll2003, Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
a: 28.62   b: 28.71   c: 28.57
no smt a: 31.92   no smt b: 32.18
smt a: 32.30   smt b: 32.24   smt c: 33.99   smt d: 32.18

Neural Magic DeepSparse 1.3.2, Model: NLP Token Classification, BERT base uncased conll2003, Scenario: Synchronous Single-Stream (items/sec; more is better)
a: 34.93   b: 34.82   c: 34.99
no smt a: 31.32   no smt b: 31.07
smt a: 30.96   smt b: 31.01   smt c: 29.42   smt d: 31.07

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 3, Long Mode, Compression Speed (MB/s; more is better)
a: 892.7   b: 909.4   c: 916.9
no smt a: 955.6   no smt b: 1032.3
smt a: 1051.4   smt b: 1038.9   smt c: 1046.6   smt d: 1059.0
(CC) gcc options: -O3 -pthread -lz

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 1080p, Video Preset: Medium (Frames Per Second; more is better)
a: 143.31   b: 144.21   c: 143.81
no smt a: 159.63   no smt b: 161.40
smt a: 140.78   smt b: 138.33   smt c: 140.89   smt d: 136.42
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: NLP Document Classification, oBERT base uncased on IMDB, Scenario: Synchronous Single-Stream (items/sec; more is better)
a: 34.83   b: 35.10   c: 35.12
no smt a: 31.91   no smt b: 29.75
smt a: 31.76   smt b: 31.78   smt c: 31.93   smt d: 30.25

Neural Magic DeepSparse 1.3.2, Model: NLP Document Classification, oBERT base uncased on IMDB, Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
a: 28.71   b: 28.49   c: 28.47
no smt a: 31.33   no smt b: 33.60
smt a: 31.48   smt b: 31.46   smt c: 31.31   smt d: 33.05

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2, Test: Random Fill Sync (Op/s; more is better)
a: 376601   b: 373002   c: 356922
no smt a: 350673   no smt b: 344040
smt a: 388052   smt b: 401295   smt c: 394852   smt d: 404463
(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: NLP Document Classification, oBERT base uncased on IMDB, Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better)
a: 1115.76   b: 1118.44   c: 1122.06
no smt a: 955.16   no smt b: 955.24
smt a: 972.52   smt b: 975.50   smt c: 971.90   smt d: 974.99

Neural Magic DeepSparse 1.3.2, Model: NLP Token Classification, BERT base uncased conll2003, Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better)
a: 1116.43   b: 1116.29   c: 1117.73
no smt a: 956.01   no smt b: 955.34
smt a: 970.22   smt b: 969.21   smt c: 971.67   smt d: 966.76

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 4K, Video Preset: Very Fast (Frames Per Second; more is better)
a: 80.90   b: 81.61   c: 80.31
no smt a: 70.02   no smt b: 69.94
smt a: 75.91   smt b: 74.11   smt c: 73.57   smt d: 73.24
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2, Video Input: Bosphorus 1080p, Video Preset: Ultra Fast (Frames Per Second; more is better)
a: 310.73   b: 305.52   c: 309.64
no smt a: 278.62   no smt b: 271.45
smt a: 302.55   smt b: 295.18   smt c: 303.43   smt d: 303.63
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2, Video Input: Bosphorus 4K, Video Preset: Super Fast (Frames Per Second; more is better)
a: 81.90   b: 80.68   c: 84.55
no smt a: 75.50   no smt b: 74.03
smt a: 76.64   smt b: 76.70   smt c: 78.72   smt d: 75.50
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2, Video Input: Bosphorus 4K, Video Preset: Ultra Fast (Frames Per Second; more is better)
a: 83.45   b: 84.77   c: 82.70
no smt a: 76.05   no smt b: 74.37
smt a: 77.96   smt b: 77.00   smt c: 76.14   smt d: 78.85
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Age Gender Recognition Retail 0013 FP16-INT8, Device: CPU (ms; fewer is better)
a: 1.09   b: 1.09   c: 1.09
no smt a: 1.01   no smt b: 1.01
smt a: 0.97   smt b: 0.97   smt c: 0.96   smt d: 0.97
Per-run MIN/MAX (same order): 0.49/19.01, 0.48/25.82, 0.49/22.06, 0.86/8.97, 0.86/8.72, 0.87/9.78, 0.87/10.96, 0.87/9.72, 0.86/13.31
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased, Scenario: Synchronous Single-Stream (items/sec; more is better)
a: 197.10   b: 196.54   c: 197.66
no smt a: 191.26   no smt b: 175.32
smt a: 188.58   smt b: 188.41   smt c: 189.41   smt d: 183.95

Neural Magic DeepSparse 1.3.2, Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased, Scenario: Synchronous Single-Stream (ms/batch; fewer is better)
a: 5.0715   b: 5.0859   c: 5.0573
no smt a: 5.2262   no smt b: 5.7015
smt a: 5.3004   smt b: 5.3053   smt c: 5.2773   smt d: 5.4341

Neural Magic DeepSparse 1.3.2, Model: CV Classification, ResNet-50 ImageNet, Scenario: Asynchronous Multi-Stream (ms/batch; fewer is better)
a: 47.46   b: 47.35   c: 47.29
no smt a: 42.81   no smt b: 42.84
smt a: 42.95   smt b: 42.65   smt c: 42.55   smt d: 42.48

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13, Speed: Speed 0, Input: Bosphorus 4K (Frames Per Second; more is better)
a: 7.68   b: 7.71   c: 7.70
no smt a: 7.04   no smt b: 7.16
smt a: 7.24   smt b: 6.97   smt c: 6.92   smt d: 7.15
(CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1, Video Input: Bosphorus 1080p, Video Preset: Slow (Frames Per Second; more is better)
a: 81.16   b: 81.41   c: 81.10
no smt a: 87.15   no smt b: 88.13
smt a: 80.17   smt b: 79.95   smt c: 79.63   smt d: 80.00

uvg266 0.4.1, Video Input: Bosphorus 1080p, Video Preset: Medium (Frames Per Second; more is better)
a: 91.37   b: 91.47   c: 91.29
no smt a: 98.21   no smt b: 96.67
smt a: 89.40   smt b: 89.70   smt c: 89.16   smt d: 88.91

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Weld Porosity Detection FP16-INT8, Device: CPU (ms; fewer is better)
a: 10.10   b: 10.10   c: 10.11
no smt a: 9.18   no smt b: 9.18
smt a: 9.28   smt b: 9.28   smt c: 9.25   smt d: 9.26
Per-run MIN/MAX (same order): 5.46/35.41, 5.47/28.16, 5.43/36.27, 8.17/31.1, 8.21/24.31, 8.1/22.87, 8.13/24.28, 8.12/19.04, 8.12/30.29
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 8, Long Mode, Compression Speed (MB/s; more is better)
a: 938.5   b: 910.8   c: 926.9
no smt a: 859.9   no smt b: 892.0
smt a: 853.8   smt b: 852.7   smt c: 860.8   smt d: 879.6
(CC) gcc options: -O3 -pthread -lz

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.3.2, Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 108.68 | b: 109.11 | c: 109.25 | no smt a: 100.24 | no smt b: 100.42 | smt a: 103.09 | smt b: 103.19 | smt c: 102.10 | smt d: 102.76
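DeepSparse's asynchronous multi-stream scenario reports latency as ms/batch, while its items/sec throughput follows from the same timings, since each concurrent stream completes one batch per mean batch time. A minimal sketch of that arithmetic, with hypothetical timings and stream count (not taken from this result file):

```python
# Relation between ms/batch (latency) and items/sec (throughput) in an
# asynchronous multi-stream run: every stream finishes one batch per mean
# batch time, so throughput scales with the stream count. All numbers
# below are illustrative.
def summarize(batch_times_s, batch_size, streams):
    mean_s = sum(batch_times_s) / len(batch_times_s)
    ms_per_batch = mean_s * 1000.0
    items_per_sec = streams * batch_size / mean_s
    return ms_per_batch, items_per_sec

times = [0.108, 0.109, 0.110, 0.107]   # seconds per batch, hypothetical
ms_batch, items = summarize(times, batch_size=1, streams=12)
print(f"{ms_batch:.1f} ms/batch, {items:.1f} items/sec")
```

This is why a lower ms/batch at a fixed stream count implies a proportionally higher items/sec in these tables.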

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13, Speed: Speed 0 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  a: 14.69 | b: 14.67 | c: 14.76 | no smt a: 14.28 | no smt b: 13.72 | smt a: 14.08 | smt b: 13.57 | smt c: 13.97 | smt d: 13.68
  Compiler flags: (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Zstd Compression

Zstd Compression 1.5.4, Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better)
  a: 9.22 | b: 9.29 | c: 9.30 | no smt a: 9.39 | no smt b: 9.76 | smt a: 9.78 | smt b: 9.87 | smt c: 9.82 | smt d: 9.81
  Compiler flags: (CC) gcc options: -O3 -pthread -lz

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 319.98 | b: 319.96 | c: 320.05 | no smt a: 304.21 | no smt b: 303.74 | smt a: 303.33 | smt b: 304.08 | smt c: 303.25 | smt d: 301.92

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 77.34 | b: 77.31 | c: 76.62 | no smt a: 73.20 | no smt b: 73.21 | smt a: 73.31 | smt b: 73.35 | smt c: 73.34 | smt d: 73.62

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. It makes use of SIMD Everywhere (SIMDe) and is published under The Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.7, Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, More Is Better)
  a: 30.04 | b: 29.85 | c: 30.08 | no smt a: 28.69 | no smt b: 28.52
  Compiler flags: (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 1080p - Video Preset: Faster

smt a / smt b / smt c / smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression

Zstd Compression 1.5.4, Compression Level: 19 - Compression Speed (MB/s, More Is Better)
  a: 19.1 | b: 19.1 | c: 19.1 | no smt a: 19.8 | no smt b: 19.8 | smt a: 18.8 | smt b: 19.5 | smt c: 19.2 | smt d: 19.6
  Compiler flags: (CC) gcc options: -O3 -pthread -lz

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 152.14 | b: 151.60 | c: 151.39 | no smt a: 145.35 | no smt b: 145.36 | smt a: 145.40 | smt b: 145.27 | smt c: 144.47 | smt d: 144.91

Neural Magic DeepSparse 1.3.2, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 119.21 | b: 119.38 | c: 119.70 | no smt a: 116.15 | no smt b: 115.64 | smt a: 117.12 | smt b: 116.48 | smt c: 116.97 | smt d: 117.22

Stress-NG

Stress-NG 0.15.04, Test: Socket Activity (Bogo Ops/s, More Is Better)
  a: 8873.58 | b: 8851.65 | c: 8876.10 | no smt a: 8968.28 | no smt b: 8924.10 | smt a: 8748.99 | smt b: 8750.61 | smt c: 8747.83 | smt d: 8864.78
  Compiler flags: (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lrt -pthread

Zstd Compression

Zstd Compression 1.5.4, Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
  a: 1472.6 | b: 1467.8 | c: 1470.5 | no smt a: 1483.1 | no smt b: 1483.8 | smt a: 1495.2 | smt b: 1483.5 | smt c: 1475.2 | smt d: 1479.1
  Compiler flags: (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4, Compression Level: 12 - Decompression Speed (MB/s, More Is Better)
  a: 1704.8 | b: 1716.7 | c: 1715.6 | no smt a: 1728.0 | no smt b: 1727.8 | smt a: 1723.7 | smt b: 1726.3 | smt c: 1732.3 | smt d: 1731.2
  Compiler flags: (CC) gcc options: -O3 -pthread -lz

VVenC

VVenC 1.7, Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better)
  a: 5.820 | b: 5.809 | c: 5.808 | no smt a: 5.890 | no smt b: 5.895
  Compiler flags: (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 4K - Video Preset: Fast

smt a / smt b / smt c / smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression

Zstd Compression 1.5.4, Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better)
  a: 1383.7 | b: 1378.2 | c: 1384.0 | no smt a: 1395.8 | no smt b: 1397.6 | smt a: 1393.5 | smt b: 1392.4 | smt c: 1389.0 | smt d: 1391.7
  Compiler flags: (CC) gcc options: -O3 -pthread -lz

VVenC

VVenC 1.7, Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better)
  a: 12.34 | b: 12.39 | c: 12.40 | no smt a: 12.48 | no smt b: 12.31
  Compiler flags: (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 4K - Video Preset: Faster

smt a / smt b / smt c / smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression

Zstd Compression 1.5.4, Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
  a: 1514.8 | b: 1516.6 | c: 1517.4 | no smt a: 1500.4 | no smt b: 1511.5 | smt a: 1515.6 | smt b: 1519.1 | smt c: 1516.2 | smt d: 1513.5
  Compiler flags: (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4, Compression Level: 8 - Decompression Speed (MB/s, More Is Better)
  a: 1669.3 | b: 1651.0 | c: 1661.9 | no smt a: 1669.7 | no smt b: 1667.6 | smt a: 1664.3 | smt b: 1671.0 | smt c: 1664.0 | smt d: 1664.2
  Compiler flags: (CC) gcc options: -O3 -pthread -lz

VP9 libvpx Encoding

VP9 libvpx Encoding 1.13, Speed: Speed 5 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  a: 29.37 | b: 29.50 | c: 29.46 | no smt a: 29.62 | no smt b: 29.71 | smt a: 29.54 | smt b: 29.60 | smt c: 29.54 | smt d: 29.66
  Compiler flags: (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Neural Magic DeepSparse

Neural Magic DeepSparse 1.3.2, Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 30.59 | b: 30.46 | c: 30.49 | no smt a: 30.42 | no smt b: 30.64 | smt a: 30.75 | smt b: 30.75 | smt c: 30.58 | smt d: 30.72

VVenC

VVenC 1.7, Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, More Is Better)
  a: 12.44 | b: 12.40 | c: 12.44 | no smt a: 12.36 | no smt b: 12.37
  Compiler flags: (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Video Input: Bosphorus 1080p - Video Preset: Fast

smt a / smt b / smt c / smt d: The test quit with a non-zero exit status. E: Parameter Check Error: Number of threads out of range (-1

Zstd Compression

Zstd Compression 1.5.4, Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better)
  a: 1677.9 | b: 1673.7 | c: 1675.4 | no smt a: 1684.8 | no smt b: 1682.8 | smt a: 1682.5 | smt b: 1676.5 | smt c: 1683.7 | smt d: 1677.6
  Compiler flags: (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.4, Compression Level: 3, Long Mode - Decompression Speed (MB/s, More Is Better)
  a: 1536.6 | b: 1537.5 | c: 1540.9 | no smt a: 1542.5 | no smt b: 1538.8 | smt a: 1539.5 | smt b: 1540.4 | smt c: 1540.9 | smt d: 1543.6
  Compiler flags: (CC) gcc options: -O3 -pthread -lz
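When comparing the a/b/c, no-smt, and smt configurations across the many tests in this file, the Phoronix Test Suite's "Show Overall Geometric Mean" option condenses the results into a single figure of merit; the geometric mean is used because it is insensitive to each test's absolute scale. A minimal sketch of that summary step, with hypothetical normalized scores (baseline configuration = 1.0):

```python
import math

# Geometric mean of normalized scores: a standard way to condense many
# benchmark results of different scales into one figure of merit.
def geometric_mean(values):
    assert values and all(v > 0 for v in values), "scores must be positive"
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-test scores normalized against a baseline config
scores = [1.08, 0.97, 1.02, 1.10]
print(f"overall: {geometric_mean(scores):.3f}x baseline")
```

Unlike an arithmetic mean, a single test that doubles its score raises this summary by the same factor regardless of the test's units.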

Stress-NG

Test: x86_64 RdRand

All configurations (a, b, c, no smt a, no smt b, smt a, smt b, smt c, smt d): The test run did not produce a result. E: stress-ng: error: No stress workers invoked (one or more were unsupported)

Test: IO_uring

All configurations (a, b, c, no smt a, no smt b, smt a, smt b, smt c, smt d): The test run did not produce a result.

Test: Zlib

All configurations (a, b, c, no smt a, no smt b, smt a, smt b, smt c, smt d): The test run did not produce a result.

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig configuration that builds all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Build: allmodconfig

All configurations (a, b, c, no smt a, no smt b, smt a, smt b, smt c, smt d): The test quit with a non-zero exit status.

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

All configurations: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer ISPC - Model: Asian Dragon

All configurations: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer - Model: Asian Dragon Obj

All configurations: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer - Model: Asian Dragon

All configurations: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer ISPC - Model: Crown

All configurations: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer_ispc: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

Binary: Pathtracer - Model: Crown

All configurations: The test quit with a non-zero exit status. E: ./embree-4.0.0.x86_64.linux/bin/embree_pathtracer: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

169 Results Shown

Stress-NG
oneDNN:
  IP Shapes 1D - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  IP Shapes 1D - f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  IP Shapes 1D - u8s8f32 - CPU
Stress-NG:
  Context Switching
  NUMA
oneDNN
OpenVINO:
  Vehicle Detection FP16 - CPU:
    ms
    FPS
Stress-NG
RocksDB
Stress-NG:
  Matrix Math
  CPU Cache
  CPU Stress
  Vector Math
  Mutex
  Crypto
oneDNN
Stress-NG:
  SENDFILE
  Function Call
oneDNN
Stress-NG
Neural Magic DeepSparse
Stress-NG
Neural Magic DeepSparse
oneDNN
Stress-NG
Neural Magic DeepSparse
oneDNN
Stress-NG
OpenVINO:
  Face Detection FP16-INT8 - CPU
  Face Detection FP16 - CPU
  Face Detection FP16 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
  Face Detection FP16-INT8 - CPU
oneDNN:
  IP Shapes 3D - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - f32 - CPU
OpenVINO:
  Weld Porosity Detection FP16 - CPU:
    FPS
    ms
Neural Magic DeepSparse
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
OpenVINO:
  Person Detection FP32 - CPU
  Vehicle Detection FP16-INT8 - CPU
  Vehicle Detection FP16-INT8 - CPU
  Person Detection FP32 - CPU
Neural Magic DeepSparse
Stress-NG
OpenVINO:
  Person Detection FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    FPS
RocksDB
Neural Magic DeepSparse
oneDNN
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
oneDNN:
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
Stress-NG:
  MEMFD
  Memory Copying
  Forking
  Futex
GROMACS
RocksDB
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
Memcached
Stress-NG
oneDNN
Kvazaar
Memcached
oneDNN
Kvazaar
uvg266
OpenVINO
oneDNN
Memcached
Stress-NG
Timed Linux Kernel Compilation
Stress-NG
OpenVKL
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
uvg266
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
uvg266:
  Bosphorus 1080p - Super Fast
  Bosphorus 1080p - Ultra Fast
Zstd Compression
RocksDB:
  Update Rand
  Seq Fill
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    ms/batch
    items/sec
uvg266
ClickHouse
RocksDB
ClickHouse
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
ClickHouse
VP9 libvpx Encoding
OpenVINO
uvg266
OpenVKL
Zstd Compression:
  8 - Compression Speed
  3 - Compression Speed
Timed FFmpeg Compilation
Kvazaar
uvg266
Kvazaar:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Super Fast
uvg266
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
Zstd Compression
Kvazaar
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
RocksDB
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
Kvazaar:
  Bosphorus 4K - Very Fast
  Bosphorus 1080p - Ultra Fast
  Bosphorus 4K - Super Fast
  Bosphorus 4K - Ultra Fast
OpenVINO
Neural Magic DeepSparse:
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
VP9 libvpx Encoding
uvg266:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Medium
OpenVINO
Zstd Compression
Neural Magic DeepSparse
VP9 libvpx Encoding
Zstd Compression
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
VVenC
Zstd Compression
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
Stress-NG
Zstd Compression:
  19 - Decompression Speed
  12 - Decompression Speed
VVenC
Zstd Compression
VVenC
Zstd Compression:
  3 - Decompression Speed
  8 - Decompression Speed
VP9 libvpx Encoding
Neural Magic DeepSparse
VVenC
Zstd Compression:
  8, Long Mode - Decompression Speed
  3, Long Mode - Decompression Speed