xeon eo march

2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on Ubuntu 22.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2304015-NE-XEONEOMAR92
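
If you prefer to drive the comparison from a script rather than typing the command by hand, a minimal Python sketch is shown below. It simply wraps the exact command quoted above and assumes phoronix-test-suite is installed and on PATH.

```python
# Minimal sketch: invoke the Phoronix Test Suite comparison run from Python.
# The result ID comes from the text above; phoronix-test-suite must be on PATH.
import subprocess

result_id = "2304015-NE-XEONEOMAR92"
subprocess.run(["phoronix-test-suite", "benchmark", result_id], check=True)
```
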
This result file spans the following test categories: AV1 (3 tests), Timed Code Compilation (5 tests), C/C++ Compiler Tests (12 tests), CPU Massive (14 tests), Creator Workloads (11 tests), Cryptography (2 tests), Database Test Suite (3 tests), Encoding (6 tests), Game Development (3 tests), HPC - High Performance Computing (6 tests), Common Kernel Benchmarks (4 tests), Machine Learning (4 tests), Multi-Core (18 tests), NVIDIA GPU Compute (2 tests), Intel oneAPI (2 tests), OpenMPI Tests (2 tests), Programmer / Developer System Benchmarks (6 tests), Python Tests (6 tests), Server (6 tests), Server CPU Tests (10 tests), Video Encoding (6 tests).


Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
a
March 31 2023
  8 Hours, 52 Minutes
b
March 31 2023
  7 Hours, 24 Minutes
c
April 01 2023
  6 Hours, 16 Minutes
Invert Hiding All Results Option
  7 Hours, 31 Minutes



Xeon Eo March Benchmarks (OpenBenchmarking.org / Phoronix Test Suite)

Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset: Intel Ice Lake IEH
Memory: 512GB
Disk: 7682GB INTEL SSDPF2KX076TZ
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 22.10
Kernel: 6.2.0-rc5-phx-dodt (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server 1.21.1.3
Vulkan: 1.3.224
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xd000375 - Python 3.10.7
- Security: dodt: Mitigation of DOITM + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Result Overview: relative performance of runs a, b and c (100% to 246% scale) across John The Ripper, Stress-NG, Memcached, Google Draco, Zstd Compression, Darmstadt Automotive Parallel Heterogeneous Suite, Build2, Timed Node.js Compilation, SPECFEM3D, Timed Godot Game Engine Compilation, RocksDB, VP9 libvpx Encoding, dav1d, VVenC, nginx, ONNX Runtime, GROMACS, Embree, OpenSSL, AOM AV1, SVT-AV1, Timed LLVM Compilation, TensorFlow, Timed FFmpeg Compilation, FFmpeg, MariaDB, Blender.

[Condensed summary table of all test results for runs a, b and c; the individual per-test results are presented in the graphs below.]

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.
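
The "Real C/S" figures below are candidate hashes computed per second. As a rough, hedged illustration of that metric (not John The Ripper's own optimized SIMD code), the Python sketch below times plain HMAC-SHA512 hashing with the standard library; absolute numbers will be far lower than JtR's.

```python
# Rough single-threaded illustration of "candidates per second" for HMAC-SHA512
# using only the standard library (hmac/hashlib), not John The Ripper itself.
import hashlib
import hmac
import time

key = b"benchmark-key"
candidates = [("password%08d" % i).encode() for i in range(200_000)]

start = time.perf_counter()
for cand in candidates:
    hmac.new(key, cand, hashlib.sha512).digest()
elapsed = time.perf_counter() - start
print(f"{len(candidates) / elapsed:,.0f} HMAC-SHA512 candidates/second (one thread)")
```
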

John The Ripper 2023.03.14 - Test: HMAC-SHA512 (Real C/S, more is better): c: 19292000 / a: 151993000 / b: 152981000. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt -lbz2

Stress-NG

Stress-NG 0.15.06 - Test: System V Message Passing (Bogo Ops/s, more is better): b: 10283641.13 / a: 10295375.81 / c: 73426960.15. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 2023.03.14 - Test: MD5 (Real C/S, more is better): c: 2766000 / b: 10295000 / a: 10384000. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt -lbz2

Stress-NG

Stress-NG 0.15.06 - Test: Atomic (Bogo Ops/s, more is better): a: 139.80 / b: 163.26 / c: 349.89. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 2023.03.14 - Test: Blowfish (Real C/S, more is better): c: 51517 / b: 107388 / a: 114499. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt -lbz2

John The Ripper 2023.03.14 - Test: WPA PSK (Real C/S, more is better): c: 255829 / b: 482323 / a: 483328. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt -lbz2

Stress-NG

Stress-NG 0.15.06 - Test: Socket Activity (Bogo Ops/s, more is better): c: 50201.57 / b: 79477.34 / a: 94302.89. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 2023.03.14 - Test: bcrypt (Real C/S, more is better): c: 64015 / a: 114777 / b: 114825. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt -lbz2

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
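
For a feel of what the MB/s figures below measure, the sketch that follows times compression and decompression of a local copy of silesia.tar at a few levels. It uses the third-party Python `zstandard` bindings rather than the zstd command-line tool the test profile actually builds, so treat it as an illustration only.

```python
# Sketch of the Zstd compression/decompression speed measurement, assuming the
# third-party `zstandard` package and a local silesia.tar in the working dir.
import time
import zstandard as zstd

data = open("silesia.tar", "rb").read()

for level in (3, 8, 19):
    cctx = zstd.ZstdCompressor(level=level)
    start = time.perf_counter()
    frame = cctx.compress(data)
    comp_mb_s = len(data) / (time.perf_counter() - start) / 1e6

    dctx = zstd.ZstdDecompressor()
    start = time.perf_counter()
    dctx.decompress(frame)
    decomp_mb_s = len(data) / (time.perf_counter() - start) / 1e6
    print(f"level {level}: compress {comp_mb_s:.1f} MB/s, decompress {decomp_mb_s:.1f} MB/s")
```
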

Zstd Compression 1.5.4 - Compression Level: 8 - Decompression Speed (MB/s, more is better): c: 831.7 / a: 1128.8 / b: 1140.4. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
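
The real numbers come from OpenCV's own C++ performance test binaries; the hedged Python sketch below only shows the general idea behind a test group such as "Features 2D" by timing ORB feature extraction with the opencv-python bindings on a local image (input.png is an assumed placeholder).

```python
# Hedged sketch: time a representative OpenCV operation (ORB detection, loosely
# matching the "Features 2D" group). Assumes opencv-python and a local input.png.
import time
import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=2000)

runs = 50
start = time.perf_counter()
for _ in range(runs):
    orb.detectAndCompute(img, None)
elapsed_ms = (time.perf_counter() - start) * 1000 / runs
print(f"ORB detectAndCompute: {elapsed_ms:.2f} ms per frame (average of {runs} runs)")
```
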

OpenCV 4.7 - Test: Features 2D (ms, fewer is better): b: 356993 / a: 270872. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

Stress-NG

Stress-NG 0.15.06 - Test: MEMFD (Bogo Ops/s, more is better): c: 583.20 / b: 583.83 / a: 766.90. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
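
The load generator here is memtier_benchmark; the Python sketch below is not a substitute for it, just a minimal illustration of the set-to-get ratio idea against a local memcached on the default port, using the third-party pymemcache client.

```python
# Minimal sketch of a ~1:10 set-to-get workload against a local memcached,
# using the third-party pymemcache client (not memtier_benchmark).
import time
from pymemcache.client.base import Client

client = Client(("127.0.0.1", 11211))
ops = 100_000

start = time.perf_counter()
for i in range(ops):
    key = f"key-{i % 1000}"
    if i % 11 == 0:          # roughly one set for every ten gets
        client.set(key, b"x" * 64)
    else:
        client.get(key)
print(f"{ops / (time.perf_counter() - start):,.0f} ops/sec (single connection)")
```
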

Memcached 1.6.19 - Set To Get Ratio: 1:100 (Ops/sec, more is better): c: 1527158.77 / b: 1946110.38 / a: 1970590.49. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached 1.6.19 - Set To Get Ratio: 1:10 (Ops/sec, more is better): b: 1736416.84 / a: 2137824.96 / c: 2222180.60. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.7 - Test: Image Processing (ms, fewer is better): a: 356989 / b: 293481. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 3, Long Mode - Compression Speed (MB/s, more is better): a: 258.0 / c: 267.4 / b: 313.7. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
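
The sketch below shows, in a simplified and hedged form, what the profile measures: inferences per second for a Model Zoo network executed by ONNX Runtime on the CPU. It assumes the onnxruntime package is installed and that "model.onnx" is a locally downloaded model taking a 1x3x224x224 float32 input (for example a ResNet-style classifier); the real profile sweeps several models and both the Standard and Parallel executors.

```python
# Minimal sketch: CPU inference throughput with ONNX Runtime for a downloaded
# Model Zoo network ("model.onnx" is a placeholder path/model assumption).
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

runs = 200
start = time.perf_counter()
for _ in range(runs):
    sess.run(None, {input_name: x})
print(f"{runs / (time.perf_counter() - start):.2f} inferences per second")
```
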

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): a: 26.54 / b: 32.05. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.19 - Set To Get Ratio: 1:5 (Ops/sec, more is better): a: 1774418.86 / c: 1820534.68 / b: 2133650.03. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Stress-NG

Stress-NG 0.15.06 - Test: CPU Cache (Bogo Ops/s, more is better): a: 1441584.05 / c: 1657780.82 / b: 1691720.80. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.15.06 - Test: Zlib (Bogo Ops/s, more is better): b: 5549.21 / a: 6374.22 / c: 6511.98. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, with OpenCL / CUDA / OpenMP test cases covering automotive workloads for evaluating programming models in the context of vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02 - Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, more is better): b: 771.26 / a: 817.52 / c: 892.13. (CXX) g++ options: -O3 -std=c++11 -fopenmp

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.7 - Test: Object Detection (ms, fewer is better): a: 119494 / b: 103986. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): b: 4.16467 (MIN: 3.58) / a: 3.67056 (MIN: 3.52); SE +/- 0.04476, N = 14. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better): c: 6.95 / b: 7.25 / a: 7.88. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, with OpenCL / CUDA / OpenMP test cases covering automotive workloads for evaluating programming models in the context of vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02 - Backend: OpenMP - Kernel: Points2Image (Test Cases Per Minute, more is better): c: 8791.23 / a: 9474.39 / b: 9912.90. (CXX) g++ options: -O3 -std=c++11 -fopenmp

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.7 - Test: Graph API (ms, fewer is better): a: 671086 / b: 597049. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 12 - Compression Speed (MB/s, more is better): a: 161.4 / c: 169.9 / b: 180.6. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Stress-NG

Stress-NG 0.15.06 - Test: Futex (Bogo Ops/s, more is better): a: 957096.20 / b: 1018129.24 / c: 1056011.64. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): b: 33.68 / a: 36.20 / c: 37.05. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.7 - Test: Video (ms, fewer is better): b: 133100 / a: 121173. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
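
wrk is a C-based load generator; the Python sketch below only mirrors the idea in a hedged way: keep a fixed number of concurrent clients issuing requests for a fixed duration and report requests per second. The server address and client count are assumptions for illustration.

```python
# Rough sketch of a fixed-duration, fixed-concurrency HTTP load test
# (a simplified stand-in for wrk). Assumes an HTTP server on localhost:8080.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://127.0.0.1:8080/"
DURATION = 10          # seconds
CLIENTS = 32

def worker(deadline):
    count = 0
    while time.perf_counter() < deadline:
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        count += 1
    return count

deadline = time.perf_counter() + DURATION
with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
    totals = list(pool.map(worker, [deadline] * CLIENTS))
print(f"{sum(totals) / DURATION:,.0f} requests per second with {CLIENTS} clients")
```
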

Apache HTTP Server 2.4.56 - Concurrent Requests: 200 (Requests Per Second, more is better): a: 34680.57 / b: 38092.88. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 3 - Compression Speed (MB/s, more is better): a: 2013.1 / b: 2185.0 / c: 2209.8. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
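
The test profile drives RocksDB's own db_bench tool; the sketch below is only a hedged illustration of the read-random-write-random idea using the third-party python-rocksdb bindings (API assumed), not a substitute for db_bench.

```python
# Hedged sketch of a mixed random read/write workload against RocksDB,
# assuming the third-party python-rocksdb bindings.
import random
import time
import rocksdb

db = rocksdb.DB("bench.db", rocksdb.Options(create_if_missing=True))
keys = [f"key{i:08d}".encode() for i in range(100_000)]
for k in keys:                      # pre-populate the database
    db.put(k, b"v" * 100)

ops = 200_000
start = time.perf_counter()
for _ in range(ops):
    k = random.choice(keys)
    if random.random() < 0.5:
        db.put(k, b"w" * 100)
    else:
        db.get(k)
print(f"{ops / (time.perf_counter() - start):,.0f} mixed ops/sec")
```
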

RocksDB 8.0 - Test: Read While Writing (Op/s, more is better): b: 7531197 / a: 8059987 / c: 8192793. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): b: 0.218330 (MIN: 0.19) / a: 0.201235 (MIN: 0.18); SE +/- 0.001478, N = 11. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 705.68 (MIN: 626) / b: 656.80 (MIN: 632.15); SE +/- 9.93, N = 15. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.7 - Test: DNN - Deep Neural Network (ms, fewer is better): a: 115385 / b: 107827. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, with OpenCL / CUDA / OpenMP test cases covering automotive workloads for evaluating programming models in the context of vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite 2021.11.02 - Backend: OpenMP - Kernel: Euclidean Cluster (Test Cases Per Minute, more is better): b: 796.63 / a: 842.32 / c: 848.69. (CXX) g++ options: -O3 -std=c++11 -fopenmp

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.7 - Test: Core (ms, fewer is better): a: 88425 / b: 83121. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences Per Second, more is better): b: 2.93086 / c: 2.93257 / a: 3.11560. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Compression Speed (MB/s, more is better): b: 278.4 / c: 285.6 / a: 295.0. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Zstd Compression 1.5.4 - Compression Level: 8 - Compression Speed (MB/s, more is better): b: 603.0 / a: 624.0 / c: 638.2. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): b: 752.43 (MIN: 723.13) / a: 711.38 (MIN: 680.93); SE +/- 2.91, N = 3. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): b: 0.925478 (MIN: 0.82) / a: 0.875338 (MIN: 0.82); SE +/- 0.008834, N = 6. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): c: 30.01 / b: 31.10 / a: 31.70. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if complementary metrics are desired. Learn more via the OpenBenchmarking.org test page.
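
tf_cnn_benchmarks is the actual harness behind these numbers; the sketch below is only a simplified, hedged illustration of the images/sec metric, running a stock Keras model (ResNet50 as a stand-in, since Keras does not ship AlexNet) on synthetic data on the CPU.

```python
# Simplified throughput (images/sec) measurement with a stock Keras model on
# synthetic data; not tf_cnn_benchmarks itself.
import time
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights=None)   # random weights suffice for throughput
batch = 16
x = np.random.rand(batch, 224, 224, 3).astype(np.float32)

model.predict(x, verbose=0)          # warm-up
steps = 20
start = time.perf_counter()
for _ in range(steps):
    model.predict(x, verbose=0)
print(f"{steps * batch / (time.perf_counter() - start):.1f} images/sec")
```
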

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, more is better): a: 138.07 / c: 145.45 / b: 145.79

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): b: 15.53 / a: 16.15 / c: 16.33. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.6 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): c: 7.99 / a: 8.16 / b: 8.40. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
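
The profile uses OpenSSL's built-in "openssl speed" measurements; the Python sketch below is a single-threaded stand-in that measures SHA256 bytes per second with hashlib, just to illustrate what the byte/s figures express.

```python
# Single-threaded SHA256 throughput with hashlib, as a rough analogue of the
# "openssl speed" SHA256 measurement (not the OpenSSL benchmark itself).
import hashlib
import time

block = b"\0" * (1 << 20)        # 1 MiB blocks
iterations = 2_000

start = time.perf_counter()
h = hashlib.sha256()
for _ in range(iterations):
    h.update(block)
h.digest()
elapsed = time.perf_counter() - start
print(f"{len(block) * iterations / elapsed / 1e9:.2f} GB/s SHA256 (one thread)")
```
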

OpenSSL 3.1 - Algorithm: SHA256 (byte/s, more is better): b: 54810975320 / c: 57348072400 / a: 57611766850. (CC) gcc options: -pthread -m64 -O3 -ldl

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences Per Second, more is better): b: 58.83 / a: 61.79. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): a: 11.36 / c: 11.69 / b: 11.90. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Stress-NG

Stress-NG 0.15.06 - Test: Function Call (Bogo Ops/s, more is better): b: 413303.53 / a: 429335.83 / c: 431183.83. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Read Random Write Random (Op/s, more is better): a: 920426 / c: 954852 / b: 959340. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and uses a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0 - Model: Tomographic Model (Seconds, fewer is better): a: 15.30 / c: 14.96 / b: 14.71. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): a: 15.82 / c: 16.33 / b: 16.43. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.7 - Test: Stitching (ms, fewer is better): a: 508105 / b: 489265. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 453.91 (MIN: 437.57) / b: 437.40 (MIN: 425.41); SE +/- 2.89, N = 3. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.06 - Test: Glibc C String Functions (Bogo Ops/s, more is better): c: 58278938.78 / b: 59352709.91 / a: 60467587.40. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if complementary metrics are desired. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, more is better): c: 227.64 / b: 228.16 / a: 236.08

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): a: 16.04 / b: 16.56 / c: 16.63. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and uses a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0 - Model: Water-layered Halfspace (Seconds, fewer is better): a: 31.47 / c: 30.96 / b: 30.37. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second, more is better): c: 5.02 / b: 5.04 / a: 5.20. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Stress-NG

Stress-NG 0.15.06 - Test: CPU Stress (Bogo Ops/s, more is better): b: 152455.93 / a: 152655.29 / c: 157803.19. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.15.06 - Test: Semaphores (Bogo Ops/s, more is better): a: 11840389.35 / c: 12232435.50 / b: 12234144.91. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): b: 15.58 / a: 15.97 / c: 16.07. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Stress-NG

Stress-NG 0.15.06 - Test: Pthread (Bogo Ops/s, more is better): b: 87532.40 / a: 87858.71 / c: 90272.05. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): a: 30.04 / b: 30.21 / c: 30.98. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences Per Second, more is better): b: 13.03 / a: 13.25 / c: 13.43. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): b: 27.93 / c: 28.61 / a: 28.78. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.6 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): a: 0.33 / b: 0.34 / c: 0.34. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, more is better): c: 1129.3 / a: 1161.6 / b: 1163.4. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
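
The profile runs the vbench scenarios; the hedged sketch below just times a single libx264 transcode with the ffmpeg CLI to show where an FPS figure of this kind comes from. It assumes ffmpeg (built with libx264) is on PATH and that input.mp4 is a local clip of your own choosing.

```python
# Time one libx264 transcode with the ffmpeg CLI and derive an FPS figure from
# the frame counter ffmpeg prints to stderr. input.mp4 is an assumed local file.
import re
import subprocess
import time

cmd = ["ffmpeg", "-y", "-i", "input.mp4",
       "-c:v", "libx264", "-preset", "medium",
       "-f", "null", "-"]

start = time.perf_counter()
proc = subprocess.run(cmd, capture_output=True, text=True)
elapsed = time.perf_counter() - start

frames = re.findall(r"frame=\s*(\d+)", proc.stderr)
if frames:
    print(f"{int(frames[-1]) / elapsed:.2f} FPS over {elapsed:.1f} s")
```
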

FFmpeg 6.0 - Encoder: libx264 - Scenario: Video On Demand (FPS, more is better): b: 35.16 / c: 35.33 / a: 36.21. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx264 - Scenario: Video On Demand (Seconds, fewer is better): b: 215.44 / c: 214.38 / a: 209.22. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): b: 2.16346 (MIN: 2.03) / a: 2.10120 (MIN: 2.03); SE +/- 0.02056, N = 3. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

VVenC

VVenC 1.7 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, more is better): c: 12.98 / b: 13.10 / a: 13.35. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, more is better): b: 607.24 / c: 624.35 / a: 624.46. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

SVT-AV1

SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 116.89 / b: 119.69 / c: 120.06. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.15 - Time To Compile (Seconds, fewer is better): b: 94.97 / a: 93.00 / c: 92.48

Stress-NG

Stress-NG 0.15.06 - Test: Crypto (Bogo Ops/s, more is better): a: 105894.93 / c: 107612.44 / b: 108459.26. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 19.8.1 - Time To Compile (Seconds, fewer is better): c: 207.61 / a: 203.68 / b: 202.83

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.0 - Encoder: libx265 - Scenario: Live (FPS, more is better): a: 33.78 / c: 34.24 / b: 34.57. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx265 - Scenario: Live (Seconds, fewer is better): a: 149.48 / c: 147.51 / b: 146.07. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences Per Second, more is better): c: 31.93 / a: 32.68 / b: 32.68. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

SVT-AV1

SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better): c: 37.33 / a: 37.74 / b: 38.18. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.0 - Encoder: libx264 - Scenario: Live (FPS, more is better): b: 130.09 / c: 130.88 / a: 133.05. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx264 - Scenario: Live (Seconds, fewer is better): b: 38.82 / c: 38.58 / a: 37.96. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): c: 15.96 / a: 16.13 / b: 16.31. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

VVenC

VVenC 1.7 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, more is better): c: 3.191 / b: 3.221 / a: 3.261. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 12 - Decompression Speed (MB/s, more is better): c: 1072.1 / b: 1075.5 / a: 1095.6. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if complementary metrics are desired. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, more is better): b: 174.81 / c: 175.31 / a: 178.52

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 19 - Compression Speed (MB/s, more is better): b: 14.3 / c: 14.4 / a: 14.6. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and uses a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0 - Model: Mount St. Helens (Seconds, fewer is better): a: 13.74 / c: 13.57 / b: 13.46. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SVT-AV1

SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better): c: 2.327 / b: 2.336 / a: 2.375. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): a: 183.30 / c: 184.88 / b: 186.94. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Stress-NG

Stress-NG 0.15.06 - Test: Hash (Bogo Ops/s, more is better): a: 12949136.28 / c: 13066446.38 / b: 13204487.51. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 5 - Input: Bosphorus 1080p (Frames Per Second, more is better): c: 11.31 / a: 11.42 / b: 11.53. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Stress-NG

Stress-NG 0.15.06 - Test: SENDFILE (Bogo Ops/s, more is better): a: 1143575.46 / b: 1162795.27 / c: 1165355.20. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 19 - Decompression Speed (MB/s, more is better): c: 907.0 / a: 907.7 / b: 924.0. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Random Read (Op/s, more is better): b: 268195000 / a: 273039094 / c: 273197200. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.1 - Algorithm: SHA512 (byte/s, more is better): c: 22228899690 / a: 22617583540 / b: 22641175270. (CC) gcc options: -pthread -m64 -O3 -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better): a: 455.60 (MIN: 430.98), b: 447.36 (MIN: 435.41); SE +/- 6.28, N = 3. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Random Fill Sync (Op/s, More Is Better): a: 73440, c: 74573, b: 74771. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): b: 91.76, a: 92.43, c: 93.32. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): b: 0.379540 (MIN: 0.33), a: 0.373237 (MIN: 0.33); SE +/- 0.003364, N = 3. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Stress-NG

Stress-NG 0.15.06 - Test: Matrix Math (Bogo Ops/s, More Is Better): a: 335705.12, b: 341137.35, c: 341249.75. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
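
For orientation, a minimal transcode of the kind these scenarios time, driven from Python; the input file and preset are placeholders rather than the exact vbench settings:

    import subprocess

    # Transcode to H.264 with libx264, discarding the output via the null muxer
    # so that only encode throughput is being measured.
    subprocess.run([
        "ffmpeg", "-i", "input.mkv",
        "-c:v", "libx264", "-preset", "medium",
        "-f", "null", "-",
    ], check=True)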

FFmpeg 6.0 - Encoder: libx264 - Scenario: Platform (Seconds, Fewer Is Better): a: 212.62, b: 212.54, c: 209.19. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx264 - Scenario: Upload (FPS, More Is Better): c: 9.81, a: 9.91, b: 9.97. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx264 - Scenario: Platform (FPS, More Is Better): a: 35.63, b: 35.64, c: 36.21. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): b: 4.93, c: 4.95, a: 5.01. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): b: 216.36, a: 219.83. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.
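
A hedged sketch of the kind of SCons build being timed, run from a Godot source checkout; the platform and target names follow Godot 4.0 conventions and are assumptions, not values taken from this result file:

    import os, subprocess, time

    # Build the Godot editor for Linux/X11 using all available cores.
    start = time.time()
    subprocess.run(["scons", "platform=linuxbsd", "target=editor",
                    f"-j{os.cpu_count()}"], check=True, cwd="godot")
    print("compile time:", time.time() - start, "seconds")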

Timed Godot Game Engine Compilation 4.0 - Time To Compile (Seconds, Fewer Is Better): b: 148.85, a: 148.77, c: 146.51.

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.0 - Encoder: libx264 - Scenario: Upload (Seconds, Fewer Is Better): c: 257.31, a: 254.90, b: 253.32. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): b: 21.95, a: 22.29. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 3, Long Mode - Decompression Speed (MB/s, More Is Better): b: 1172.3, c: 1189.7, a: 1190.2. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
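
Not the tf_cnn_benchmarks harness itself, but a minimal Python sketch of an images/sec measurement in the same spirit, using a stock Keras ResNet-50 and random data:

    import time
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.ResNet50(weights=None)   # untrained, CPU inference
    batch = np.random.rand(16, 224, 224, 3).astype("float32")
    model.predict(batch, verbose=0)                         # warm-up

    steps = 10
    start = time.time()
    for _ in range(steps):
        model.predict(batch, verbose=0)
    print("images/sec:", 16 * steps / (time.time() - start))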

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, More Is Better): c: 72.70, b: 72.81, a: 73.81.

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): b: 187.42, a: 190.26. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

VVenC

VVenC 1.7 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, More Is Better): a: 7.998, b: 8.014, c: 8.119. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better): b: 22.99, c: 23.23, a: 23.33.

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
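
A minimal sketch of the load-generation side described above; the URL, port, and thread count are placeholders for however the local Nginx instance happens to be configured:

    import subprocess

    # 500 concurrent connections for 30 seconds against the local HTTPS server.
    subprocess.run(["wrk", "-t", "16", "-c", "500", "-d", "30s",
                    "https://localhost:8089/"], check=True)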

nginx 1.23.2 - Connections: 500 (Requests Per Second, More Is Better): a: 222945.56, c: 225697.84, b: 226231.19. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1 - Video Input: Summer Nature 4K (FPS, More Is Better): b: 65.21, c: 65.70, a: 66.17. (CC) gcc options: -pthread

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution of SPECFEM3D, using a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.
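
A heavily hedged sketch of how such a model is typically run from a SPECFEM3D build: the binary name and path follow upstream SPECFEM3D conventions, the rank count is illustrative, and the mesher/database-generation steps are assumed to have been run beforehand.

    import subprocess

    # Run the solver across MPI ranks once the mesh and databases exist.
    subprocess.run(["mpirun", "-np", "80", "./bin/xspecfem3D"], check=True)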

SPECFEM3D 4.0 - Model: Homogeneous Halfspace (Seconds, Fewer Is Better): c: 18.29, b: 18.18, a: 18.03. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 29.39, c: 29.56, b: 29.81. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): c: 10.06, b: 10.14, a: 10.19. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Embree

Embree 4.0.1 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better): a: 103.14 (MIN: 101.66 / MAX: 108.37), c: 104.28 (MIN: 102.35 / MAX: 108.15), b: 104.48 (MIN: 103 / MAX: 108.93).

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better): a: 917.2, c: 920.9, b: 928.7. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1 - Video Input: Chimera 1080p (FPS, More Is Better): b: 133.01, c: 134.10, a: 134.66. (CC) gcc options: -pthread

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 0 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): b: 5.66, a: 5.71, c: 5.73. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, More Is Better): a: 100.57, c: 101.18, b: 101.79.

Stress-NG

Stress-NG 0.15.06 - Test: Forking (Bogo Ops/s, More Is Better): c: 65696.65, a: 65824.56, b: 66467.95. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.15.06 - Test: NUMA (Bogo Ops/s, More Is Better): b: 432.40, c: 433.52, a: 437.26. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): b: 471.80 (MIN: 461.64), a: 466.73 (MIN: 406.85); SE +/- 6.11, N = 15. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec, More Is Better): b: 513.22, c: 514.40, a: 518.73.

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 9.37499, c: 9.40220, b: 9.47539. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 699.96, c: 705.80, b: 707.31. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.0 - Encoder: libx265 - Scenario: Upload (FPS, More Is Better): a: 8.62, b: 8.66, c: 8.71. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, More Is Better): b: 60.09, a: 60.21, c: 60.71.

Stress-NG

Stress-NG 0.15.06 - Test: MMAP (Bogo Ops/s, More Is Better): c: 3092.95, b: 3115.23, a: 3124.36. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.0 - Encoder: libx265 - Scenario: Upload (Seconds, Fewer Is Better): a: 292.80, b: 291.57, c: 289.92. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 1.41955 (MIN: 1.24), b: 1.40563 (MIN: 1.16); SE +/- 0.00845, N = 3. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.56 - Concurrent Requests: 500 (Requests Per Second, More Is Better): b: 37076.95, a: 37442.60. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1 - Video Input: Summer Nature 1080p (FPS, More Is Better): c: 95.86, b: 96.07, a: 96.80. (CC) gcc options: -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better): b: 15.36, a: 15.39, c: 15.51. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.
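
For reference, the sort of command-line round trip being timed, sketched via Python; the model path and compression level are illustrative only:

    import subprocess

    # Compress a PLY mesh to .drc and decode it back with Draco's bundled tools.
    subprocess.run(["draco_encoder", "-i", "lion.ply", "-o", "lion.drc", "-cl", "10"], check=True)
    subprocess.run(["draco_decoder", "-i", "lion.drc", "-o", "lion_decoded.ply"], check=True)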

Google Draco 1.5.6 - Model: Church Facade (ms, Fewer Is Better): c: 6767, a: 6765, b: 6702. (CXX) g++ options: -O3

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.
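
A minimal sketch of the two build-system configurations compared here, assuming an LLVM source checkout in ./llvm-project (path and build directories are placeholders):

    import os, subprocess

    jobs = str(os.cpu_count())
    # Ninja generator (the "Ninja" result) ...
    subprocess.run(["cmake", "-S", "llvm-project/llvm", "-B", "build-ninja",
                    "-G", "Ninja", "-DCMAKE_BUILD_TYPE=Release"], check=True)
    subprocess.run(["ninja", "-C", "build-ninja", "-j", jobs], check=True)
    # ... versus the classic Unix Makefiles generator.
    subprocess.run(["cmake", "-S", "llvm-project/llvm", "-B", "build-make",
                    "-G", "Unix Makefiles", "-DCMAKE_BUILD_TYPE=Release"], check=True)
    subprocess.run(["make", "-C", "build-make", f"-j{jobs}"], check=True)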

Timed LLVM Compilation 16.0 - Build System: Unix Makefiles (Seconds, Fewer Is Better): c: 288.95, b: 286.24, a: 286.22.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): a: 1.17192 (MIN: 0.97), b: 1.16101 (MIN: 0.98); SE +/- 0.00262, N = 3. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.
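
An illustrative Python wrapper around the kind of headless render being timed; the .blend file is a placeholder, and the --cycles-device argument follows common Blender command-line usage rather than anything stated in this result file:

    import subprocess

    # Render frame 1 of a scene in background mode on the CPU with Cycles.
    subprocess.run(["blender", "-b", "bmw27.blend", "-f", "1",
                    "--", "--cycles-device", "CPU"], check=True)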

Blender 3.5 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better): c: 24.42, b: 24.36, a: 24.20.

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.
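
A rough sketch of a mysqlslap run at a given client count; the credentials, iteration count, and generated workload are placeholders, not the test profile's actual settings:

    import subprocess

    # 512 concurrent simulated clients against a local MariaDB server.
    subprocess.run(["mysqlslap", "--user=benchmark", "--password=benchmark",
                    "--concurrency=512", "--iterations=3",
                    "--auto-generate-sql"], check=True)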

MariaDB 11.0.1 - Clients: 1024 (Queries Per Second, More Is Better): a: 110, b: 110, c: 111. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

MariaDB 11.0.1 - Clients: 2048 (Queries Per Second, More Is Better): a: 111, c: 111, b: 112. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.1 - Algorithm: AES-128-GCM (byte/s, More Is Better): a: 799202790060, b: 802646430240, c: 806241823680. (CC) gcc options: -pthread -m64 -O3 -ldl

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.

MariaDB 11.0.1 - Clients: 4096 (Queries Per Second, More Is Better): c: 114, a: 115, b: 115. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

Embree

Embree 4.0.1 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better): a: 71.27 (MIN: 67.7 / MAX: 79.85), b: 71.83 (MIN: 68.18 / MAX: 79.7), c: 71.89 (MIN: 67.5 / MAX: 80.61).

Embree 4.0.1 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better): c: 87.85 (MIN: 84.35 / MAX: 92.29), a: 88.00 (MIN: 84.5 / MAX: 93.11), b: 88.57 (MIN: 84.55 / MAX: 93.3).

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): b: 8.79, a: 8.82, c: 8.86. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

nginx 1.23.2 - Connections: 200 (Requests Per Second, More Is Better): b: 244509.87, a: 245504.32, c: 246434.82. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better): b: 3.93436 (MIN: 3.68), a: 3.90427 (MIN: 3.68); SE +/- 0.00278, N = 3. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.5.6 - Model: Lion (ms, Fewer Is Better): c: 5521, a: 5502, b: 5479. (CXX) g++ options: -O3

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
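
A hedged example of running the solver step directly with the gmx driver, assuming a prepared water_GMX50_bare.tpr run input (the file name is inferred from the input label above; thread and step counts are illustrative only):

    import subprocess

    # Short CPU run of the water_GMX50 system; gmx reports ns/day on completion.
    subprocess.run(["gmx", "mdrun", "-s", "water_GMX50_bare.tpr",
                    "-nsteps", "1000", "-ntomp", "8"], check=True)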

GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better): b: 9.199, c: 9.259, a: 9.268. (CXX) g++ options: -O3

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 512 - Model: ResNet-50 (images/sec, More Is Better): b: 71.21, a: 71.38, c: 71.74.

TensorFlow 2.12 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, More Is Better): b: 38.63, c: 38.71, a: 38.90.

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 2.88, c: 2.88, b: 2.90. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Stress-NG

Stress-NG 0.15.06 - Test: Poll (Bogo Ops/s, More Is Better): c: 9500903.35, a: 9551392.98, b: 9566416.75. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.5 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better): b: 32.29, a: 32.22, c: 32.07.

Blender 3.5 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better): c: 63.14, a: 63.07, b: 62.71.

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.0 - Encoder: libx265 - Scenario: Video On Demand (Seconds, Fewer Is Better): c: 451.26, b: 448.99, a: 448.21. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, More Is Better): b: 407.51, a: 408.72, c: 410.27.

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.0 - Encoder: libx265 - Scenario: Video On Demand (FPS, More Is Better): c: 16.79, b: 16.87, a: 16.90. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Embree

Embree 4.0.1 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, More Is Better): a: 76.17 (MIN: 73.69 / MAX: 79.67), c: 76.41 (MIN: 74.48 / MAX: 78.97), b: 76.66 (MIN: 74.78 / MAX: 80.24).

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, More Is Better): b: 30.80, a: 30.87, c: 31.00.

Stress-NG

Stress-NG 0.15.06 - Test: Mutex (Bogo Ops/s, More Is Better): c: 36276764.04, a: 36280764.76, b: 36505304.60. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

VVenC

VVenC 1.7 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better): b: 4.998, c: 5.012, a: 5.029. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0 - Build System: Ninja (Seconds, Fewer Is Better): b: 181.17, a: 180.77, c: 180.06.

Stress-NG

Stress-NG 0.15.06 - Test: Malloc (Bogo Ops/s, More Is Better): a: 190168476.52, c: 190376822.58, b: 191206280.20. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better): c: 1126.3, a: 1128.4, b: 1132.3. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.5 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better): c: 244.89, a: 244.20, b: 243.71.

Blender 3.5 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better): b: 77.29, c: 77.15, a: 76.92.

Embree

Embree 4.0.1 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better): c: 88.89 (MIN: 87.09 / MAX: 93.32), a: 89.18 (MIN: 87.45 / MAX: 91.82), b: 89.29 (MIN: 87.48 / MAX: 93.27).

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better): b: 130.12, c: 130.27, a: 130.69. (CC) gcc options: -pthread

SVT-AV1

SVT-AV1 1.4 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 97.16, b: 97.19, c: 97.57. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution of SPECFEM3D, using a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0 - Model: Layered Halfspace (Seconds, Fewer Is Better): c: 29.65, a: 29.64, b: 29.52. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): a: 196.25, b: 197.02. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, More Is Better): b: 213.49, c: 213.69, a: 214.30.

Embree

Embree 4.0.1 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better): b: 84.28 (MIN: 81.06 / MAX: 88.82), a: 84.35 (MIN: 81.19 / MAX: 88.18), c: 84.59 (MIN: 82.73 / MAX: 90.07).

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and the option of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.0 - Encoder: libx265 - Scenario: Platform (Seconds, Fewer Is Better): c: 449.86, a: 448.63, b: 448.26. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg 6.0 - Encoder: libx265 - Scenario: Platform (FPS, More Is Better): c: 16.84, a: 16.88, b: 16.90. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.1 - Algorithm: AES-256-GCM (byte/s, More Is Better): a: 709965487310, c: 710407167180, b: 712463205990. (CC) gcc options: -pthread -m64 -O3 -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, More Is Better): c: 131.98, b: 132.26, a: 132.43.

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 6.0 - Time To Compile (Seconds, Fewer Is Better): b: 17.87, c: 17.84, a: 17.81.

Stress-NG

Stress-NG 0.15.06 - Test: Memory Copying (Bogo Ops/s, More Is Better): c: 10908.20, b: 10922.24, a: 10941.37. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, More Is Better): a: 257.44, c: 258.13, b: 258.17.

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Random Fill (Op/s, More Is Better): c: 104086, a: 104159, b: 104356. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Stress-NG

Stress-NG 0.15.06 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, More Is Better): b: 1546.69, c: 1547.20, a: 1550.53. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.1 - Algorithm: RSA4096 (sign/s, More Is Better): b: 36899.1, a: 36965.0, c: 36977.5. (CC) gcc options: -pthread -m64 -O3 -ldl

OpenSSL 3.1 - Algorithm: RSA4096 (verify/s, More Is Better): b: 1186897.4, c: 1188772.4, a: 1189343.7. (CC) gcc options: -pthread -m64 -O3 -ldl

Stress-NG

Stress-NG 0.15.06 - Test: IO_uring (Bogo Ops/s, More Is Better): a: 26355.62, b: 26364.63, c: 26405.01. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.1 - Algorithm: ChaCha20-Poly1305 (byte/s, More Is Better): a: 293957452870, c: 294001151180, b: 294463651570. (CC) gcc options: -pthread -m64 -O3 -ldl

Stress-NG

Stress-NG 0.15.06 - Test: Vector Math (Bogo Ops/s, More Is Better): c: 306225.09, b: 306299.59, a: 306730.85. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Sequential Fill (Op/s, More Is Better): a: 106061, b: 106147, c: 106227. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB 8.0 - Test: Update Random (Op/s, More Is Better): c: 97722, a: 97772, b: 97861. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.1 - Algorithm: ChaCha20 (byte/s, More Is Better): a: 425091630970, c: 425195437530, b: 425550559910. (CC) gcc options: -pthread -m64 -O3 -ldl

Stress-NG

Stress-NG 0.15.06 - Test: Context Switching (Bogo Ops/s, More Is Better): b: 3508321.29, a: 3511158.35, c: 3511599.94. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.15.06 - Test: x86_64 RdRand (Bogo Ops/s, More Is Better): b: 658461.44, c: 658542.68, a: 658591.72. (CC) gcc options: -std=gnu99 -O2 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.

MariaDB 11.0.1 - Clients: 512 (Queries Per Second, More Is Better): a: 110, b: 110, c: 110. (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 0.73, b: 0.73, c: 0.73. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): b: 737.56 (MIN: 712.91), a: 708.53 (MIN: 616.19); SE +/- 13.20, N = 13. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): b: 39.80 (MIN: 26.16), a: 35.46 (MIN: 13.46); SE +/- 1.54, N = 15. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better): a: 2.30887 (MIN: 1.77), b: 2.21121 (MIN: 1.77); SE +/- 0.04986, N = 15. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better): a: 18.01146 (MIN: 3.07), b: 5.37587 (MIN: 3.86); SE +/- 10.58609, N = 12. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): a: 30.271432, b: 0.668506; SE +/- 8.531398, N = 12; MIN: 0.45. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): b: 10.05550 (MIN: 3.3), a: 2.56877 (MIN: 1.1); SE +/- 0.35622, N = 15. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 2.03996 (MIN: 1.44), b: 1.80895 (MIN: 1.63); SE +/- 0.20405, N = 15. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): b: 1.65880 (MIN: 1.3), a: 1.46571 (MIN: 1.06); SE +/- 0.05971, N = 15. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Concurrent Requests: 1000

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

Concurrent Requests: 100

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

Connections: 1000

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

c: The test quit with a non-zero exit status.

Connections: 100

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

c: The test quit with a non-zero exit status.

209 Results Shown

John The Ripper
Stress-NG
John The Ripper
Stress-NG
John The Ripper:
  Blowfish
  WPA PSK
Stress-NG
John The Ripper
Zstd Compression
OpenCV
Stress-NG
Memcached:
  1:100
  1:10
OpenCV
Zstd Compression
ONNX Runtime
Memcached
Stress-NG:
  CPU Cache
  Zlib
Darmstadt Automotive Parallel Heterogeneous Suite
OpenCV
oneDNN
Zstd Compression
Darmstadt Automotive Parallel Heterogeneous Suite
OpenCV
Zstd Compression
Stress-NG
ONNX Runtime
OpenCV
Apache HTTP Server
Zstd Compression
RocksDB
oneDNN:
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
OpenCV
Darmstadt Automotive Parallel Heterogeneous Suite
OpenCV
ONNX Runtime
Zstd Compression:
  8, Long Mode - Compression Speed
  8 - Compression Speed
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
AOM AV1
TensorFlow
AOM AV1:
  Speed 6 Realtime - Bosphorus 4K
  Speed 6 Two-Pass - Bosphorus 4K
OpenSSL
ONNX Runtime:
  super-resolution-10 - CPU - Parallel
  yolov4 - CPU - Standard
Stress-NG
RocksDB
SPECFEM3D
AOM AV1
OpenCV
oneDNN
Stress-NG
TensorFlow
AOM AV1
SPECFEM3D
VP9 libvpx Encoding
Stress-NG:
  CPU Stress
  Semaphores
AOM AV1
Stress-NG
AOM AV1
ONNX Runtime
AOM AV1:
  Speed 6 Realtime - Bosphorus 1080p
  Speed 0 Two-Pass - Bosphorus 4K
Zstd Compression
FFmpeg:
  libx264 - Video On Demand:
    FPS
    Seconds
oneDNN
VVenC
ONNX Runtime
SVT-AV1
Build2
Stress-NG
Timed Node.js Compilation
FFmpeg:
  libx265 - Live:
    FPS
    Seconds
ONNX Runtime
SVT-AV1
FFmpeg:
  libx264 - Live:
    FPS
    Seconds
ONNX Runtime
VVenC
Zstd Compression
TensorFlow
Zstd Compression
SPECFEM3D
SVT-AV1
ONNX Runtime
Stress-NG
VP9 libvpx Encoding
Stress-NG
Zstd Compression
RocksDB
OpenSSL
oneDNN
RocksDB
ONNX Runtime
oneDNN
Stress-NG
FFmpeg:
  libx264 - Platform
  libx264 - Upload
  libx264 - Platform
AOM AV1
ONNX Runtime
Timed Godot Game Engine Compilation
FFmpeg
ONNX Runtime
Zstd Compression
TensorFlow
ONNX Runtime
VVenC
TensorFlow
nginx
dav1d
SPECFEM3D
AOM AV1
ONNX Runtime
Embree
Zstd Compression
dav1d
VP9 libvpx Encoding
TensorFlow
Stress-NG:
  Forking
  NUMA
oneDNN
TensorFlow
ONNX Runtime:
  fcn-resnet101-11 - CPU - Standard
  CaffeNet 12-int8 - CPU - Standard
FFmpeg
TensorFlow
Stress-NG
FFmpeg
oneDNN
Apache HTTP Server
dav1d
AOM AV1
Google Draco
Timed LLVM Compilation
oneDNN
Blender
MariaDB:
  1024
  2048
OpenSSL
MariaDB
Embree:
  Pathtracer - Crown
  Pathtracer ISPC - Crown
AOM AV1
nginx
oneDNN
Google Draco
GROMACS
TensorFlow:
  CPU - 512 - ResNet-50
  CPU - 64 - ResNet-50
VP9 libvpx Encoding
Stress-NG
Blender:
  Fishy Cat - CPU-Only
  Classroom - CPU-Only
FFmpeg
TensorFlow
FFmpeg
Embree
TensorFlow
Stress-NG
VVenC
Timed LLVM Compilation
Stress-NG
Zstd Compression
Blender:
  Barbershop - CPU-Only
  Pabellon Barcelona - CPU-Only
Embree
dav1d
SVT-AV1
SPECFEM3D
ONNX Runtime
TensorFlow
Embree
FFmpeg:
  libx265 - Platform:
    Seconds
    FPS
OpenSSL
TensorFlow
Timed FFmpeg Compilation
Stress-NG
TensorFlow
RocksDB
Stress-NG
OpenSSL:
  RSA4096:
    sign/s
    verify/s
Stress-NG
OpenSSL
Stress-NG
RocksDB:
  Seq Fill
  Update Rand
OpenSSL
Stress-NG:
  Context Switching
  x86_64 RdRand
MariaDB
AOM AV1
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
  IP Shapes 3D - bf16bf16bf16 - CPU
  IP Shapes 1D - bf16bf16bf16 - CPU
  IP Shapes 3D - u8s8f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
  IP Shapes 3D - f32 - CPU
  IP Shapes 1D - f32 - CPU