Cascade Lake Summer 2022

2 x Intel Xeon Platinum 8280 tested with a GIGABYTE MD61-SC2-00 v01000100 (T15 BIOS) motherboard and llvmpipe graphics on Ubuntu 21.04, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2206050-NE-CASCADELA50
This result file spans the following test categories: AV1 (3 tests), C/C++ Compiler Tests (9 tests), CPU Massive (10 tests), Creator Workloads (10 tests), Encoding (6 tests), Go Language Tests (2 tests), HPC - High Performance Computing (4 tests), Java (2 tests), Common Kernel Benchmarks (4 tests), Machine Learning (3 tests), Multi-Core (11 tests), Intel oneAPI (3 tests), Programmer / Developer System Benchmarks (2 tests), Raytracing (2 tests), Renderers (2 tests), Server (4 tests), Server CPU Tests (8 tests), Single-Threaded (2 tests), and Video Encoding (6 tests).

Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
A
June 04 2022
  4 Hours, 3 Minutes
B
June 05 2022
  4 Hours, 2 Minutes
C
June 05 2022
  4 Hours, 2 Minutes
Invert Hiding All Results Option
  4 Hours, 2 Minutes



Cascade Lake Summer 2022 Benchmarks - System Details
Processor: 2 x Intel Xeon Platinum 8280 @ 4.00GHz (56 Cores / 112 Threads)
Motherboard: GIGABYTE MD61-SC2-00 v01000100 (T15 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 384GB
Disk: 280GB INTEL SSDPED1D280GA
Graphics: llvmpipe
Monitor: VE228
Network: 2 x Intel X722 for 1GbE + 2 x QLogic FastLinQ QL41000 10/25/40/50GbE
OS: Ubuntu 21.04
Kernel: 5.11.0-40-generic (x86_64)
Desktop: GNOME Shell 3.38.4
Display Server: X Server + Wayland
OpenGL: 4.5 Mesa 21.0.1 (LLVM 11.0.1 256 bits)
Compiler: GCC 10.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs / Notes:
- Transparent Huge Pages: madvise
- GCC configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0x5003102
- OpenJDK Runtime Environment (build 11.0.13+8-Ubuntu-0ubuntu1.21.04)
- Python 3.9.5
- Security: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled

Result Overview (relative performance of runs A/B/C, 100-112% scale; chart omitted) covering: ONNX Runtime, Apache HTTP Server, x264, Stress-NG, Nettle, libavif avifenc, Java JMH, perf-bench, AOM AV1, Etcpak, simdjson, SVT-VP9, Facebook RocksDB, nginx, Timed MPlayer Compilation, SVT-HEVC, GROMACS, Glibc Benchmarks, OSPray, oneDNN, SVT-AV1, Renaissance, TensorFlow Lite, OSPray Studio.

The condensed side-by-side results table for runs A, B, and C is not reproduced here; the individual test results follow on a per-test basis below.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: CPU Cache (Bogo Ops/s, more is better): A: 26.30, B: 15.34, C: 21.03
Compiler flags: (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better): A: 6359, B: 10331, C: 10378
Compiler flags: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -pthread -lpthread

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better): A: 376, B: 256, C: 352
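
For reference, below is a minimal sketch of driving an ONNX model through ONNX Runtime's Python API on the CPU execution provider. The model filename and the random-input handling are illustrative assumptions; they are not the exact invocation used by this test profile.

```python
# Minimal ONNX Runtime CPU inference sketch (model path is a placeholder).
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("super_resolution.onnx",
                            providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
# Replace symbolic/unknown dimensions with 1 so a dummy tensor can be built.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
x = np.random.rand(*shape).astype(np.float32)
outputs = sess.run(None, {inp.name: x})
print(outputs[0].shape)
```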

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48 - Concurrent Requests: 20 (Requests Per Second, more is better): A: 24165.82, B: 23084.76, C: 18658.18
Compiler flags: (CC) gcc options: -shared -fPIC -O2 -pthread
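
As a rough illustration of the benchmark's shape (a fixed time window with a configurable number of concurrent clients), here is a small Python sketch. It is not the Golang "Bombardier" client itself, and the URL, concurrency, and duration are placeholder values.

```python
# Conceptual load-generation sketch: N concurrent workers issue requests
# against a local web server for a fixed duration, then report requests/sec.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:80/"   # placeholder target
CONCURRENCY = 20               # analogous to the "Concurrent Requests" option
DURATION = 10.0                # seconds

def worker(deadline):
    count = 0
    while time.time() < deadline:
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        count += 1
    return count

deadline = time.time() + DURATION
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    totals = list(pool.map(worker, [deadline] * CONCURRENCY))
print(f"{sum(totals) / DURATION:.1f} requests/sec")
```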

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better): A: 691, B: 824, C: 823

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Epoll Wait (ops/sec, more is better): A: 4782, B: 4015, C: 4683
Compiler flags: (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -O6 -ggdb3 -funwind-tables -std=gnu99 -lnuma

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48 - Concurrent Requests: 200 (Requests Per Second, more is better): A: 126245.90, B: 137891.33, C: 117695.80

Apache HTTP Server 2.4.48 - Concurrent Requests: 1000 (Requests Per Second, more is better): A: 105438.13, C: 121473.73

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Scala Dotty (ms, fewer is better): A: 1108.2 (min: 823.35 / max: 1501.19), B: 992.8 (min: 823.44 / max: 1525.49), C: 1127.7 (min: 851.96 / max: 1499.37)

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 36.47, B: 34.33, C: 38.97
Compiler flags: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better): A: 10161, B: 10159, C: 11218

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 54.95, B: 49.99, C: 50.96

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 869.42 (min: 851.99), B: 864.65 (min: 848.3), C: 943.92 (min: 841.53)
Compiler flags: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): A: 5.73, B: 5.79, C: 6.22

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Atomic (Bogo Ops/s, more is better): A: 148748.40, B: 137381.50, C: 140720.96

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, fewer is better): A: 13994.2 (max: 28224.11), B: 13313.1 (min: 13313.09 / max: 21731.4), C: 12964.8 (min: 12964.76 / max: 20215.75)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 275.30, B: 281.70, C: 262.04
Compiler flags: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 1080p (Frames Per Second, more is better): A: 116.81, B: 123.32, C: 125.16
Compiler flags: (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): A: 5.57, B: 5.66, C: 5.31

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 208.91, B: 207.79, C: 220.70

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K (Frames Per Second, more is better): A: 111.31, B: 104.80, C: 109.89

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: In-Memory Database Shootout (ms, fewer is better): A: 11794.7 (min: 11669.17 / max: 14233.25), B: 12498.3 (min: 12420.19 / max: 14927.64), C: 11920.7 (min: 11638.75 / max: 14335.64)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K (Frames Per Second, more is better): A: 108.16, B: 114.24, C: 114.38

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Futex (Bogo Ops/s, more is better): A: 908797.50, B: 948927.76, C: 958464.66

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6 (Seconds, fewer is better): A: 6.249, B: 5.949, C: 6.074
Compiler flags: (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.10 - Encoder Speed: 2 (Seconds, fewer is better): A: 55.61, B: 54.77, C: 52.96

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark ALS (ms, fewer is better): A: 84016.6 (min: 80662.94 / max: 86134.44), B: 86541.0 (min: 84631.44 / max: 87601.1), C: 82422.8 (min: 75074.38 / max: 86258.18)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 257.91, B: 268.40, C: 270.70

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest (ms, fewer is better): A: 1166.1 (min: 1051.96 / max: 1351.88), B: 1217.5 (min: 1052.07 / max: 1444.36), C: 1223.1 (min: 1084.03 / max: 1419.24)

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Memcpy 1MB (GB/sec, more is better): A: 16.83, B: 17.39, C: 16.59

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, fewer is better): A: 6808.3 (min: 6198.77), B: 7137.8 (min: 6576.64 / max: 7137.82), C: 7106.4 (min: 6637.43)

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): A: 24.91, B: 23.78, C: 24.70

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2 (Microseconds, fewer is better): A: 47840.8, B: 50114.1, C: 48762.5
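
For context, here is a minimal Python sketch of timing a single TensorFlow Lite CPU inference. The model path is a placeholder assumption, and the numbers above come from the test profile's own harness rather than a script like this.

```python
# Time one TensorFlow Lite inference on the CPU (model path is a placeholder).
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# Random input cast to the tensor's dtype; only shape/type correctness matters here.
x = (np.random.rand(*inp["shape"]) * 255).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], x)

start = time.perf_counter()
interpreter.invoke()
print(f"inference time: {(time.perf_counter() - start) * 1e6:.0f} microseconds")
```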

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K (Frames Per Second, more is better): A: 101.74, B: 97.39, C: 97.76

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Sched Pipe (ops/sec, more is better): A: 149542, B: 156089, C: 150205

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 8.90, B: 9.13, C: 8.75

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, fewer is better): A: 4833.58, B: 4632.48, C: 4770.39

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: NUMA (Bogo Ops/s, more is better): A: 583.66, B: 607.35, C: 592.78

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 0.343941 (min: 0.33), B: 0.349142 (min: 0.33), C: 0.336000 (min: 0.32)

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Memset 1MB (GB/sec, more is better): A: 55.72, B: 56.22, C: 54.11

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48 - Concurrent Requests: 100 (Requests Per Second, more is better): A: 97283.80, B: 95724.11, C: 99205.02

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 94.09, B: 94.55, C: 97.42
Compiler flags: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): A: 21.45, B: 21.66, C: 22.20

AOM AV1 3.3 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 6.04, B: 5.84, C: 5.99

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 458.42 (min: 444.49), B: 455.13 (min: 438.74), C: 470.31 (min: 441.87)

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Futex Lock-Pi (ops/sec, more is better): A: 62, B: 64, C: 63

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures (ms, fewer is better): A: 8292.8 (min: 8151.24 / max: 8455.36), B: 8040.8 (min: 7541.33 / max: 8322.9), C: 8045.6 (min: 7714.33 / max: 8198.57)

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 55.88, B: 57.45, C: 55.74

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, fewer is better): A: 50872.9 (max: 58880.34), B: 52001.2 (min: 50554.35 / max: 60239.74), C: 50492.3 (max: 58284.29)

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48 - Concurrent Requests: 500 (Requests Per Second, more is better): A: 122830.61, C: 119354.61

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, more is better): A: 99.19, B: 96.40, C: 98.73
Compiler flags: (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better): A: 107.29, B: 104.29, C: 104.40

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48 - Concurrent Requests: 1 (Requests Per Second, more is better): A: 2809.72, B: 2879.71, C: 2800.97

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 1.10589 (min: 0.77), B: 1.12046 (min: 0.78), C: 1.09022 (min: 0.78)

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1 - Concurrent Requests: 500 (Requests Per Second, more is better): A: 165114.50, B: 160705.05, C: 162250.89
Compiler flags: (CC) gcc options: -ldl -lpthread -lcrypt -lz -O3 -march=native

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better): A: 21915.1 (min: 16827.41), B: 21598.3 (min: 16497.83 / max: 21598.31), C: 21331.4 (min: 16358.49)

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): A: 2.94845 (min: 2.7), B: 2.90208 (min: 2.64), C: 2.87069 (min: 2.63)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 222.88, B: 224.55, C: 228.66

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, more is better): A: 89.54, B: 91.83, C: 89.83

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: NASNet Mobile (Microseconds, fewer is better): A: 65535.0, B: 65890.0, C: 64248.8

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 10 - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 143.11, B: 146.75, C: 144.76

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 1.40425 (min: 1.22), B: 1.38839 (min: 1.09), C: 1.36947 (min: 1.19)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better): A: 2.718, B: 2.744, C: 2.787

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8 - Test: aes256 (Mbyte/s, more is better): A: 6285.45 (min: 4246.46 / max: 10397.63), B: 6323.88 (min: 4271.48 / max: 10459.4), C: 6167.35 (min: 4162.19 / max: 10212.16)
Compiler flags: (CC) gcc options: -O2 -ggdb3 -lnettle -lgmp -lm -lcrypto

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Socket Activity (Bogo Ops/s, more is better): A: 27081.43, B: 27478.48, C: 26817.55

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Read While Writing (Op/s, more is better): A: 6680014, B: 6519441, C: 6563287
Compiler flags: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

simdjson

This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: Kostya (GB/s, more is better): A: 2.50, B: 2.49, C: 2.44
Compiler flags: (CXX) g++ options: -O3 -pthread
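
For context, here is a small sketch using the third-party pysimdjson Python binding; the binding, its installation, and the sample document are assumptions for illustration only, while the benchmark itself exercises the C++ library directly on its Kostya/LargeRandom sample documents.

```python
# Parse a JSON document with the (assumed) pysimdjson binding: pip install pysimdjson
import simdjson

parser = simdjson.Parser()
doc = parser.parse(b'{"coordinates": [{"x": 1.0, "y": 2.0, "z": 3.0}]}')
# Access fields lazily instead of materializing the whole document as Python objects.
first = doc["coordinates"][0]
print(first["x"], first["y"], first["z"])
```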

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8 - Test: chacha (Mbyte/s, more is better): A: 1223.77 (min: 554.98 / max: 3694.93), B: 1234.72 (min: 561.17 / max: 3732.39), C: 1205.82 (min: 547.67 / max: 3646.85)

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", with a focus on providing open-source, very fast ETC and S3 texture compression support. The test profile uses an 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

Etcpak 1.0 - Benchmark: Multi-Threaded - Configuration: DXT1 (Mpx/s, more is better): A: 2466.42, B: 2462.53, C: 2408.79
Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 206.24, B: 202.25, C: 207.09

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: SqueezeNet (Microseconds, fewer is better): A: 5446.51, B: 5459.11, C: 5334.38

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 5.67, B: 5.69, C: 5.56

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Context Switching (Bogo Ops/s, more is better): A: 4008133.57, B: 4091594.76, C: 3998434.06

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8 - Test: sha512 (Mbyte/s, more is better): A: 498.39, B: 500.56, C: 489.37

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better): A: 44.11, B: 44.16, C: 43.23

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1 - Concurrent Requests: 20 (Requests Per Second, more is better): A: 141460.52, B: 142633.59, C: 139702.50

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds, fewer is better): A: 10.32, B: 10.13, C: 10.35

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Glibc C String Functions (Bogo Ops/s, more is better): A: 5765765.00, B: 5777754.86, C: 5882673.17

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Inception V4 (Microseconds, fewer is better): A: 37643.7, B: 36933.9, C: 37165.7

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 0.478080 (min: 0.4), B: 0.469105 (min: 0.4), C: 0.474279 (min: 0.4)

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 455.06 (min: 445.99), B: 451.59 (min: 441.67), C: 460.09 (min: 445.03)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark PageRank (ms, fewer is better): A: 4107.5 (min: 3638.26 / max: 4140.32), B: 4075.1 (min: 3574.3 / max: 4120.57), C: 4142.0 (min: 3707.03)

Renaissance 0.14 - Test: Apache Spark Bayes (ms, fewer is better): A: 859.3 (min: 516.49 / max: 1602.06), B: 846.9 (min: 525.2 / max: 1088.41), C: 860.8 (min: 530.21 / max: 1413.48)

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Syscall Basic (ops/sec, more is better): A: 16452745, B: 16190299, C: 16325616

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Random Read (Op/s, more is better): A: 222126242, B: 222876704, C: 219474809

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: log2 (ns, fewer is better): A: 19.85, B: 20.16, C: 20.11
Compiler flags: (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Etcpak

Etcpack is the self-proclaimed "fastest ETC compressor on the planet" with focused on providing open-source, very fast ETC and S3 texture compression support. The test profile uses a 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

Etcpak 1.0 - Benchmark: Multi-Threaded - Configuration: ETC2 (Mpx/s, more is better): A: 2455.86, B: 2422.18, C: 2459.64

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): A: 871.31 (min: 852.5), B: 869.79 (min: 850.21), C: 882.76 (min: 861.14)

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): A: 13.96, B: 13.86, C: 13.76

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 1.22929 (min: 1.17), B: 1.22057 (min: 1.16), C: 1.23800 (min: 1.17)

Etcpak

Etcpack is the self-proclaimed "fastest ETC compressor on the planet" with focused on providing open-source, very fast ETC and S3 texture compression support. The test profile uses a 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

Etcpak 1.0 - Benchmark: Single-Threaded - Configuration: DXT1 (Mpx/s, more is better): A: 191.92, B: 190.73, C: 189.23

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Read Random Write Random (Op/s, more is better): A: 2904862, B: 2865013, C: 2882009

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (Frames Per Second, more is better): A: 133.39, B: 131.58, C: 132.13

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better): A: 529, B: 527, C: 522

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 880.42 (min: 860.19), B: 871.34 (min: 850.22), C: 868.80 (min: 854.33)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 5.472, B: 5.431, C: 5.401

Java JMH

This very basic test profile runs the stock sample benchmark of Java JMH via Maven. Learn more via the OpenBenchmarking.org test page.

Java JMH - Throughput (Ops/s, more is better): A: 90801514532.11, B: 91846052434.50, C: 90741800042.54

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: MMAP (Bogo Ops/s, more is better): B: 1665.99, C: 1671.06, A: 1686.14. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
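
Each Stress-NG result in this comparison exercises a single stressor class. As an illustration, the MMAP stressor can be run directly with stress-ng, using a count of 0 to spawn one worker per online CPU (the timeout shown is an assumption, not the profile's exact setting):
  stress-ng --mmap 0 --timeout 60s --metrics-brief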

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8 - Test: poly1305-aes (Mbyte/s, more is better): C: 3695.61, A: 3708.97, B: 3740.29. (CC) gcc options: -O2 -ggdb3 -lnettle -lgmp -lm -lcrypto

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): C: 0.233227 (MIN: 0.2), A: 0.232440 (MIN: 0.19), B: 0.230458 (MIN: 0.19). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1 - Concurrent Requests: 100 (Requests Per Second, more is better): B: 150993.71, A: 152697.49, C: 152791.84. (CC) gcc options: -ldl -lpthread -lcrypt -lz -O3 -march=native
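
The concurrent-requests figure corresponds to Bombardier's connection count. A standalone run against a local nginx instance (URL, port, and duration are hypothetical here) would look approximately like:
  bombardier -c 100 -d 60s http://127.0.0.1:8089/test.html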

OSPray

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, more is better): A: 13.58, B: 13.72, C: 13.74

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: LargeRandom (GB/s, more is better): C: 0.85, A: 0.86, B: 0.86. (CXX) g++ options: -O3 -pthread

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1 - Concurrent Requests: 200 (Requests Per Second, more is better): C: 157843.09, B: 159256.42, A: 159676.56. (CC) gcc options: -ldl -lpthread -lcrypt -lz -O3 -march=native

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): A: 3.53, B: 3.55, C: 3.57. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
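
The speed setting maps to aomenc's --cpu-used level with two-pass encoding enabled; a roughly comparable standalone command (input and output names hypothetical) would be:
  aomenc --passes=2 --cpu-used=4 -o output.webm Bosphorus_3840x2160.y4m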

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: modf (ns, fewer is better): B: 6.17880, C: 6.14095, A: 6.11030. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

OSPray

OSPray 2.9 - Benchmark: particle_volume/ao/real_time (Items Per Second, more is better): A: 108.23, C: 108.99, B: 109.39

OSPray 2.9 - Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better): C: 106.21, A: 106.42, B: 107.32

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format, using a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 339.56, C: 340.52, B: 343.05. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: cos (ns, fewer is better): B: 68.42, A: 68.24, C: 67.74. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks - Benchmark: sin (ns, fewer is better): B: 59.30, C: 59.10, A: 58.73. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): C: 2.10487 (MIN: 2.05), B: 2.10070 (MIN: 2.05), A: 2.08511 (MIN: 2.04). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1 - Concurrent Requests: 1 (Requests Per Second, more is better): B: 28929.04, A: 29191.38, C: 29196.39. (CC) gcc options: -ldl -lpthread -lcrypt -lz -O3 -march=native

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: pthread_once (ns, fewer is better): B: 4.78202, C: 4.77963, A: 4.73900. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds, fewer is better): A: 3411.68, C: 3383.49, B: 3381.23
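
These figures come from TensorFlow Lite's benchmarking harness; with the upstream benchmark_model tool and a Mobilenet float model (file name and thread count are assumptions here), a similar measurement could be taken with:
  benchmark_model --graph=mobilenet_v1_1.0_224.tflite --num_threads=56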

OSPray

OSPray 2.9 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better): B: 194.28, A: 194.85, C: 195.98

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 4K (Frames Per Second, more is better): A: 46.94, B: 47.02, C: 47.34. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto
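
As a point of reference, an x264 CPU encode of a 4K Y4M clip that discards the encoded output can be run roughly as follows (input name and preset are illustrative assumptions, not the profile's exact settings):
  x264 --preset medium -o /dev/null Bosphorus_3840x2160.y4m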

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: PartialTweets (GB/s, more is better): A: 3.69, C: 3.70, B: 3.72. (CXX) g++ options: -O3 -pthread

libavif avifenc

This is a test of the AOMedia libavif library, testing the encoding of a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 0 (Seconds, fewer is better): B: 97.17, A: 97.01, C: 96.39. (CXX) g++ options: -O3 -fPIC -lm
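
The encoder speed setting corresponds to avifenc's --speed option, with 0 being the slowest, highest-effort mode; a comparable manual conversion (file names hypothetical) would be:
  avifenc --speed 0 input.jpg output.avif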

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: sinh (ns, fewer is better): C: 25.69, B: 25.69, A: 25.49. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

OSPray

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, more is better): A: 13.43, B: 13.47, C: 13.54

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): C: 3.95272 (MIN: 3.84), A: 3.92995 (MIN: 3.83), B: 3.92330 (MIN: 3.83). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: sqrt (ns, fewer is better): B: 6.19897, C: 6.16877, A: 6.15346. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile (Seconds, fewer is better): A: 12.70, B: 12.61, C: 12.61
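
This is essentially a timed parallel build; on this 112-thread system the equivalent manual step, from an extracted MPlayer source tree, would be approximately (configure flags omitted as assumptions):
  ./configure && time make -j112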

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: exp (ns, fewer is better): B: 15.63, A: 15.57, C: 15.52. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Glibc Benchmarks - Benchmark: ffs (ns, fewer is better): C: 4.82530, B: 4.79183, A: 4.79113. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: DistinctUserID (GB/s, more is better): C: 4.27, A: 4.28, B: 4.30. (CXX) g++ options: -O3 -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Vector Math (Bogo Ops/s, more is better): C: 172356.49, A: 172699.69, B: 173550.28. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): B: 3.01428 (MIN: 2.96), C: 3.00023 (MIN: 2.95), A: 2.99374 (MIN: 2.94). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: CPU Stress (Bogo Ops/s, more is better): C: 104938.60, B: 105053.55, A: 105656.32. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14 - Test: Forking (Bogo Ops/s, more is better): B: 47180.40, A: 47213.65, C: 47498.90. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 0.307546 (MIN: 0.28), B: 0.306507 (MIN: 0.28), C: 0.305485 (MIN: 0.27). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better): C: 1689, B: 1697, A: 1700. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt -pthread -lpthread

OSPray

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better): C: 16.13, B: 16.16, A: 16.23

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): C: 8587, A: 8564, B: 8533. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): A: 453.11 (MIN: 441.33), B: 451.60 (MIN: 442.48), C: 450.30 (MIN: 440.16). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: System V Message Passing (Bogo Ops/s, more is better): A: 3944584.98, B: 3955795.95, C: 3968510.86. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): A: 3.72981 (MIN: 3.52), B: 3.72794 (MIN: 3.52), C: 3.70853 (MIN: 3.55). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1 - Concurrent Requests: 1000 (Requests Per Second, more is better): C: 163876.76, B: 164214.50, A: 164799.04. (CC) gcc options: -ldl -lpthread -lcrypt -lz -O3 -march=native

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Memory Copying (Bogo Ops/s, more is better): A: 7011.74, B: 7033.99, C: 7050.81. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package using the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better): B: 5.811, A: 5.836, C: 5.843. (CXX) g++ options: -O3 -pthread
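
GROMACS reports simulation throughput in ns/day from gmx mdrun; assuming a prepared run input file for the water_GMX50_bare system (the .tpr name, thread count, and step count below are illustrative), a comparable run would be:
  gmx mdrun -s water_GMX50_bare.tpr -ntomp 112 -nsteps 1000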

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Malloc (Bogo Ops/s, more is better): B: 216782885.23, A: 217472149.53, C: 217955200.37. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): B: 5.06211 (MIN: 4.91), C: 5.04983 (MIN: 4.91), A: 5.03515 (MIN: 4.89). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

libavif avifenc

This is a test of the AOMedia libavif library, testing the encoding of a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 10, Lossless (Seconds, fewer is better): A: 6.973, C: 6.953, B: 6.936. (CXX) g++ options: -O3 -fPIC -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format, using a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 4K (Frames Per Second, more is better): A: 7.92, C: 7.94, B: 7.96. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, more is better): C: 28.87, B: 28.88, A: 29.01. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: TopTweet (GB/s, more is better): A: 4.22, C: 4.22, B: 4.24. (CXX) g++ options: -O3 -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 3.69564 (MIN: 3.59), B: 3.67899 (MIN: 3.6), C: 3.67879 (MIN: 3.59). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Semaphores (Bogo Ops/s, more is better): A: 6707967.45, B: 6708782.81, C: 6736736.94. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," with a focus on providing open-source, very fast ETC and S3 texture compression support. The test profile uses an 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

Etcpak 1.0 - Benchmark: Single-Threaded - Configuration: ETC2 (Mpx/s, more is better): C: 188.46, B: 189.14, A: 189.23. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: ffsll (ns, fewer is better): A: 4.57380, C: 4.57007, B: 4.55586. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: IO_uring (Bogo Ops/s, more is better): B: 4852459.71, C: 4859985.91, A: 4871109.61. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): B: 10186, A: 10175, C: 10148. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Futex Hash (ops/sec, more is better): B: 2860347, C: 2860657, A: 2870019. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -O6 -ggdb3 -funwind-tables -std=gnu99 -lnuma
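
The futex hash numbers come straight from the kernel's perf tool and can be reproduced, given a perf build matching the running kernel, with:
  perf bench futex hash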

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): B: 639, C: 638, A: 637. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): C: 8330, B: 8325, A: 8305. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 19.22 (MIN: 13.82), B: 19.22 (MIN: 15.45), C: 19.17 (MIN: 16.57). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): A: 4.45509 (MIN: 4.4), C: 4.44536 (MIN: 4.4), B: 4.44290 (MIN: 4.4). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: atanh (ns, fewer is better): C: 32.35, A: 32.30, B: 32.27. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): C: 16650, B: 16622, A: 16607. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: tanh (ns, fewer is better): B: 35.94, C: 35.87, A: 35.85. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: MEMFD (Bogo Ops/s, more is better): B: 2414.27, C: 2419.96, A: 2420.44. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): B: 20348, C: 20322, A: 20298. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: sincos (ns, fewer is better): B: 41.47, C: 41.42, A: 41.37. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 1.23463 (MIN: 1.19), B: 1.23372 (MIN: 1.19), C: 1.23195 (MIN: 1.19). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): C: 524, B: 524, A: 523. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): C: 3.15372 (MIN: 2.94), B: 3.14990 (MIN: 2.94), A: 3.14788 (MIN: 2.96). (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Update Random (Op/s, more is better): A: 175746, B: 175854, C: 176062. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: SENDFILE (Bogo Ops/s, more is better): B: 859190.38, A: 859564.82, C: 860581.21. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

Glibc Benchmarks

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. This test profile makes use of Glibc's "benchtests" integrated benchmark suite. Learn more via the OpenBenchmarking.org test page.

Glibc Benchmarks - Benchmark: asinh (ns, fewer is better): A: 28.29, B: 28.28, C: 28.26. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, more is better): C: 690.13, A: 690.47, B: 690.73. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14 - Test: Matrix Math (Bogo Ops/s, more is better): A: 175964.38, B: 175988.63, C: 176083.49. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14 - Test: Crypto (Bogo Ops/s, more is better): A: 59728.27, C: 59750.12, B: 59757.30. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): C: 17092, B: 17085, A: 17084. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: x86_64 RdRand (Bogo Ops/s, more is better): B: 503057.22, C: 503062.12, A: 503080.37. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): C: 538, B: 538, A: 538. (CXX) g++ options: -O3 -lm -ldl -lpthread -pthread

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 0.36, B: 0.36, C: 0.36. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.3 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): A: 0.16, B: 0.16, C: 0.16. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

178 Results Shown

Stress-NG
ONNX Runtime:
  super-resolution-10 - CPU - Standard
  fcn-resnet101-11 - CPU - Standard
Apache HTTP Server
ONNX Runtime
perf-bench
Apache HTTP Server:
  200
  1000
Renaissance
AOM AV1
ONNX Runtime
AOM AV1
oneDNN
AOM AV1
Stress-NG
Renaissance
SVT-VP9
x264
AOM AV1
SVT-VP9:
  Visual Quality Optimized - Bosphorus 1080p
  VMAF Optimized - Bosphorus 4K
Renaissance
SVT-VP9
Stress-NG
libavif avifenc:
  6
  2
Renaissance
SVT-VP9
Renaissance
perf-bench
Renaissance
AOM AV1
TensorFlow Lite
SVT-VP9
perf-bench
AOM AV1
TensorFlow Lite
Stress-NG
oneDNN
perf-bench
Apache HTTP Server
SVT-AV1
AOM AV1:
  Speed 9 Realtime - Bosphorus 4K
  Speed 4 Two-Pass - Bosphorus 1080p
oneDNN
perf-bench
Renaissance
AOM AV1
Renaissance
Apache HTTP Server
SVT-HEVC
SVT-AV1
Apache HTTP Server
oneDNN
nginx
Renaissance
oneDNN
SVT-HEVC
SVT-AV1
TensorFlow Lite
SVT-AV1
oneDNN
SVT-AV1
Nettle
Stress-NG
Facebook RocksDB
simdjson
Nettle
Etcpak
SVT-AV1
TensorFlow Lite
AOM AV1
Stress-NG
Nettle
SVT-AV1
nginx
libavif avifenc
Stress-NG
TensorFlow Lite
oneDNN:
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
Renaissance:
  Apache Spark PageRank
  Apache Spark Bayes
perf-bench
Facebook RocksDB
Glibc Benchmarks
Etcpak
oneDNN
AOM AV1
oneDNN
Etcpak
Facebook RocksDB
SVT-HEVC
ONNX Runtime
oneDNN
SVT-AV1
Java JMH
Stress-NG
Nettle
oneDNN
nginx
OSPray
simdjson
nginx
AOM AV1
Glibc Benchmarks
OSPray:
  particle_volume/ao/real_time
  particle_volume/scivis/real_time
SVT-HEVC
Glibc Benchmarks:
  cos
  sin
oneDNN
nginx
Glibc Benchmarks
TensorFlow Lite
OSPray
x264
simdjson
libavif avifenc
Glibc Benchmarks
OSPray
oneDNN
Glibc Benchmarks
Timed MPlayer Compilation
Glibc Benchmarks:
  exp
  ffs
simdjson
Stress-NG
oneDNN
Stress-NG:
  CPU Stress
  Forking
oneDNN
ONNX Runtime
OSPray
OSPray Studio
oneDNN
Stress-NG
oneDNN
nginx
Stress-NG
GROMACS
Stress-NG
oneDNN
libavif avifenc
SVT-HEVC:
  1 - Bosphorus 4K
  1 - Bosphorus 1080p
simdjson
oneDNN
Stress-NG
Etcpak
Glibc Benchmarks
Stress-NG
OSPray Studio
perf-bench
OSPray Studio:
  3 - 1080p - 1 - Path Tracer
  1 - 1080p - 16 - Path Tracer
oneDNN:
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU
Glibc Benchmarks
OSPray Studio
Glibc Benchmarks
Stress-NG
OSPray Studio
Glibc Benchmarks
oneDNN
OSPray Studio
oneDNN
Facebook RocksDB
Stress-NG
Glibc Benchmarks
Stress-NG:
  Glibc Qsort Data Sorting
  Matrix Math
  Crypto
OSPray Studio
Stress-NG
OSPray Studio
AOM AV1:
  Speed 0 Two-Pass - Bosphorus 1080p
  Speed 0 Two-Pass - Bosphorus 4K