fdsdf

AMD Ryzen 7 4800U testing with an ASRock 4X4-4000 (P1.30Q BIOS) and AMD Renoir 512MB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2210101-NE-FDSDF056044
Tests in this result file, by category:
AV1 (2 tests), C++ Boost Tests (2), Timed Code Compilation (5), C/C++ Compiler Tests (7), Compression Tests (2), CPU Massive (9), Creator Workloads (11), Database Test Suite (2), Encoding (3), HPC - High Performance Computing (7), Imaging (3), Machine Learning (6), Multi-Core (13), NVIDIA GPU Compute (2), Intel oneAPI (2), Programmer / Developer System Benchmarks (6), Python Tests (4), Renderers (2), Server (2), Server CPU Tests (5), Video Encoding (2), Common Workstation Benchmarks (2).

Run Management

Result Identifier - Date - Test Duration:
A - October 09 2022 - 1 Day, 8 Hours, 33 Minutes
B - October 10 2022 - 7 Hours, 58 Minutes
C - October 10 2022 - 7 Hours, 56 Minutes
Average run duration: 16 Hours, 9 Minutes



fdsdf - OpenBenchmarking.org - Phoronix Test Suite

Processor: AMD Ryzen 7 4800U @ 1.80GHz (8 Cores / 16 Threads)
Motherboard: ASRock 4X4-4000 (P1.30Q BIOS)
Chipset: AMD Renoir/Cezanne
Memory: 16GB
Disk: 512GB TS512GMTS952T-I
Graphics: AMD Renoir 512MB (1750/400MHz)
Audio: AMD Renoir Radeon HD Audio
Monitor: DELL P2415Q
Network: Realtek RTL8125 2.5GbE + Realtek RTL8111/8168/8411 + Intel 8265 / 8275
OS: Ubuntu 22.04
Kernel: 5.19.0-rc6-phx-retbleed (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47)
Vulkan: 1.3.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs / Notes:
- Transparent Huge Pages: madvise
- GCC configure: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0x8600103
- BAR1 / Visible vRAM Size: 512 MB - vBIOS Version: 113-RENOIR-026
- Python 3.10.4
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (runs A, B, C, normalized from 100% to 106%), covering: QuadRay, ClickHouse, Mobile Neural Network, GraphicsMagick, FLAC Audio Encoding, AOM AV1, 7-Zip Compression, WebP Image Encode, Facebook RocksDB, Unvanquished, srsRAN, Timed Erlang/OTP Compilation, oneDNN, C-Blosc, Timed PHP Compilation, spaCy, SVT-AV1, NCNN, OpenFOAM, Timed CPython Compilation, TensorFlow, OpenVINO, WebP2 Image Encode, Blender, SMHasher, BRL-CAD, Timed Wasmer Compilation, Y-Cruncher, and Timed Node.js Compilation.
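
The overview expresses each test as a percentage rather than in raw units. Below is a minimal sketch of one plausible normalization, under the assumption that each run is scored relative to the slowest run (so the slowest reads as 100%); this is an illustration, not necessarily the exact scheme OpenBenchmarking.org applies. It uses the QuadRay Scene 1 FPS values from this result file.

```python
# Normalize higher-is-better results so the slowest run reads as 100%.
# The scoring scheme is an assumption for illustration only.

def normalize(results: dict[str, float]) -> dict[str, float]:
    baseline = min(results.values())
    return {run: value / baseline * 100 for run, value in results.items()}

# QuadRay Scene 1 - 1080p FPS values for runs A, B, C (from this file)
quadray = {"A": 13.16, "B": 16.81, "C": 13.32}
print({run: round(pct, 1) for run, pct in normalize(quadray).items()})
```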

[Condensed result table with per-test values for runs A, B, and C omitted here; individual per-benchmark results are presented in the sections below.]

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.
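
QuadRay selects among SSE/SSE2/SSE4, AVX/AVX2, and AVX-512 code paths based on what the CPU exposes. As a rough illustration (not part of QuadRay itself), on Linux the extensions a CPU advertises can be read from the flags line of /proc/cpuinfo; the sketch below parses such a line, using a shortened hypothetical flag string as input:

```python
# Report which SIMD extensions appear in a CPU "flags" line.
# The flag string below is a shortened, hypothetical example; on Linux
# the real line can be read from /proc/cpuinfo.

SIMD_FLAGS = ("sse", "sse2", "sse4_1", "sse4_2", "avx", "avx2", "avx512f")

def supported_simd(flags_line: str) -> list[str]:
    flags = set(flags_line.split())
    return [f for f in SIMD_FLAGS if f in flags]

example = "fpu mmx sse sse2 sse4_1 sse4_2 avx avx2"  # hypothetical
print(supported_simd(example))
```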

QuadRay 2022.05.25 - Scene: 1 - Resolution: 1080p (FPS, more is better; SE +/- 0.10, N = 3)
A: 13.16 / B: 16.81 / C: 13.32
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
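
Each result in this file carries an "SE +/- x, N = y" annotation: the standard error of the mean over N runs, which under the usual definition is the sample standard deviation divided by the square root of N. A minimal sketch with hypothetical per-run values:

```python
# Standard error of the mean, as reported in the "SE +/- x, N = y" lines.
# The three sample values are hypothetical, chosen only to illustrate.
import math
import statistics

def standard_error(samples: list[float]) -> float:
    return statistics.stdev(samples) / math.sqrt(len(samples))

samples = [13.06, 13.16, 13.26]  # hypothetical per-run FPS measurements
print(round(standard_error(samples), 4))
```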

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, more is better; SE +/- 0.13, N = 3)
A: 33.66 / B: 29.81 / C: 30.13
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Unvanquished

Unvanquished is a modern fork of the Tremulous first-person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine and the graphically rich XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1200 - Effects Quality: Medium (Frames Per Second, more is better; SE +/- 1.19, N = 15)
A: 143.5 / B: 155.8 / C: 140.6

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: FastestDet (ms, fewer is better; SE +/- 0.07, N = 12)
A: 5.16 (min 3.72, max 6.4) / B: 5.13 (min 3.75, max 6.05) / C: 5.61 (min 4.66, max 6.65)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, more is better; SE +/- 0.88, N = 3)
A: 565 / B: 617 / C: 605
(CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the geometric mean across all queries performed, expressed as queries per minute. Learn more via the OpenBenchmarking.org test page.
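
A geometric mean keeps one unusually fast or slow query from dominating the aggregate the way an arithmetic mean would. A minimal sketch contrasting the two, with hypothetical per-query rates:

```python
# Aggregate per-query rates with a geometric mean, as ClickHouse's
# "Queries Per Minute, Geo Mean" metric does. Values are hypothetical.
import statistics

per_query_qpm = [2.0, 8.0, 64.0]  # hypothetical per-query rates
geo = statistics.geometric_mean(per_query_qpm)
arith = statistics.mean(per_query_qpm)
print(round(geo, 2), round(arith, 2))  # the outlier pulls the arithmetic mean up far more
```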

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, more is better; SE +/- 0.19, N = 9)
A: 57.57 (min 3.7, max 15000) / B: 54.60 (min 3.91, max 2857.14) / C: 58.84 (min 3.83, max 7500)
ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, more is better; SE +/- 0.51, N = 9)
A: 51.35 (min 3.39, max 8571.43) / B: 48.86 (min 3.81, max 4615.38) / C: 47.73 (min 3.78, max 5454.55)
ClickHouse server version 22.5.4.19 (official build).

QuadRay

QuadRay 2022.05.25 - Scene: 3 - Resolution: 1080p (FPS, more is better; SE +/- 0.05, N = 3)
A: 4.37 / B: 4.69 / C: 4.68
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: OFDM_Test (Samples / Second, more is better; SE +/- 1325194.21, N = 15)
A: 128073333 / B: 119400000 / C: 121700000
(CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not any GPU-accelerated path. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better; SE +/- 0.085, N = 3)
A: 5.104 (min 4.83, max 17.44) / B: 4.873 (min 4.73, max 6.22) / C: 4.786 (min 4.66, max 6.44)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN

oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better; SE +/- 0.08, N = 12)
A: 10.76 (min 7.8) / B: 11.46 (min 7.85) / C: 10.80 (min 7.93)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better; SE +/- 0.14, N = 3)
A: 21.78 / B: 22.79 / C: 21.42
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

NCNN

NCNN 20220729 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better; SE +/- 0.07, N = 12)
A: 4.52 (min 3.47, max 5.95) / B: 4.45 (min 3.68, max 5.47) / C: 4.73 (min 3.71, max 5.77)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better; SE +/- 0.02, N = 3)
A: 20.90 (min 19.26) / B: 20.38 (min 18.83) / C: 19.69 (min 18.44)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Unvanquished

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second, more is better; SE +/- 0.50, N = 3)
A: 130.6 / B: 136.1 / C: 137.2

GraphicsMagick

GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, more is better; SE +/- 0.67, N = 3)
A: 496 / B: 521 / C: 516
(CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

AOM AV1

AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better; SE +/- 0.03, N = 3)
A: 12.42 / B: 12.82 / C: 12.22
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better; SE +/- 0.07, N = 3)
A: 7.14 / B: 7.49 / C: 7.41
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Unvanquished

Unvanquished 0.53 - Resolution: 2560 x 1440 - Effects Quality: Medium (Frames Per Second, more is better; SE +/- 1.02, N = 15)
A: 137.1 / B: 134.2 / C: 140.6

NCNN

NCNN 20220729 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better; SE +/- 0.04, N = 12)
A: 6.41 (min 5.42, max 7.73) / B: 6.30 (min 5.78, max 7.35) / C: 6.60 (min 5.59, max 7.35)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.
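
SMHasher reports bulk hashing throughput in MiB/sec. The non-cryptographic hashes it covers here (t1ha, wyhash, FarmHash, fasthash, Spooky) are not in the Python standard library, but the same MiB/sec measurement can be sketched with stdlib SHA3-256 (which SMHasher also measures); absolute timings will of course vary by machine, and the buffer size below is an arbitrary choice:

```python
# Measure hash throughput in MiB/sec over a fixed buffer, mirroring how
# SMHasher reports bulk hashing speed. Uses stdlib SHA3-256; the 4 MiB
# buffer size and round count are arbitrary choices for illustration.
import hashlib
import time

def hash_throughput_mib_s(data: bytes, rounds: int = 4) -> float:
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.sha3_256(data).digest()
    elapsed = time.perf_counter() - start
    return (len(data) * rounds) / (1024 * 1024) / elapsed

buf = b"\x5a" * (4 * 1024 * 1024)  # 4 MiB test buffer
print(f"{hash_throughput_mib_s(buf):.1f} MiB/sec")
```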

SMHasher 2022-08-22 - Hash: t1ha0_aes_avx2 x86_64 (MiB/sec, more is better; SE +/- 244.13, N = 3)
A: 135421.25 / B: 129449.65 / C: 130637.98
(CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

NCNN

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, fewer is better; SE +/- 0.24, N = 3)
A: 53.93 (min 52.76, max 59.52) / B: 56.11 (min 55.41, max 67.95) / C: 53.68 (min 52.89, max 54.49)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
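
The WebP results are in MP/s, megapixels encoded per second: the 6000x4000 input is 24 megapixels, so an encode that takes t seconds scores 24/t. A minimal sketch of the conversion, using a hypothetical 2.4-second encode time:

```python
# Convert an encode time into the MP/s figure this test reports.
# The 2.4-second timing is hypothetical; the image dimensions match
# the 6000x4000 JPEG input the test profile uses.

def megapixels_per_second(width: int, height: int, seconds: float) -> float:
    return (width * height) / 1_000_000 / seconds

print(round(megapixels_per_second(6000, 4000, 2.4), 2))
```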

WebP Image Encode 1.2.4 - Encode Settings: Quality 100 (MP/s, more is better; SE +/- 0.05, N = 3)
A: 9.87 / B: 10.29 / C: 9.85
(CC) gcc options: -fvisibility=hidden -O2 -lm

Unvanquished

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, more is better; SE +/- 1.33, N = 15)
A: 139.1 / B: 137.8 / C: 143.8

Mobile Neural Network

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, fewer is better; SE +/- 0.35, N = 3)
A: 43.68 (min 42.3, max 59.3) / B: 41.91 (min 41.13, max 89.56) / C: 42.39 (min 41.67, max 57.54)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

QuadRay

QuadRay 2022.05.25 - Scene: 2 - Resolution: 1080p (FPS, more is better; SE +/- 0.05, N = 15)
A: 5.07 / B: 5.27 / C: 5.06
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Mobile Neural Network

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better; SE +/- 0.027, N = 3)
A: 2.627 (min 2.51, max 4.18) / B: 2.559 (min 2.48, max 5.23) / C: 2.523 (min 2.46, max 3.45)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, fewer is better; SE +/- 0.031, N = 3)
A: 5.743 (min 5.48, max 21.23) / B: 5.574 (min 5.39, max 11.74) / C: 5.519 (min 5.31, max 6.72)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: nasnet (ms, fewer is better; SE +/- 0.09, N = 3)
A: 17.57 (min 17.11, max 32.86) / B: 16.97 (min 16.65, max 22.65) / C: 17.30 (min 17.04, max 32.64)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

GraphicsMagick

GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute, more is better; SE +/- 3.18, N = 3)
A: 689 / B: 712 / C: 712
(CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better; SE +/- 0.65, N = 3)
A: 165.90 / B: 169.27 / C: 171.42
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Mobile Neural Network

Mobile Neural Network 2.1 - Model: inception-v3 (ms, fewer is better; SE +/- 0.22, N = 3)
A: 55.86 (min 54.66, max 164.93) / B: 54.76 (min 53.74, max 114.32) / C: 54.14 (min 53.36, max 68.97)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

QuadRay

QuadRay 2022.05.25 - Scene: 5 - Resolution: 1080p (FPS, more is better; SE +/- 0.00, N = 3)
A: 1.27 / B: 1.30 / C: 1.31
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Mobile Neural Network

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, fewer is better; SE +/- 0.11, N = 3)
A: 10.58 (min 10.02, max 25.16) / B: 10.26 (min 9.92, max 11.57) / C: 10.28 (min 9.81, max 16.54)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

QuadRay

QuadRay 2022.05.25 - Scene: 5 - Resolution: 4K (FPS, more is better; SE +/- 0.00, N = 3)
A: 0.32 / B: 0.33 / C: 0.33
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better)
A: 164.98, B: 170.12, C: 166.95 (SE +/- 1.36, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mobilenet (ms, fewer is better)
A: 32.91, B: 33.81, C: 32.79 (SE +/- 0.06, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better)
A: 24.20, B: 23.47, C: 23.91 (SE +/- 0.20, N = 3)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, more is better)
A: 346.7, B: 338.3, C: 348.5 (SE +/- 0.85, N = 3)

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Read (Op/s, more is better)
A: 25572231, B: 26338023, C: 26126775 (SE +/- 268758.24, N = 5)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better)
A: 7.74, B: 7.97, C: 7.83 (SE +/- 0.05, N = 12)

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Read While Writing (Op/s, more is better)
A: 1101434, B: 1118927, C: 1133798 (SE +/- 5405.15, N = 3)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Default (MP/s, more is better)
A: 15.72, B: 15.58, C: 15.28 (SE +/- 0.14, N = 7)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better)
A: 20.67, B: 21.25, C: 21.11 (SE +/- 0.03, N = 3)

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better)
A: 0.36, B: 0.37, C: 0.37 (SE +/- 0.00, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
A: 5.90, B: 6.04, C: 6.06 (SE +/- 0.06, N = 12)

Unvanquished

Unvanquished is a modern fork of the Tremulous first-person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Medium (Frames Per Second, more is better)
A: 141.9, B: 140.8, C: 138.2 (SE +/- 1.59, N = 3)

oneDNN

oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 3.31774, B: 3.24690, C: 3.23184 (SE +/- 0.00316, N = 3)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better)
A: 15.97, B: 16.39, C: 16.39 (SE +/- 0.01, N = 3)

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
A: 8063.85, B: 7887.60, C: 7857.71 (SE +/- 10.88, N = 3)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Swirl (Iterations Per Minute, more is better)
A: 355, B: 349, C: 346 (SE +/- 4.33, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, fewer is better)
A: 49.04, B: 50.29, C: 49.11 (SE +/- 0.10, N = 3)

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better)
A: 44.00, B: 45.10, C: 45.12 (SE +/- 0.07, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better)
A: 53.97, B: 55.27, C: 53.90 (SE +/- 0.17, N = 3)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, more is better)
A: 375.5, B: 382.2, C: 372.8 (SE +/- 2.89, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better)
A: 73.97, B: 72.24, C: 74.06 (SE +/- 0.23, N = 3)

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Sequential Fill (Op/s, more is better)
A: 695239, B: 687072, C: 703946 (SE +/- 2562.12, N = 3)

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 1080p (Frames Per Second, more is better)
A: 108.08, B: 110.72, C: 110.41 (SE +/- 0.25, N = 3)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, more is better)
A: 160.4, B: 162.9, C: 159.1 (SE +/- 1.39, N = 3)

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, more is better)
A: 151.5, B: 148.2, C: 151.7 (SE +/- 0.35, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, more is better)
A: 0.86, B: 0.85, C: 0.87 (SE +/- 0.00, N = 3)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better)
A: 20.77, B: 21.25, C: 21.12 (SE +/- 0.00, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: resnet18 (ms, fewer is better)
A: 9.16, B: 9.26, C: 9.37 (SE +/- 0.03, N = 12)

Unvanquished

Unvanquished is a modern fork of the Tremulous first-person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1200 - Effects Quality: Ultra (Frames Per Second, more is better)
A: 131.6, B: 134.6, C: 133.2 (SE +/- 1.58, N = 4)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, more is better)
A: 54.7, B: 53.8, C: 53.5 (SE +/- 0.10, N = 3)

Unvanquished

Unvanquished is a modern fork of the Tremulous first-person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 2560 x 1440 - Effects Quality: High (Frames Per Second, more is better)
A: 130.0, B: 132.9, C: 131.0 (SE +/- 1.20, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, fewer is better)
A: 4596.84, B: 4638.17, C: 4537.87 (SE +/- 13.58, N = 3)

oneDNN

oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 5.92046, B: 5.83337, C: 5.79261 (SE +/- 0.01821, N = 3)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, more is better)
A: 274.2, B: 268.5, C: 268.3 (SE +/- 1.48, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: vision_transformer (ms, fewer is better)
A: 495.39, B: 496.55, C: 485.92 (SE +/- 1.50, N = 12)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, fewer is better)
A: 4.837, B: 4.735, C: 4.756 (SE +/- 0.027, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better)
A: 13.53, B: 13.25, C: 13.35 (SE +/- 0.05, N = 3)

Unvanquished

Unvanquished is a modern fork of the Tremulous first-person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1200 - Effects Quality: High (Frames Per Second, more is better)
A: 137.5, B: 140.4, C: 139.4 (SE +/- 0.75, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better)
A: 295.20, B: 301.38, C: 299.12 (SE +/- 1.05, N = 3)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better)
A: 43947, B: 44573, C: 44866 (SE +/- 110.79, N = 3)

FLAC Audio Encoding

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, fewer is better)
A: 20.35, B: 20.23, C: 19.94 (SE +/- 0.06, N = 5)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, fewer is better)
A: 277.23, B: 278.88, C: 273.34 (SE +/- 0.84, N = 3)

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Default (MP/s, more is better)
A: 4.83, B: 4.74, C: 4.77 (SE +/- 0.01, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, fewer is better)
A: 4534.61, B: 4573.79, C: 4489.79 (SE +/- 23.10, N = 3)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: motorBike - Mesh Time (Seconds, fewer is better)
A: 81.71, B: 82.13, C: 80.67

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
A: 8.33, B: 8.43, C: 8.48 (SE +/- 0.00, N = 3)

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 8025.41, B: 7918.33, C: 7884.07 (SE +/- 10.30, N = 3)

oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 4.71385, B: 4.64717, C: 4.63135 (SE +/- 0.01121, N = 3)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, more is better)
A: 98.4, B: 97.5, C: 96.7 (SE +/- 0.45, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: FastestDet (ms, fewer is better)
A: 6.99, B: 6.87, C: 6.93 (SE +/- 0.07, N = 3)

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better)
A: 50.24, B: 50.38, C: 51.11 (SE +/- 0.13, N = 3)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better)
A: 2.97, B: 3.02, C: 2.99 (SE +/- 0.01, N = 3)

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile follows ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the geometric mean of the query processing times across all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, more is better)
A: 56.13, B: 55.21, C: 55.91 (SE +/- 0.62, N = 9)
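The ClickHouse result is reported as a geometric mean across the individual queries, which keeps one very fast or very slow query from dominating the aggregate. A minimal sketch of how such a "Geo Mean" figure is computed (the per-query numbers below are hypothetical, not taken from this run):

```python
import math

# Hypothetical queries-per-minute figures for four individual queries
qpm = [3.9, 48.0, 310.0, 2600.0]

# Geometric mean: nth root of the product, computed via logs for numerical stability
geo_mean = math.exp(sum(math.log(x) for x in qpm) / len(qpm))

print(f"geo mean: {geo_mean:.2f} queries per minute")
```

Note how the result sits far below the arithmetic mean (which the slowest-per-minute outlier of 2600 would otherwise pull upward).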

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute, more is better)
A: 188, B: 189, C: 191 (SE +/- 0.33, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better)
A: 5.89, B: 5.90, C: 5.98 (SE +/- 0.02, N = 12)

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3 - Test: blosclz shuffle (MB/s, more is better)
A: 4431.4, B: 4493.6, C: 4498.5 (SE +/- 8.86, N = 3)
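The MB/s figures here are throughput: bytes processed divided by elapsed wall-clock time. A rough sketch of that style of measurement, using Python's standard zlib as a stand-in (this is not C-Blosc or its API, and the payload is illustrative):

```python
import time
import zlib

# A compressible test payload: ~8 MB of repetitive bytes
data = b"0123456789abcdef" * 512 * 1024

start = time.perf_counter()
compressed = zlib.compress(data, level=1)
elapsed = time.perf_counter() - start

# Throughput is conventionally reported against the uncompressed input size
mb_per_s = len(data) / 1e6 / elapsed
print(f"{mb_per_s:.1f} MB/s ({len(data)} -> {len(compressed)} bytes)")
```

Real benchmarks repeat the measurement and report a mean with a standard error, as the result lines in this file do.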

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 2 - Resolution: 4K (FPS, more is better)
A: 1.34, B: 1.36, C: 1.36 (SE +/- 0.00, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better)
A: 38.50, B: 37.94, C: 38.00 (SE +/- 0.02, N = 3)

spaCy

spaCy is an open-source Python library and a leading solution for advanced natural language processing (NLP). This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, more is better)
A: 10169, B: 10042, C: 10023 (SE +/- 60.05, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better)
A: 207.54, B: 210.50, C: 210.23 (SE +/- 0.12, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
A: 5.72, B: 5.64, C: 5.71 (SE +/- 0.01, N = 3)

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Compression Effort 5 (MP/s, more is better)
A: 2.14, B: 2.15, C: 2.17 (SE +/- 0.01, N = 3)

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: FarmHash32 x86_64 AVX (MiB/sec, more is better)
A: 55317.80, B: 55588.09, C: 54838.74 (SE +/- 125.99, N = 3)

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 8051.80, B: 7975.32, C: 7944.16 (SE +/- 6.19, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better)
A: 3641.73, B: 3684.90, C: 3637.99 (SE +/- 17.35, N = 3)

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Fill Sync (Op/s, more is better)
A: 2662, B: 2653, C: 2687 (SE +/- 6.11, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better)
A: 27.87, B: 27.68, C: 27.52 (SE +/- 0.08, N = 3)

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better)
A: 3.106, B: 3.125, C: 3.145 (SE +/- 0.008, N = 3)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, more is better)
A: 34151, B: 34503, C: 34579 (SE +/- 37.85, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: blazeface (ms, fewer is better)
A: 2.40, B: 2.40, C: 2.43 (SE +/- 0.00, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS; more is better)
A: 143.31 | B: 144.27 | C: 145.10 (SE +/- 0.44, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second; more is better)
A: 18.48 | B: 18.25 | C: 18.34 (SE +/- 0.19, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 25.0, Time To Compile (Seconds; fewer is better)
A: 195.38 | B: 195.07 | C: 193.12 (SE +/- 0.37, N = 3)

OpenVINO

OpenVINO 2022.2.dev, Model: Face Detection FP16-INT8 - Device: CPU (ms; fewer is better)
A: 1912.86 (min 1858.14, max 1948.83) | B: 1902.89 (min 1854.25, max 1946.21) | C: 1890.83 (min 1836.96, max 1913.63) (SE +/- 2.42, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev, Model: Person Detection FP16 - Device: CPU (FPS; more is better)
A: 0.87 | B: 0.86 | C: 0.87 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

NCNN

NCNN 20220729, Target: CPU - Model: mnasnet (ms; fewer is better)
A: 6.98 (min 6.64, max 8.35) | B: 6.93 (min 6.68, max 8.21) | C: 6.90 (min 6.7, max 8.29) (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

OpenVINO 2022.2.dev, Model: Weld Porosity Detection FP16 - Device: CPU (FPS; more is better)
A: 157.66 | B: 159.45 | C: 158.87 (SE +/- 0.18, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev, Model: Weld Porosity Detection FP16 - Device: CPU (ms; fewer is better)
A: 25.33 (min 20.28, max 48.91) | B: 25.05 (min 22.53, max 48.72) | C: 25.14 (min 23.44, max 48.41) (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5, Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second; more is better)
A: 4.61 | B: 4.66 | C: 4.65 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

NCNN

NCNN 20220729, Target: CPU - Model: squeezenet_ssd (ms; fewer is better)
A: 40.60 (min 39.28, max 100.58) | B: 40.68 (min 39.81, max 41.84) | C: 40.25 (min 39.04, max 41.52) (SE +/- 0.16, N = 3)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

OpenVINO 2022.2.dev, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms; fewer is better)
A: 2.88 (min 1.82, max 29.13) | B: 2.85 (min 1.94, max 7.85) | C: 2.86 (min 1.79, max 15.09) (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.
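SMHasher reports hashing throughput in MiB/sec. As an illustrative stand-in (this is not SMHasher's harness, and it uses Python's stdlib hashlib rather than the fasthash/wyhash families tested below; SHA3-256, which does appear later in this report, is available in hashlib), the metric can be sketched as:

```python
import hashlib
import time

def hash_throughput_mib_s(algo: str, size_mib: int = 8) -> float:
    """Hash a zero-filled buffer once and report MiB hashed per second."""
    data = bytes(size_mib * 1024 * 1024)
    digest = hashlib.new(algo)
    start = time.perf_counter()
    digest.update(data)
    elapsed = time.perf_counter() - start
    return size_mib / elapsed

print(f"sha3_256: {hash_throughput_mib_s('sha3_256'):.1f} MiB/sec")
```

A single pass like this is noisy; SMHasher averages many iterations, which is why the report also carries SE values.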

SMHasher 2022-08-22, Hash: fasthash32 (MiB/sec; more is better)
A: 13404.26 | B: 13509.01 | C: 13369.51 (SE +/- 20.42, N = 3)
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Update Random (Op/s; more is better)
A: 291471 | B: 292449 | C: 294446 (SE +/- 531.44, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38, Operation: Sharpen (Iterations Per Minute; more is better)
A: 102 | B: 101 | C: 101 (SE +/- 0.88, N = 3)
1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

SMHasher

SMHasher 2022-08-22, Hash: FarmHash128 (MiB/sec; more is better)
A: 32079.66 | B: 32387.63 | C: 32090.80 (SE +/- 2.39, N = 3)
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

OpenVINO

OpenVINO 2022.2.dev, Model: Face Detection FP16-INT8 - Device: CPU (FPS; more is better)
A: 2.09 | B: 2.10 | C: 2.11 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.
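The two build configurations benchmarked here differ in their configure flags. Assuming a typical CPython source checkout, the optimized release build described above is enabled roughly as follows (`--enable-optimizations` turns on profile-guided optimization and `--with-lto` adds link-time optimization; both are real CPython configure flags, though the exact invocation PTS uses may differ):

```shell
# "Default" configuration
./configure
make -j"$(nproc)"

# "Released Build, PGO + LTO Optimized" configuration
./configure --enable-optimizations --with-lto
make -j"$(nproc)"
```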

Timed CPython Compilation 3.10.6, Build Configuration: Default (Seconds; fewer is better)
A: 31.33 | B: 31.45 | C: 31.15

OpenVINO

OpenVINO 2022.2.dev, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms; fewer is better)
A: 2.12 (min 1.06, max 26.87) | B: 2.10 (min 1.26, max 4.29) | C: 2.12 (min 1.09, max 5.35) (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

oneDNN

oneDNN 2.7, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms; fewer is better)
A: 9.63379 (min 9.05) | B: 9.54653 (min 9.11) | C: 9.63536 (min 9.21) (SE +/- 0.02024, N = 3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

AOM AV1

AOM AV1 3.5, Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second; more is better)
A: 14.04 | B: 14.17 | C: 14.09 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec; more is better)
A: 4.40 | B: 4.38 | C: 4.42 (SE +/- 0.00, N = 3)

NCNN

NCNN 20220729, Target: Vulkan GPU - Model: googlenet (ms; fewer is better)
A: 11.50 (min 10.33, max 12.79) | B: 11.52 (min 10.69, max 12.47) | C: 11.42 (min 10.68, max 12.44) (SE +/- 0.05, N = 12)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25, Scene: 3 - Resolution: 4K (FPS; more is better)
A: 1.19 | B: 1.19 | C: 1.20 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

oneDNN

oneDNN 2.7, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
A: 8637.51 (min 8594.99) | B: 8686.17 (min 8651.11) | C: 8614.66 (min 8579.22) (SE +/- 5.30, N = 3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second; more is better)
A: 0.972 | B: 0.978 | C: 0.980 (SE +/- 0.002, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenVINO

OpenVINO 2022.2.dev, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS; more is better)
A: 2727.54 | B: 2749.47 | C: 2747.24 (SE +/- 4.92, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9, Time To Compile (Seconds; fewer is better)
A: 118.96 | B: 119.91 | C: 119.83 (SE +/- 0.62, N = 3)

NCNN

NCNN 20220729, Target: Vulkan GPU - Model: alexnet (ms; fewer is better)
A: 11.33 (min 10.56, max 12.69) | B: 11.42 (min 10.6, max 12.34) | C: 11.36 (min 10.62, max 12.35) (SE +/- 0.02, N = 12)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow

TensorFlow 2.10, Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec; more is better)
A: 30.47 | B: 30.68 | C: 30.71 (SE +/- 0.09, N = 3)

OpenVINO

OpenVINO 2022.2.dev, Model: Face Detection FP16 - Device: CPU (FPS; more is better)
A: 1.33 | B: 1.32 | C: 1.32 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

AOM AV1

AOM AV1 3.5, Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second; more is better)
A: 51.79 | B: 51.47 | C: 51.85 (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

NCNN

NCNN 20220729, Target: CPU - Model: efficientnet-b0 (ms; fewer is better)
A: 13.67 (min 12.95, max 15.62) | B: 13.67 (min 12.95, max 15.45) | C: 13.57 (min 12.93, max 15.25) (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed CPython Compilation

Timed CPython Compilation 3.10.6, Build Configuration: Released Build, PGO + LTO Optimized (Seconds; fewer is better)
A: 363.74 | B: 366.38 | C: 365.70

NCNN

NCNN 20220729, Target: Vulkan GPU - Model: resnet50 (ms; fewer is better)
A: 18.16 (min 17.15, max 19.52) | B: 18.23 (min 17.32, max 19.43) | C: 18.29 (min 17.37, max 19.04) (SE +/- 0.04, N = 12)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.
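Y-Cruncher's workload is arbitrary-precision Pi computation. As a toy illustration of the same task (Machin's formula with Python's decimal module; y-cruncher itself uses far faster Chudnovsky-style algorithms, so this is nowhere near comparable in performance):

```python
from decimal import Decimal, getcontext

def pi_digits(digits: int) -> str:
    """Compute Pi via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = digits + 10          # guard digits
    eps = Decimal(10) ** -(digits + 5)

    def arctan_inv(x: int) -> Decimal:       # arctan(1/x) by Taylor series
        x = Decimal(x)
        power = Decimal(1) / x               # 1/x^(2k+1)
        total = Decimal(0)
        k = 0
        while power > eps:
            term = power / (2 * k + 1)
            total += term if k % 2 == 0 else -term
            power /= x * x
            k += 1
        return total

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return str(pi)[: digits + 2]             # "3." plus the requested digits

print(pi_digits(30))  # prints 3.141592653589793238462643383279
```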

Y-Cruncher 0.7.10.9513, Pi Digits To Calculate: 1B (Seconds; fewer is better)
A: 173.90 | B: 173.32 | C: 172.68 (SE +/- 0.08, N = 3)

TensorFlow

TensorFlow 2.10, Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec; more is better)
A: 4.43 | B: 4.44 | C: 4.46 (SE +/- 0.00, N = 3)

QuadRay

QuadRay 2022.05.25, Scene: 1 - Resolution: 4K (FPS; more is better)
A: 4.44 | B: 4.47 | C: 4.47 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s; more is better)
A: 106.2 | B: 106.2 | C: 105.5 (SE +/- 0.22, N = 3)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

oneDNN

oneDNN 2.7, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms; fewer is better)
A: 8686.14 (min 8636.96) | B: 8629.97 (min 8579.92) | C: 8640.73 (min 8601.17) (SE +/- 7.23, N = 3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

NCNN

NCNN 20220729, Target: Vulkan GPU - Model: blazeface (ms; fewer is better)
A: 1.56 (min 1.49, max 2.32) | B: 1.55 (min 1.49, max 2.35) | C: 1.55 (min 1.5, max 2.11) (SE +/- 0.00, N = 12)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53, Resolution: 2560 x 1440 - Effects Quality: Ultra (Frames Per Second; more is better)
A: 110.6 | B: 109.9 | C: 109.9 (SE +/- 0.46, N = 3)

GraphicsMagick

GraphicsMagick 1.3.38, Operation: Enhanced (Iterations Per Minute; more is better)
A: 159 | B: 160 | C: 160 (SE +/- 1.20, N = 3)
1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

oneDNN

oneDNN 2.7, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms; fewer is better)
A: 14.28 (min 13.95) | B: 14.19 (min 13.89) | C: 14.28 (min 13.99) (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

NCNN

NCNN 20220729, Target: Vulkan GPU - Model: efficientnet-b0 (ms; fewer is better)
A: 13.11 (min 11.9, max 14.23) | B: 13.03 (min 12.1, max 13.9) | C: 13.03 (min 11.94, max 13.96) (SE +/- 0.03, N = 12)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

srsRAN

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s; more is better)
A: 301.7 | B: 301.7 | C: 300.0 (SE +/- 0.66, N = 3)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8, Time To Compile (Seconds; fewer is better)
A: 1318.03 | B: 1310.61 | C: 1311.33 (SE +/- 0.95, N = 3)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10, Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds; fewer is better)
A: 106.19 | B: 106.19 | C: 106.78
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

TensorFlow

TensorFlow 2.10, Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec; more is better)
A: 23.71 | B: 23.83 | C: 23.84 (SE +/- 0.02, N = 3)

SMHasher

SMHasher 2022-08-22, Hash: wyhash (MiB/sec; more is better)
A: 47037.33 | B: 47283.81 | C: 47158.66 (SE +/- 51.40, N = 3)
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

oneDNN

oneDNN 2.7, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
A: 8630.92 (min 8567.27) | B: 8592.08 (min 8562.35) | C: 8586.80 (min 8549.18) (SE +/- 22.45, N = 3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

NCNN

NCNN 20220729, Target: CPU - Model: alexnet (ms; fewer is better)
A: 16.82 (min 16.41, max 18.04) | B: 16.90 (min 16.54, max 17.43) | C: 16.84 (min 16.47, max 17.4) (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better)
A: 10.58 (min 9.93, max 11.96) | B: 10.56 (min 9.99, max 13.06) | C: 10.53 (min 10.03, max 12.39) (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Unvanquished

Unvanquished 0.53, Resolution: 3840 x 2160 - Effects Quality: Medium (Frames Per Second; more is better)
A: 107.0 | B: 107.1 | C: 107.5 (SE +/- 0.12, N = 3)

AOM AV1

AOM AV1 3.5, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second; more is better)
A: 50.92 | B: 51.06 | C: 51.15 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Random Fill (Op/s; more is better)
A: 549392 | B: 551829 | C: 551546 (SE +/- 1564.28, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

OpenVINO 2022.2.dev, Model: Face Detection FP16 - Device: CPU (ms; fewer is better)
A: 3002.50 (min 2859.06, max 3143.06) | B: 3014.86 (min 2920.66, max 3180.59) | C: 3015.78 (min 2912.64, max 3118.77) (SE +/- 1.33, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Y-Cruncher

Y-Cruncher 0.7.10.9513, Pi Digits To Calculate: 500M (Seconds; fewer is better)
A: 76.93 | B: 77.15 | C: 76.83 (SE +/- 0.01, N = 3)

OpenFOAM

OpenFOAM 10, Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds; fewer is better)
A: 1434.16 | B: 1440.02 | C: 1436.57
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.6, VGR Performance Metric (more is better)
A: 90251 | B: 90606 | C: 90523
1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm

Blender

Blender 3.3, Blend File: BMW27 - Compute: CPU-Only (Seconds; fewer is better)
A: 227.80 | B: 227.88 | C: 226.99 (SE +/- 0.29, N = 3)

SMHasher

SMHasher 2022-08-22, Hash: MeowHash x86_64 AES-NI (MiB/sec; more is better)
A: 76333.93 | B: 76261.77 | C: 76036.30 (SE +/- 18.54, N = 3)
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer, written in the Rust programming language, is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 2.3, Time To Compile (Seconds; fewer is better)
A: 126.24 | B: 126.41 | C: 125.92 (SE +/- 0.22, N = 3)
1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

TensorFlow

TensorFlow 2.10, Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec; more is better)
A: 13.21 | B: 13.21 | C: 13.26 (SE +/- 0.03, N = 3)

TensorFlow 2.10, Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec; more is better)
A: 13.24 | B: 13.29 | C: 13.29 (SE +/- 0.03, N = 3)

OpenFOAM

OpenFOAM 10, Input: motorBike - Execution Time (Seconds; fewer is better)
A: 460.35 | B: 462.05 | C: 461.84
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Read Random Write Random (Op/s; more is better)
A: 824795 | B: 824645 | C: 827539 (SE +/- 1407.09, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

srsRAN

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s; more is better)
A: 114.0 | B: 114.1 | C: 114.4 (SE +/- 0.21, N = 3)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

oneDNN

oneDNN 2.7, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
A: 48.59 (min 47.93) | B: 48.52 (min 48.04) | C: 48.42 (min 48.01) (SE +/- 0.08, N = 3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.
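C-Blosc's result is a raw throughput figure in MB/s. As a rough stand-in using the standard library's zlib (a very different and much slower codec than blosclz, so the numbers are not comparable; the buffer below is an arbitrary compressible sample), the same kind of metric can be measured like this:

```python
import time
import zlib

def compress_throughput_mb_s(data: bytes, level: int = 1):
    """Compress the buffer once; return (MB/s, compression ratio)."""
    start = time.perf_counter()
    packed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    mb = len(data) / 1e6
    return mb / elapsed, len(data) / len(packed)

# Highly compressible sample buffer (~16 MB of repeating text).
sample = b"the quick brown fox " * 800_000
speed, ratio = compress_throughput_mb_s(sample)
print(f"{speed:.0f} MB/s, ratio {ratio:.1f}x")
```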

C-Blosc 2.3, Test: blosclz bitshuffle (MB/s; more is better)
A: 2657.3 | B: 2664.1 | C: 2666.2 (SE +/- 4.29, N = 3)
1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

SMHasher

SMHasher 2022-08-22, Hash: t1ha2_atonce (MiB/sec; more is better)
A: 31569.77 | B: 31618.39 | C: 31514.44 (SE +/- 14.17, N = 3)
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

NCNN

NCNN 20220729, Target: CPU - Model: resnet18 (ms; fewer is better)
A: 24.42 (min 23.93, max 47.11) | B: 24.47 (min 24.12, max 25.52) | C: 24.39 (min 24.07, max 26.05) (SE +/- 0.09, N = 3)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729, Target: CPU - Model: regnety_400m (ms; fewer is better)
A: 15.88 (min 15.38, max 17.56) | B: 15.90 (min 15.44, max 25.7) | C: 15.85 (min 15.49, max 17.17) (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4, Encode Settings: Quality 100, Highest Compression (MP/s; more is better)
A: 3.22 | B: 3.23 | C: 3.22 (SE +/- 0.03, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm

NCNN

NCNN 20220729, Target: CPU - Model: vgg16 (ms; fewer is better)
A: 123.21 (min 122.18, max 128.35) | B: 123.30 (min 122.27, max 131.77) | C: 123.54 (min 122.7, max 165.33) (SE +/- 0.10, N = 3)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729, Target: Vulkan GPU - Model: squeezenet_ssd (ms; fewer is better)
A: 11.47 (min 10.38, max 28.65) | B: 11.46 (min 10.74, max 22.86) | C: 11.49 (min 10.67, max 14.52) (SE +/- 0.02, N = 12)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Unvanquished

Unvanquished 0.53, Resolution: 3840 x 2160 - Effects Quality: High (Frames Per Second; more is better)
A: 84.7 | B: 84.7 | C: 84.5 (SE +/- 0.12, N = 3)

AOM AV1

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better; SE +/- 0.15, N = 3): A: 43.11, B: 43.21, C: 43.18. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better; SE +/- 0.02, N = 12): A: 40.67 (min 39.9 / max 42.18), B: 40.63 (min 39.96 / max 41.46), C: 40.72 (min 40.3 / max 41.82). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: SHA3-256 (MiB/sec, more is better; SE +/- 1.36, N = 3): A: 298.27, B: 298.66, C: 298.06. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
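The SHA3-256 figure is raw hashing throughput over a buffer. As a rough, hedged point of comparison (this is not the SMHasher harness, and absolute numbers from Python's stdlib hashlib will differ from SMHasher's native-optimized C++ build), the same MiB/sec quantity can be measured like so; the buffer sizes are illustrative:

```python
import hashlib
import time

def sha3_256_throughput(total_mib=64, chunk_mib=1):
    """Measure SHA3-256 hashing throughput in MiB/sec over a zero-filled buffer."""
    chunk = b"\x00" * (chunk_mib * 1024 * 1024)
    h = hashlib.sha3_256()
    start = time.perf_counter()
    for _ in range(total_mib // chunk_mib):
        h.update(chunk)
    elapsed = time.perf_counter() - start
    return total_mib / elapsed

if __name__ == "__main__":
    print(f"SHA3-256: {sha3_256_throughput():.1f} MiB/sec")
```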

oneDNN

oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better; SE +/- 0.00123, N = 3): A: 9.70453 (min 9.54), B: 9.71677 (min 9.57), C: 9.72365 (min 9.57). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; SE +/- 0.00466, N = 3): A: 6.79016 (min 6.34), B: 6.78025 (min 6.4), C: 6.79224 (min 6.32). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Unvanquished

Unvanquished is a modern fork of the Tremulous first-person shooter. It is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine, and supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 3840 x 2160 - Effects Quality: Ultra (Frames Per Second, more is better; SE +/- 0.00, N = 3): A: 59.1, B: 59.1, C: 59.2

oneDNN

oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; SE +/- 0.00172, N = 3): A: 4.80714 (min 4.69), B: 4.81270 (min 4.71), C: 4.81470 (min 4.7). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: Spooky32 (MiB/sec, more is better; SE +/- 22.39, N = 3): A: 29015.13, B: 29006.35, C: 28973.09. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: googlenet (ms, fewer is better; SE +/- 0.05, N = 3): A: 30.39 (min 29.53 / max 63.12), B: 30.35 (min 29.67 / max 31.69), C: 30.36 (min 29.66 / max 31.93). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better; SE +/- 0.01, N = 3): A: 51.34 (min 50.69), B: 51.36 (min 50.7), C: 51.35 (min 50.16). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

spaCy

spaCy is a leading open-source Python library for advanced natural language processing (NLP). This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1 - Model: en_core_web_trf (tokens/sec, more is better; SE +/- 0.88, N = 3): A: 368, B: 368, C: 368

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better; SE +/- 0.00, N = 3): A: 0.13, B: 0.13, C: 0.13. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Quality 95, Compression Effort 7 (MP/s, more is better; SE +/- 0.00, N = 3): A: 0.04, B: 0.04, C: 0.04. 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode 20220823 - Encode Settings: Quality 75, Compression Effort 7 (MP/s, more is better; SE +/- 0.00, N = 3): A: 0.08, B: 0.08, C: 0.08. 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless (MP/s, more is better; SE +/- 0.00, N = 3): A: 1.15, B: 1.15, C: 1.15. 1. (CC) gcc options: -fvisibility=hidden -O2 -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better; SE +/- 0.62, N = 12): A: 21.99 (min 18.94 / max 43.4), B: 19.91 (min 18.96 / max 29.39), C: 22.26 (min 18.95 / max 35.32). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better; SE +/- 0.89, N = 12): A: 15.69 (min 13.68 / max 34.77), B: 14.54 (min 13.65 / max 16.99), C: 14.77 (min 13.84 / max 30.44). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

The following oneDNN 2.7 harnesses, all using the bf16bf16bf16 data type on the CPU engine, did not produce a result on any of the three runs (A, B, C):

Matrix Multiply Batch Shapes Transformer
Deconvolution Batch shapes_3d
Deconvolution Batch shapes_1d
Convolution Batch Shapes Auto
IP Shapes 3D
IP Shapes 1D
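The uniformly missing bf16 results are plausibly because the Ryzen 7 4800U (Zen 2 "Renoir") lacks native bfloat16 instructions, which oneDNN's bf16 primitives generally require; this explanation is an inference, not something the result file states. A minimal sketch for checking whether a Linux CPU advertises a bf16 flag (avx512_bf16 and amx_bf16 are the standard /proc/cpuinfo flag names):

```python
def cpu_supports_bf16(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises a native bfloat16 instruction flag.

    Only meaningful on Linux; returns False when cpuinfo is unreadable.
    """
    bf16_flags = {"avx512_bf16", "amx_bf16"}  # flag names as reported by the Linux kernel
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return bool(bf16_flags & set(line.split()))
    except OSError:
        pass  # non-Linux system or unreadable file: treat as unsupported
    return False

if __name__ == "__main__":
    print("native bf16:", cpu_supports_bf16())
```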

198 Results Shown

QuadRay
SVT-AV1
Unvanquished
NCNN
GraphicsMagick
ClickHouse:
  100M Rows Web Analytics Dataset, Third Run
  100M Rows Web Analytics Dataset, First Run / Cold Cache
QuadRay
srsRAN
Mobile Neural Network
oneDNN
AOM AV1
NCNN
oneDNN
Unvanquished
GraphicsMagick
AOM AV1:
  Speed 6 Realtime - Bosphorus 4K
  Speed 4 Two-Pass - Bosphorus 1080p
Unvanquished
NCNN
SMHasher
NCNN
WebP Image Encode
Unvanquished
Mobile Neural Network
QuadRay
Mobile Neural Network:
  mobilenetV3
  MobileNetV2_224
  nasnet
GraphicsMagick
SVT-AV1
Mobile Neural Network
QuadRay
Mobile Neural Network
QuadRay
OpenVINO
NCNN
OpenVINO
srsRAN
Facebook RocksDB
NCNN
Facebook RocksDB
WebP Image Encode
AOM AV1:
  Speed 9 Realtime - Bosphorus 4K
  Speed 0 Two-Pass - Bosphorus 1080p
NCNN
Unvanquished
oneDNN
AOM AV1
oneDNN
GraphicsMagick
NCNN
SVT-AV1
OpenVINO
srsRAN
OpenVINO
Facebook RocksDB
SVT-AV1
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 256-QAM
  4G PHY_DL_Test 100 PRB SISO 64-QAM
OpenVINO
AOM AV1
NCNN
Unvanquished
srsRAN
Unvanquished
OpenVINO
oneDNN
srsRAN
NCNN
Mobile Neural Network
OpenVINO
Unvanquished
OpenVINO
7-Zip Compression
FLAC Audio Encoding
NCNN
WebP2 Image Encode
OpenVINO
OpenFOAM
NCNN
oneDNN:
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
srsRAN
NCNN
SVT-AV1
AOM AV1
ClickHouse
GraphicsMagick
NCNN
C-Blosc
QuadRay
OpenVINO
spaCy
OpenVINO
NCNN
WebP2 Image Encode
SMHasher
oneDNN
OpenVINO
Facebook RocksDB
OpenVINO
SVT-AV1
7-Zip Compression
NCNN
OpenVINO
SVT-AV1
Timed Erlang/OTP Compilation
OpenVINO:
  Face Detection FP16-INT8 - CPU
  Person Detection FP16 - CPU
NCNN
OpenVINO:
  Weld Porosity Detection FP16 - CPU:
    FPS
    ms
AOM AV1
NCNN
OpenVINO
SMHasher
Facebook RocksDB
GraphicsMagick
SMHasher
OpenVINO
Timed CPython Compilation
OpenVINO
oneDNN
AOM AV1
TensorFlow
NCNN
QuadRay
oneDNN
SVT-AV1
OpenVINO
Timed PHP Compilation
NCNN
TensorFlow
OpenVINO
AOM AV1
NCNN
Timed CPython Compilation
NCNN
Y-Cruncher
TensorFlow
QuadRay
srsRAN
oneDNN
NCNN
Unvanquished
GraphicsMagick
oneDNN
NCNN
srsRAN
Timed Node.js Compilation
OpenFOAM
TensorFlow
SMHasher
oneDNN
NCNN:
  CPU - alexnet
  CPU-v2-v2 - mobilenet-v2
Unvanquished
AOM AV1
Facebook RocksDB
OpenVINO
Y-Cruncher
OpenFOAM
BRL-CAD
Blender
SMHasher
Timed Wasmer Compilation
TensorFlow:
  CPU - 16 - GoogLeNet
  CPU - 32 - GoogLeNet
OpenFOAM
Facebook RocksDB
srsRAN
oneDNN
C-Blosc
SMHasher
NCNN:
  CPU - resnet18
  CPU - regnety_400m
WebP Image Encode
NCNN:
  CPU - vgg16
  Vulkan GPU - squeezenet_ssd
Unvanquished
AOM AV1
NCNN
SMHasher
oneDNN:
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
Unvanquished
oneDNN
SMHasher
NCNN
oneDNN
spaCy
AOM AV1
WebP2 Image Encode:
  Quality 95, Compression Effort 7
  Quality 75, Compression Effort 7
WebP Image Encode
NCNN:
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - mobilenet