12400 nnn

Intel Core i5-12400 testing with an MSI PRO Z690-A WIFI DDR4 (MS-7D25) v1.0 (Dasharo coreboot+UEFI v1.0.0 BIOS) and MSI Intel ADL-S GT1 14GB graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209127-NE-12400NNN983
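
For reference, a hedged sketch of reproducing this comparison on an Ubuntu machine (the package name below is the Ubuntu/Debian one and is an assumption; any supported install method works):

  sudo apt install phoronix-test-suite                      # install the Phoronix Test Suite
  phoronix-test-suite benchmark 2209127-NE-12400NNN983      # run the same tests and compare against this result file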
Tests in this result file fall within the following categories:

Timed Code Compilation 5 Tests
C/C++ Compiler Tests 7 Tests
Compression Tests 2 Tests
CPU Massive 10 Tests
Creator Workloads 11 Tests
Database Test Suite 6 Tests
Encoding 2 Tests
Game Development 2 Tests
HPC - High Performance Computing 4 Tests
Imaging 4 Tests
Machine Learning 3 Tests
Multi-Core 14 Tests
NVIDIA GPU Compute 2 Tests
Programmer / Developer System Benchmarks 6 Tests
Python Tests 2 Tests
Renderers 2 Tests
Server 6 Tests
Server CPU Tests 5 Tests
Single-Threaded 3 Tests
Common Workstation Benchmarks 2 Tests

Test Runs

Run A: September 11 2022 (test duration: 1 Day, 7 Hours, 59 Minutes)
Run B: September 11 2022 (test duration: 1 Day, 7 Hours, 2 Minutes)
Run C: September 12 2022 (test duration: 1 Day, 6 Hours, 42 Minutes)
Average test duration: 1 Day, 7 Hours, 14 Minutes

12400 Nnn Benchmarks - System Details

Processor: Intel Core i5-12400 @ 5.60GHz (6 Cores / 12 Threads)
Motherboard: MSI PRO Z690-A WIFI DDR4 (MS-7D25) v1.0 (Dasharo coreboot+UEFI v1.0.0 BIOS)
Chipset: Intel Device 7aa7
Memory: 16GB
Disk: Western Digital WD_BLACK SN750 SE 500GB
Graphics: MSI Intel ADL-S GT1 14GB (1450MHz)
Audio: Realtek ALC897
Monitor: DELL S2409W
Network: Intel I225-V + Intel Device 7af0
OS: Ubuntu 22.04
Kernel: 5.15.0-40-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Kernel Notes: Transparent Huge Pages: madvise
- Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x1f - Thermald 2.4.9
- Python Notes: Python 3.10.4
- Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Result Overview chart: relative performance of runs A, B, and C (spanning roughly 100% to 105%) across Apache CouchDB, Aircrack-ng, FLAC Audio Encoding, WebP2 Image Encode, Natron, memtier_benchmark, Timed Wasmer Compilation, OpenVINO, SVT-AV1, ClickHouse, C-Blosc, GraphicsMagick, Redis, Facebook RocksDB, WebP Image Encode, LAMMPS Molecular Dynamics Simulator, Unpacking The Linux Kernel, Timed PHP Compilation, Mobile Neural Network, Inkscape, Timed Erlang/OTP Compilation, Timed CPython Compilation, srsRAN, Dragonflydb, 7-Zip Compression, NCNN, Blender, BRL-CAD, Unvanquished, Primesieve, ASTC Encoder, and Timed Node.js Compilation.]

[Condensed results table: side-by-side values for runs A, B, and C across every test in this file; the individual results are broken out in the sections below.]

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Quality 95, Compression Effort 7 (MP/s, more is better): A: 0.05, B: 0.05, C: 0.04. 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.2.2 - Bulk Size: 300 - Inserts: 3000 - Rounds: 30 (Seconds, fewer is better): A: 1077.45, B: 1324.70. 1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better): A: 143.69, B: 118.42, C: 134.98. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better): A: 27.82 (MIN: 20.55 / MAX: 72.72), B: 33.75 (MIN: 19.39 / MAX: 45.57), C: 29.61 (MIN: 19.73 / MAX: 45.94). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
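
As a hedged illustration of how a "Clients / Set To Get Ratio" configuration like those below maps onto memtier_benchmark's command line (host, port, thread split, and runtime are placeholders; the test profile's exact invocation may differ):

  # 5 threads x 10 clients per thread = 50 connections, 1:1 set-to-get ratio against a local Redis
  memtier_benchmark --server=127.0.0.1 --port=6379 --protocol=redis \
      --threads=5 --clients=10 --ratio=1:1 --test-time=60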

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, more is better): A: 2556576.02, B: 2182172.40, C: 2256323.36. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.2.2 - Bulk Size: 300 - Inserts: 1000 - Rounds: 30 (Seconds, fewer is better): A: 195.56, B: 168.26, C: 172.76. 1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1 (Ops/sec, more is better): A: 2078165.77, B: 2053903.96, C: 1789262.36. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1 (Ops/sec, more is better): A: 2190464.39, B: 1906142.21, C: 2207607.06. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Quality 75, Compression Effort 7 (MP/s, more is better): A: 0.10, B: 0.09, C: 0.10. 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better): A: 28.30, B: 30.99, C: 30.92. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1 (Ops/sec, more is better): A: 2471856.79, B: 2260720.03, C: 2291270.04. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
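
A hedged sketch of driving a comparable GET workload with the bundled redis-benchmark tool (connection and request counts are illustrative, not necessarily this test profile's exact invocation):

  # 1000 parallel connections issuing 1 million GET requests against a local server
  redis-benchmark -h 127.0.0.1 -p 6379 -t get -c 1000 -n 1000000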

Redis 7.0.4 - Test: GET - Parallel Connections: 1000 (Requests Per Second, more is better): A: 3450479.25, B: 3515495.75, C: 3218151.75. 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 (Ops/sec, more is better): A: 2379834.00, B: 2388167.67, C: 2590307.90. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 (Ops/sec, more is better): A: 2550490.86, B: 2652882.35, C: 2437392.24. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Sequential Fill (Op/s, more is better): A: 1410830, B: 1455436, C: 1513059. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, more is better): A: 924, B: 989, C: 990. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, Memtier_benchmark is used as a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:1 (Ops/sec, more is better): A: 2893948.93, B: 2956169.14, C: 2768715.57. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1 (Ops/sec, more is better): A: 2147254.39, B: 2291531.98, C: 2179047.93. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, more is better): A: 2533010.01, B: 2437654.47, C: 2383162.78. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Update Random (Op/s, more is better): A: 669994, B: 661339, C: 632275. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Sharpen (Iterations Per Minute, more is better): A: 107, B: 101, C: 101. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec, more is better): A: 2394979.44, B: 2448790.70, C: 2521142.28. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
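
As a hedged illustration of how such encode settings translate to the cwebp command line (the file names are placeholders; the test profile's exact arguments may differ):

  # lossless, quality 100, slowest/highest compression method, multi-threaded
  cwebp -lossless -q 100 -m 6 -mt sample.jpg -o sample.webp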

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless, Highest Compression (MP/s, more is better): A: 0.63, B: 0.61, C: 0.64. 1. (CC) gcc options: -fvisibility=hidden -O2 -lm

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, more is better): A: 900, B: 944, C: 941. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.
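
Aircrack-ng includes a built-in CPU cracking-speed benchmark that can be run standalone; a minimal sketch, assuming no capture file is needed:

  # report WPA/PSK key cracking speed in keys per second
  aircrack-ng -S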

Aircrack-ng 1.7 (k/s, more is better): A: 23816.46, B: 22726.61, C: 23834.90. 1. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

FLAC Audio Encoding

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, fewer is better): A: 16.06, B: 16.81, C: 16.14. 1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: OFDM_Test (Samples / Second, more is better): A: 146700000, B: 153000000, C: 152200000. 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, more is better): A: 59.62, B: 61.62, C: 62.08. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3 - Input: Spaceship (FPS, more is better): A: 2.5, B: 2.6, C: 2.6.

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: GET - Parallel Connections: 50 (Requests Per Second, more is better): A: 4043025.50, B: 3893544.25, C: 3923159.50. 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.
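
For reference, the geometric mean of n per-query results x_1, ..., x_n is (x_1 * x_2 * ... * x_n)^(1/n), which keeps a few unusually fast or slow queries from dominating the reported queries-per-minute figure.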

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, more is better): A: 130.37 (MIN: 8.15 / MAX: 20000), B: 125.64 (MIN: 8.76 / MAX: 8571.43), C: 128.47 (MIN: 7.82 / MAX: 15000). 1. ClickHouse server version 22.5.4.19 (official build).

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Fill (Op/s, more is better): A: 841127, B: 857849, C: 872567. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: resnet18 (ms, fewer is better): A: 9.97 (MIN: 9.85 / MAX: 11.26), B: 10.32 (MIN: 10.18 / MAX: 11.88), C: 10.34 (MIN: 10.22 / MAX: 11.46). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, more is better): A: 126.64 (MIN: 8.39 / MAX: 20000), B: 129.40 (MIN: 8.55 / MAX: 20000), C: 131.32 (MIN: 5.96 / MAX: 30000). 1. ClickHouse server version 22.5.4.19 (official build).

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Read While Writing (Op/s, more is better): A: 1561035, B: 1528756, C: 1583316. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 (Ops/sec, more is better): A: 2371007.82, B: 2290690.91, C: 2309526.13. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Swirl (Iterations Per Minute, more is better): A: 389, B: 376, C: 382. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute, more is better): A: 796, B: 822, C: 806. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Compression Effort 5 (MP/s, more is better): A: 2.91, B: 2.84, C: 2.93. 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, more is better): A: 2216344.47, B: 2191068.09, C: 2152632.21. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Enhanced (Iterations Per Minute, more is better): A: 169, B: 174, C: 172. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better): A: 12.02 (MIN: 10.83 / MAX: 59.9), B: 12.37 (MIN: 10.85 / MAX: 59.04), C: 12.13 (MIN: 10.83 / MAX: 55.09). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better): A: 332.45, B: 323.11, C: 329.51. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): A: 2.57 (MIN: 2.52 / MAX: 3.39), B: 2.60 (MIN: 2.56 / MAX: 3.62), C: 2.64 (MIN: 2.57 / MAX: 3.63). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, more is better): A: 1.16, B: 1.13, C: 1.16. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 (Ops/sec, more is better): A: 2361490.32, B: 2323372.49, C: 2380367.36. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better): A: 2.73 (MIN: 2.7 / MAX: 3.57), B: 2.79 (MIN: 2.75 / MAX: 3.84), C: 2.76 (MIN: 2.73 / MAX: 4). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 2.3 - Time To Compile (Seconds, fewer is better): A: 71.50, B: 70.76, C: 72.28. 1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Read (Op/s, more is better): A: 46480389, B: 46884294, C: 47409319. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, Memtier_benchmark is used as a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, more is better): A: 3291442.71, B: 3292038.33, C: 3356477.98. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, fewer is better): A: 2.54 (MIN: 2.5 / MAX: 3.47), B: 2.59 (MIN: 2.55 / MAX: 3.59), C: 2.58 (MIN: 2.54 / MAX: 3.53). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): A: 3.14 (MIN: 3.08 / MAX: 4.08), B: 3.20 (MIN: 3.13 / MAX: 4.26), C: 3.17 (MIN: 3.11 / MAX: 4.22). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better): A: 184.63 (MIN: 166.48 / MAX: 277), B: 188.14 (MIN: 168.53 / MAX: 283.55), C: 185.45 (MIN: 166.51 / MAX: 234.4). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better): A: 21.63, B: 21.23, C: 21.55. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, Memtier_benchmark is used as a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, more is better): A: 3234764.70, B: 3180042.72, C: 3177766.60. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, more is better): A: 1.12, B: 1.14, C: 1.14. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3 - Test: blosclz bitshuffle (MB/s, more is better): A: 8710.1, B: 8863.5, C: 8812.3. 1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: nasnet (ms, fewer is better): A: 8.802 (MIN: 8.73 / MAX: 16.46), B: 8.890 (MIN: 8.82 / MAX: 10.27), C: 8.957 (MIN: 8.89 / MAX: 9.69). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, fewer is better): A: 7.66 (MIN: 7.57 / MAX: 8.69), B: 7.79 (MIN: 7.69 / MAX: 8.98), C: 7.79 (MIN: 7.7 / MAX: 8.86). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, more is better): A: 6.345, B: 6.444, C: 6.443. 1. (CXX) g++ options: -O3 -lm -ldl

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, more is better): A: 156.1, B: 158.5, C: 157.6. 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, fewer is better): A: 3.740 (MIN: 3.67 / MAX: 4.03), B: 3.763 (MIN: 3.68 / MAX: 3.98), C: 3.796 (MIN: 3.73 / MAX: 4.39). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better): A: 5.41 (MIN: 5.34 / MAX: 6.81), B: 5.49 (MIN: 5.41 / MAX: 6.75), C: 5.45 (MIN: 5.38 / MAX: 6.62). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: googlenet (ms, fewer is better): A: 10.31 (MIN: 10.15 / MAX: 17.15), B: 10.46 (MIN: 10.34 / MAX: 11.93), C: 10.41 (MIN: 10.29 / MAX: 12.25). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Fill Sync (Op/s, more is better): A: 3134, B: 3091, C: 3118. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
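
The integrated benchmark can also be run by hand; a hedged sketch (the thread count is a placeholder matching this CPU's 12 threads):

  # run 7-Zip's built-in benchmark with 12 threads
  7z b -mmt12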

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better): A: 39968, B: 40510, C: 40328. 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: SET - Parallel Connections: 50 (Requests Per Second, more is better): A: 3061154.5, B: 3047720.0, C: 3088000.0. 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.
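
A hedged sketch of the configure step such a PGO + LTO release build implies (these are standard CPython configure options, though the test profile's exact arguments may differ):

  # profile-guided optimization plus link-time optimization
  ./configure --enable-optimizations --with-lto
  make -j"$(nproc)"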

Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, fewer is better): A: 245.38, B: 246.62, C: 248.58.

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: SET - Parallel Connections: 1000 (Requests Per Second, more is better): A: 3036378.50, B: 3075085.75, C: 3070176.75. 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, more is better): A: 441.8, B: 446.0, C: 447.3. 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, Memtier_benchmark is used as a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, more is better): A: 3540006.65, B: 3504814.41, C: 3496653.75. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.2.2 - Bulk Size: 100 - Inserts: 1000 - Rounds: 30 (Seconds, fewer is better): A: 90.29, B: 89.18, C: 89.70. 1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: blazeface (ms, fewer is better): A: 0.82 (MIN: 0.8 / MAX: 1.6), B: 0.83 (MIN: 0.82 / MAX: 1.01), C: 0.83 (MIN: 0.81 / MAX: 1.04). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 334.03, B: 338.08, C: 335.94. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, fewer is better): A: 492.08 (MIN: 470.25 / MAX: 542.76), B: 495.99 (MIN: 469.78 / MAX: 577.85), C: 497.79 (MIN: 474.87 / MAX: 575.57). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3 - Test: blosclz shuffle (MB/s, more is better): A: 14819.2, B: 14987.7, C: 14905.8. 1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, fewer is better): A: 3440.07 (MIN: 3275.56 / MAX: 3921.02), B: 3450.77 (MIN: 3109.25 / MAX: 5914.12), C: 3412.02 (MIN: 3188.48 / MAX: 4927.3). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, more is better): A: 500.0, B: 494.4, C: 499.9. 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, Memtier_benchmark is used as a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:5 (Ops/sec, more is better): A: 3069817.74, B: 3063805.00, C: 3098427.56. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better): A: 7.13, B: 7.21, C: 7.18. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better): A: 560.65 (MIN: 501.92 / MAX: 644.74), B: 554.46 (MIN: 501.44 / MAX: 649.37), C: 556.50 (MIN: 502.62 / MAX: 638.24). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better): A: 2831.92 (MIN: 2703.51 / MAX: 3289.07), B: 2841.52 (MIN: 2811.41 / MAX: 2910.81), C: 2863.52 (MIN: 2829.3 / MAX: 3181.6). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile is building the OpenMP / CPU threaded version for processor benchmarking and not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, fewer is better): A: 21.75 (MIN: 21.38 / MAX: 22.91), B: 21.52 (MIN: 21.2 / MAX: 29.16), C: 21.74 (MIN: 21.36 / MAX: 28.59). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: SET - Parallel Connections: 500 (Requests Per Second, more is better): A: 3057410.75, B: 3090099.25, C: 3081719.25. 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 86.35, B: 86.90, C: 86.00. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better): A: 86.72, B: 87.54, C: 87.63. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel source tree package. Learn more via the OpenBenchmarking.org test page.
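
A hedged sketch of the equivalent manual step (the tarball path is a placeholder):

  # extract the xz-compressed kernel source tree and report the elapsed time
  time tar -xJf linux-5.19.tar.xz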

Unpacking The Linux Kernel 5.19 - linux-5.19.tar.xz (Seconds, fewer is better): A: 7.005, B: 6.933, C: 6.976.

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, fewer is better): A: 21.55 (MIN: 21.35 / MAX: 21.92), B: 21.48 (MIN: 21.23 / MAX: 21.95), C: 21.33 (MIN: 21.15 / MAX: 21.69). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better): A: 1.043 (MIN: 1.03 / MAX: 1.85), B: 1.037 (MIN: 1.02 / MAX: 1.89), C: 1.047 (MIN: 1.03 / MAX: 1.85). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, Memtier_benchmark is used as a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 5:1 (Ops/sec, more is better): A: 2780403.66, B: 2785310.85, C: 2807087.71. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, fewer is better): A: 69.99, B: 70.66, C: 70.16.

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better): A: 898.18 (MIN: 852.8 / MAX: 953.89), B: 903.06 (MIN: 870.7 / MAX: 944.82), C: 906.50 (MIN: 874.76 / MAX: 944). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute, more is better): A: 217, B: 219, C: 219. 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, more is better): A: 111.96 (MIN: 7 / MAX: 12000), B: 111.76 (MIN: 5.72 / MAX: 10000), C: 112.78 (MIN: 6.92 / MAX: 15000). 1. ClickHouse server version 22.5.4.19 (official build).

Blender

Blender 3.3 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better): A: 172.85, B: 171.31, C: 172.27.

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better): A: 1.14 (MIN: 1.01 / MAX: 7.24), B: 1.14 (MIN: 1.01 / MAX: 7.26), C: 1.13 (MIN: 1.01 / MAX: 7.35). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6 - Build Configuration: Default (Seconds, fewer is better): A: 19.88, B: 19.71, C: 19.80.

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better): A: 5.259, B: 5.213, C: 5.249. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better): A: 15.08 (MIN: 14.92 / MAX: 16.65), B: 15.02 (MIN: 14.9 / MAX: 16.5), C: 15.15 (MIN: 14.99 / MAX: 16.43). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, fewer is better): A: 2579.07 (MIN: 2416.44 / MAX: 4596.08), B: 2566.17 (MIN: 2417.58 / MAX: 2783.8), C: 2557.91 (MIN: 2421.24 / MAX: 2865.14). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better): A: 2.358 (MIN: 2.31 / MAX: 2.57), B: 2.366 (MIN: 2.31 / MAX: 2.55), C: 2.377 (MIN: 2.32 / MAX: 2.67). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 1080p (Frames Per Second; more is better): A: 192.23, B: 192.99, C: 191.45

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s; more is better): A: 448.4, B: 451.8, C: 449.3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mobilenet (ms; fewer is better): A: 12.24, B: 12.21, C: 12.30 (min/max: 12.05/12.47, 12.05/12.51, 12.11/12.65)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms; fewer is better): A: 2.195, B: 2.197, C: 2.211 (min/max: 2.04/2.46, 2.04/4.54, 2.05/9.77)

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.2.2 - Bulk Size: 100 - Inserts: 3000 - Rounds: 30 (Seconds; fewer is better): A: 279.67, B: 281.70, C: 281.50
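Bulk insertion in CouchDB goes through the _bulk_docs endpoint; a minimal sketch of one such request with curl, using an illustrative database name and a two-document payload (the benchmark itself sends much larger batches):

  # POST a batch of documents to CouchDB in a single request.
  curl -X POST http://127.0.0.1:5984/benchdb/_bulk_docs \
       -H "Content-Type: application/json" \
       -d '{"docs": [{"_id": "doc-1", "value": 1}, {"_id": "doc-2", "value": 2}]}'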

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: blazeface (ms; fewer is better): A: 49.09, B: 48.74, C: 48.81 (min/max: 43.63/55.4, 41.9/57.51, 43.47/56.79)

Inkscape

Inkscape is an open-source vector graphics editor. This test profile times how long it takes to complete various operations by Inkscape. Learn more via the OpenBenchmarking.org test page.

Inkscape - Operation: SVG Files To PNG (Seconds; fewer is better): A: 21.07, B: 21.12, C: 20.97. Inkscape 1.1.2 (0a00cf5339, 2022-02-04)
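The SVG-to-PNG operation being timed corresponds to Inkscape's command-line export; a minimal sketch with illustrative file names (Inkscape 1.x CLI):

  # Rasterize an SVG to PNG without opening the GUI.
  inkscape --export-type=png --export-filename=drawing.png drawing.svg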

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms; fewer is better): A: 3.065, B: 3.044, C: 3.065 (min/max: 2.85/3.31, 2.83/3.61, 2.84/3.32)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s; more is better): A: 494.5, B: 492.9, C: 491.3

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS; more is better): A: 1.54, B: 1.54, C: 1.55

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS; more is better): A: 5191.97, B: 5190.80, C: 5224.02

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless (MP/s; more is better): A: 1.61, B: 1.60, C: 1.61
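The encode settings in these WebP charts map directly onto cwebp flags; a minimal sketch of the quality-100 lossless case with an illustrative input file:

  # Quality 100, lossless encode of a large JPEG source.
  cwebp -q 100 -lossless sample.jpg -o sample.webp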

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 25.0 - Time To Compile (Seconds; fewer is better): A: 110.92, B: 111.60, C: 111.06

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: FastestDet (ms; fewer is better): A: 3.32, B: 3.32, C: 3.34 (min/max: 3.29/3.55, 3.28/3.53, 3.3/3.54)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day; more is better): A: 6.525, B: 6.557, C: 6.564
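The Rhodopsin protein case is one of the stock inputs in the LAMMPS bench/ directory; a minimal sketch of running it by hand, assuming an MPI-enabled build (the rank count is illustrative for a 6-core part):

  # Run the rhodopsin benchmark input across 6 MPI ranks.
  mpirun -np 6 lmp -in in.rhodo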

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: mobilenet (ms; fewer is better): A: 640.10, B: 639.19, C: 642.89 (min/max: 622.52/676.1, 618.76/730.12, 623.67/765.76)

NCNN 20220729 - Target: Vulkan GPU - Model: FastestDet (ms; fewer is better): A: 97.27, B: 97.43, C: 96.87 (min/max: 91.44/103.85, 91.79/108.05, 90.1/104.97)

NCNN 20220729 - Target: Vulkan GPU - Model: resnet18 (ms; fewer is better): A: 514.12, B: 515.34, C: 516.93 (min/max: 485.62/550.54, 489.32/542.62, 493.25/551.72)

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS; more is better): A: 12248.29, B: 12208.42, C: 12274.19

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Default (MP/s; more is better): A: 5.83, B: 5.82, C: 5.85

Blender

Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds; fewer is better): A: 242.60, B: 241.40, C: 241.68

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: inception-v3 (ms; fewer is better): A: 25.74, B: 25.76, C: 25.86 (min/max: 24.6/32.94, 25.05/33.17, 25.64/33.03)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: vgg16 (ms; fewer is better): A: 51.20, B: 51.02, C: 51.25 (min/max: 50.92/52.86, 50.73/52.84, 51/52.91)

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Read Random Write Random (Op/s; more is better): A: 1569980, B: 1576451, C: 1577027

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second; more is better): A: 205.8, B: 206.7, C: 206.2

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s; more is better): A: 127.1, B: 127.6, C: 127.4

Blender

Blender 3.3 - Blend File: Classroom - Compute: CPU-Only (Seconds; fewer is better): A: 478.06, B: 476.21, C: 477.49

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: mnasnet (ms; fewer is better): A: 205.01, B: 205.14, C: 204.35 (min/max: 193.19/221.22, 195.85/226.66, 192.76/215.67)

NCNN 20220729 - Target: Vulkan GPU - Model: resnet50 (ms; fewer is better): A: 1362.23, B: 1366.64, C: 1367.26 (min/max: 1325.99/1412.93, 1340.24/1424.11, 1330.65/1405.28)

NCNN 20220729 - Target: Vulkan GPU - Model: googlenet (ms; fewer is better): A: 545.67, B: 545.92, C: 543.95 (min/max: 525.83/575.66, 527.25/576.91, 517.53/571.18)

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms; fewer is better): A: 8.41, B: 8.43, C: 8.40 (min/max: 7.46/48.89, 7.47/49.77, 7.47/48.35)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: efficientnet-b0 (ms; fewer is better): A: 293.29, B: 292.26, C: 292.89 (min/max: 274.15/323.92, 276.12/312.66, 274.99/320.88)

Blender

Blender 3.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds; fewer is better): A: 603.24, B: 604.10, C: 601.99

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Compression Rating (MIPS; more is better): A: 53070, B: 52989, C: 53168
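7-Zip's integrated benchmark is exposed as the 'b' command; a minimal sketch, with the thread-count switch shown as an illustrative option:

  # Run the built-in compression/decompression benchmark on 12 threads.
  7z b -mmt12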

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS; more is better): A: 712.15, B: 710.39, C: 712.78

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

Primesieve 8.0 - Length: 1e13 (Seconds; fewer is better): A: 342.26, B: 342.69, C: 343.39
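The primesieve command-line tool takes the sieving limit directly; a minimal sketch of the 1e13 case, with the thread flag shown as an illustrative option:

  # Count primes below 10^13 using 12 threads.
  primesieve 1e13 --threads=12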

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: regnety_400m (ms; fewer is better): A: 266.42, B: 266.93, C: 267.27 (min/max: 259.19/280.54, 259.65/277.8, 260.17/282.31)

NCNN 20220729 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms; fewer is better): A: 167.56, B: 167.14, C: 167.03 (min/max: 157.98/177.45, 157.61/177.49, 158.39/178.86)

NCNN 20220729 - Target: CPU - Model: vision_transformer (ms; fewer is better): A: 200.29, B: 200.59, C: 200.92 (min/max: 198.86/206.5, 198.98/206.57, 199.03/207.08)

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.6 - VGR Performance Metric (more is better): A: 118824, B: 118454, C: 118684

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s; more is better): A: 176.4, B: 176.7, C: 176.9

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Highest Compression (MP/s; more is better): A: 3.57, B: 3.56, C: 3.56

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second; more is better): A: 1.517, B: 1.517, C: 1.521

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: alexnet (ms; fewer is better): A: 8.10, B: 8.08, C: 8.10 (min/max: 8/9.18, 8.01/9.15, 8.01/9.19)

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

Primesieve 8.0 - Length: 1e12 (Seconds; fewer is better): A: 28.66, B: 28.63, C: 28.59

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: resnet50 (ms; fewer is better): A: 18.73, B: 18.71, C: 18.75 (min/max: 18.54/20.36, 18.55/20.43, 18.58/25.94)

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: GET - Parallel Connections: 500 (Requests Per Second; more is better): A: 3896578.50, B: 3902966.00, C: 3894757.25
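A GET workload with 500 parallel connections can be approximated by hand with the redis-benchmark utility that ships with Redis; a minimal sketch (the request count is illustrative, and the test profile may use its own harness rather than this exact command):

  # GET-only benchmark, 500 parallel clients, 1M requests, quiet summary output.
  redis-benchmark -t get -c 500 -n 1000000 -q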

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s; more is better): A: 144.8, B: 145.1, C: 145.1

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8 - Time To Compile (Seconds; fewer is better): A: 763.06, B: 762.63, C: 764.18

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: alexnet (ms; fewer is better): A: 537.52, B: 537.72, C: 536.66 (min/max: 526.68/590.41, 523.15/586.13, 526.49/573.49)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s; more is better): A: 160.2, B: 160.3, C: 160.5

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms; fewer is better): A: 32.07, B: 32.11, C: 32.13 (min/max: 28.53/85.19, 28.55/100.34, 28.62/99.35)

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second; more is better): A: 108.9, B: 108.9, C: 109.1

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Default (MP/s; more is better): A: 17.80, B: 17.83, C: 17.82

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS; more is better): A: 186.97, B: 186.71, C: 186.67

Blender

Blender 3.3 - Blend File: Barbershop - Compute: CPU-Only (Seconds; fewer is better): A: 1939.29, B: 1936.18, C: 1937.31

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS; more is better): A: 274.91, B: 275.05, C: 274.67

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms; fewer is better): A: 14.54, B: 14.53, C: 14.55 (min/max: 13.06/59.4, 13/58.73, 12.97/25.07)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: shufflenet-v2 (ms; fewer is better): A: 106.33, B: 106.46, C: 106.47 (min/max: 102.28/113.65, 103.17/112.04, 103.09/113.2)

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms; fewer is better): A: 3480.06, B: 3484.58, C: 3483.55 (min/max: 3148.94/5906.68, 3311.7/3906.03, 3230.95/4583.23)

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Medium (Frames Per Second; more is better): A: 253.4, B: 253.2, C: 253.5

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Exhaustive (MT/s; more is better): A: 0.5585, B: 0.5581, C: 0.5579
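The astcenc presets in these charts map onto command-line quality options; a minimal sketch of a compress-then-decompress test run on an LDR image, where the file names and the 6x6 block size are illustrative and the binary name may carry an ISA suffix (e.g. astcenc-avx2):

  # Round-trip an LDR image at 6x6 blocks using the exhaustive quality preset.
  astcenc -tl input.png roundtrip.png 6x6 -exhaustive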

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better): A: 198.49, B: 198.31, C: 198.50 (min/max: 186.14/224.78, 186.96/210.08, 187.82/207.05)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100 (MP/s; more is better): A: 11.48, B: 11.48, C: 11.49

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: vision_transformer (ms; fewer is better): A: 7493.77, B: 7491.53, C: 7496.91 (min/max: 7376.2/7952.36, 7393.35/7743.27, 7397.5/7769.17)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Fast (MT/s; more is better): A: 118.61, B: 118.61, C: 118.64

ASTC Encoder 4.0 - Preset: Medium (MT/s; more is better): A: 46.42, B: 46.43, C: 46.42

ASTC Encoder 4.0 - Preset: Thorough (MT/s; more is better): A: 6.1356, B: 6.1364, C: 6.1360

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms; fewer is better): A: 0.49, B: 0.49, C: 0.49 (min/max: 0.45/3.36, 0.45/6.14, 0.44/6.53)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s; more is better): A: 73.3, B: 73.3, C: 73.3

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Lossless Compression (MP/s; more is better): A: 0.01, B: 0.01, C: 0.01

172 Results Shown

WebP2 Image Encode
Apache CouchDB
OpenVINO:
  Vehicle Detection FP16 - CPU:
    FPS
    ms
memtier_benchmark
Apache CouchDB
memtier_benchmark:
  Redis - 100 - 5:1
  Redis - 100 - 1:1
WebP2 Image Encode
SVT-AV1
memtier_benchmark
Redis
memtier_benchmark:
  Redis - 500 - 1:5
  Redis - 500 - 1:10
Facebook RocksDB
GraphicsMagick
Dragonflydb
memtier_benchmark:
  Redis - 500 - 5:1
  Redis - 50 - 1:5
Facebook RocksDB
GraphicsMagick
memtier_benchmark
WebP Image Encode
GraphicsMagick
Aircrack-ng
FLAC Audio Encoding
srsRAN
SVT-AV1
Natron
Redis
ClickHouse
Facebook RocksDB
NCNN
ClickHouse
Facebook RocksDB
memtier_benchmark
GraphicsMagick:
  Swirl
  Resizing
WebP2 Image Encode
memtier_benchmark
GraphicsMagick
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
NCNN
OpenVINO
memtier_benchmark
NCNN
Timed Wasmer Compilation
Facebook RocksDB
Dragonflydb
NCNN:
  CPU - mnasnet
  CPU-v2-v2 - mobilenet-v2
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
Dragonflydb
OpenVINO
C-Blosc
Mobile Neural Network
NCNN
LAMMPS Molecular Dynamics Simulator
srsRAN
Mobile Neural Network
NCNN:
  CPU - efficientnet-b0
  CPU - googlenet
Facebook RocksDB
7-Zip Compression
Redis
Timed CPython Compilation
Redis
srsRAN
Dragonflydb
Apache CouchDB
NCNN
SVT-AV1
NCNN
C-Blosc
OpenVINO
srsRAN
Dragonflydb
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    FPS
    ms
NCNN
Mobile Neural Network
Redis
SVT-AV1:
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 4K
Unpacking The Linux Kernel
NCNN
Mobile Neural Network
Dragonflydb
Timed PHP Compilation
NCNN
GraphicsMagick
ClickHouse
Blender
OpenVINO
Timed CPython Compilation
SVT-AV1
NCNN
OpenVINO
Mobile Neural Network
SVT-AV1
srsRAN
NCNN
Mobile Neural Network
Apache CouchDB
NCNN
Inkscape
Mobile Neural Network
srsRAN
OpenVINO:
  Face Detection FP16 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
WebP Image Encode
Timed Erlang/OTP Compilation
NCNN
LAMMPS Molecular Dynamics Simulator
NCNN:
  Vulkan GPU - mobilenet
  Vulkan GPU - FastestDet
  Vulkan GPU - resnet18
OpenVINO
WebP2 Image Encode
Blender
Mobile Neural Network
NCNN
Facebook RocksDB
Unvanquished
srsRAN
Blender
NCNN:
  Vulkan GPU - mnasnet
  Vulkan GPU - resnet50
  Vulkan GPU - googlenet
OpenVINO
NCNN
Blender
7-Zip Compression
OpenVINO
Primesieve
NCNN:
  Vulkan GPU - regnety_400m
  Vulkan GPU-v3-v3 - mobilenet-v3
  CPU - vision_transformer
BRL-CAD
srsRAN
WebP Image Encode
SVT-AV1
NCNN
Primesieve
NCNN
Redis
srsRAN
Timed Node.js Compilation
NCNN
srsRAN
OpenVINO
Unvanquished
WebP Image Encode
OpenVINO
Blender
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
NCNN
OpenVINO
Unvanquished
ASTC Encoder
NCNN
WebP Image Encode
NCNN
ASTC Encoder:
  Fast
  Medium
  Thorough
OpenVINO
srsRAN
WebP2 Image Encode