dddA

Intel Core i9-10900K testing with a Gigabyte Z490 AORUS MASTER (F21c BIOS) and Sapphire AMD Radeon RX 5600 XT 6GB on Ubuntu 21.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209101-PTS-DDDA360148

Result Identifiers

A: Date Run: September 09 2022; Test Duration: 5 Hours, 40 Minutes
B: Date Run: September 10 2022; Test Duration: 6 Hours, 3 Minutes
C: Date Run: September 10 2022; Test Duration: 6 Hours, 3 Minutes



dddA Benchmarks - OpenBenchmarking.org / Phoronix Test Suite

Processor: Intel Core i9-10900K @ 5.30GHz (10 Cores / 20 Threads)
Motherboard: Gigabyte Z490 AORUS MASTER (F21c BIOS)
Chipset: Intel Comet Lake PCH
Memory: 16GB
Disk: Samsung SSD 970 EVO 500GB
Graphics: Sapphire AMD Radeon RX 5600 XT 6GB (1780/875MHz)
Audio: Realtek ALC1220
Monitor: MX279
Network: Intel I225-V + Intel Comet Lake PCH CNVi WiFi
OS: Ubuntu 21.10
Kernel: 5.17.0-051700rc7daily20220309-generic (x86_64)
Desktop: GNOME Shell 40.5
Display Server: X Server 1.20.13 + Wayland
OpenGL: 4.6 Mesa 21.2.6 (LLVM 12.0.1)
Vulkan: 1.2.182
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Kernel: Transparent Huge Pages: madvise
- Compiler: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Processor: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xf0 - Thermald 2.4.6
- Graphics: GLAMOR - BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: 113-4E4111U-X4B
- Python: Python 3.9.7
- Security: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (OpenBenchmarking.org / Phoronix Test Suite; runs A, B, and C normalized, scale 100% to 145%), covering: etcd, Natron, Redis, GraphicsMagick, Mobile Neural Network, Node.js V8 Web Tooling Benchmark, WebP Image Encode, SVT-AV1, memtier_benchmark, Unpacking The Linux Kernel, C-Blosc, Primesieve, Timed Wasmer Compilation, FLAC Audio Encoding, Facebook RocksDB, Timed CPython Compilation, srsRAN, NCNN, 7-Zip Compression, GravityMark, Blender, Unvanquished, Dragonflydb, WebP2 Image Encode, Timed MPlayer Compilation, Timed Erlang/OTP Compilation, BRL-CAD, LAMMPS Molecular Dynamics Simulator, OpenVINO, ASTC Encoder, Aircrack-ng, and Timed Node.js Compilation.

Condensed dddA results table for runs A, B, and C (all tests and sub-options): the individual results are presented per test below.

etcd

Etcd is a distributed, reliable key-value store intended for critical data of a distributed system. Etcd is written in Golang, is part of the Cloud Native Computing Foundation (CNCF), and is used by Kubernetes, Rook, CoreDNS, and other open-source software. This test profile uses Etcd's built-in benchmark to stress the PUT and RANGE performance of a single node / local system. Learn more via the OpenBenchmarking.org test page.

etcd 3.5.4, Test: RANGE - Connections: 50 - Clients: 100 - Average Latency (ms, Fewer Is Better): A: 4.0, B: 6.8, C: 7.1
etcd 3.5.4, Test: RANGE - Connections: 50 - Clients: 100 (Requests/sec, More Is Better): A: 24857.04, B: 14619.18, C: 14095.27
etcd 3.5.4, Test: PUT - Connections: 50 - Clients: 100 - Average Latency (ms, Fewer Is Better): A: 4.0, B: 6.7, C: 6.9
etcd 3.5.4, Test: RANGE - Connections: 100 - Clients: 100 (Requests/sec, More Is Better): A: 24002.73, B: 14290.67, C: 13959.61
etcd 3.5.4, Test: PUT - Connections: 50 - Clients: 100 (Requests/sec, More Is Better): A: 24785.51, B: 15015.07, C: 14425.22
etcd 3.5.4, Test: RANGE - Connections: 100 - Clients: 100 - Average Latency (ms, Fewer Is Better): A: 4.2, B: 7.0, C: 7.2
etcd 3.5.4, Test: PUT - Connections: 100 - Clients: 100 - Average Latency (ms, Fewer Is Better): A: 4.1, B: 7.0, C: 7.0
etcd 3.5.4, Test: PUT - Connections: 500 - Clients: 100 (Requests/sec, More Is Better): A: 23854.11, B: 14339.00, C: 14014.68
etcd 3.5.4, Test: RANGE - Connections: 500 - Clients: 100 (Requests/sec, More Is Better): A: 23838.90, B: 14226.05, C: 14013.88
etcd 3.5.4, Test: PUT - Connections: 100 - Clients: 100 (Requests/sec, More Is Better): A: 24088.00, B: 14338.31, C: 14207.54
etcd 3.5.4, Test: RANGE - Connections: 500 - Clients: 100 - Average Latency (ms, Fewer Is Better): A: 4.2, B: 7.0, C: 7.1
etcd 3.5.4, Test: PUT - Connections: 500 - Clients: 100 - Average Latency (ms, Fewer Is Better): A: 4.2, B: 7.0, C: 7.1
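As an illustration of the kind of PUT/RANGE load this profile generates, here is a minimal sketch using the third-party etcd3 Python client; the client library, endpoint, key layout, and concurrency model are assumptions, since the actual test drives etcd's own built-in benchmark tool.

```python
# Minimal sketch of an etcd PUT/RANGE load generator (assumptions: the
# `etcd3` pip package and an etcd node listening on localhost:2379).
# The real test profile uses etcd's bundled `benchmark` utility instead.
from concurrent.futures import ThreadPoolExecutor
import time

import etcd3


def worker(client, worker_id, requests):
    """Issue PUTs, then single-key RANGE reads, and return elapsed seconds."""
    start = time.perf_counter()
    for i in range(requests):
        client.put(f"/bench/{worker_id}/{i}", "x" * 256)   # PUT
    for i in range(requests):
        client.get(f"/bench/{worker_id}/{i}")              # RANGE (single key)
    return time.perf_counter() - start


def run(clients=50, requests_per_client=100):
    client = etcd3.client(host="localhost", port=2379)
    with ThreadPoolExecutor(max_workers=clients) as pool:
        futures = [pool.submit(worker, client, w, requests_per_client)
                   for w in range(clients)]
        elapsed = max(f.result() for f in futures)
    total = clients * requests_per_client * 2
    print(f"{total} requests in {elapsed:.2f}s, roughly {total / elapsed:.0f} req/s")


if __name__ == "__main__":
    run()
```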

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4, Test: LPOP - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 3791106.0, B: 2290782.0, C: 2412006.5
Redis 7.0.4, Test: LPOP - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 3045863.50, B: 2310577.50, C: 2362879.75
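For reference, a small sketch of the commands this profile exercises (SET, GET, LPUSH, LPOP, SADD) using the redis Python package; the host, port, and key names are assumptions, and the published numbers come from the redis-benchmark tool shipped with Redis rather than from client code like this.

```python
# Sketch of the Redis commands exercised above, via redis-py
# (assumption: a local Redis 7 server on the default port 6379).
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# SET / GET: simple string keys
r.set("bench:key", "value")
print(r.get("bench:key"))

# LPUSH / LPOP: list operations measured by the LPUSH/LPOP tests
r.lpush("bench:list", *range(1000))
while r.lpop("bench:list") is not None:
    pass

# SADD: set-member inserts measured by the SADD test
r.sadd("bench:set", *range(1000))
print(r.scard("bench:set"))
```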

etcd

etcd 3.5.4, Test: RANGE - Connections: 500 - Clients: 1000 - Average Latency (ms, Fewer Is Better): A: 15.7, B: 20.5, C: 20.3
etcd 3.5.4, Test: RANGE - Connections: 500 - Clients: 1000 (Requests/sec, More Is Better): A: 63514.43, B: 48771.50, C: 49119.61
etcd 3.5.4, Test: RANGE - Connections: 50 - Clients: 1000 (Requests/sec, More Is Better): A: 68621.36, B: 53175.81, C: 52930.94
etcd 3.5.4, Test: RANGE - Connections: 50 - Clients: 1000 - Average Latency (ms, Fewer Is Better): A: 14.6, B: 18.8, C: 18.9
etcd 3.5.4, Test: PUT - Connections: 500 - Clients: 1000 (Requests/sec, More Is Better): A: 61394.79, B: 53045.63, C: 49519.40
etcd 3.5.4, Test: PUT - Connections: 500 - Clients: 1000 - Average Latency (ms, Fewer Is Better): A: 16.3, B: 18.8, C: 20.2
etcd 3.5.4, Test: RANGE - Connections: 100 - Clients: 1000 (Requests/sec, More Is Better): A: 65889.33, B: 54127.50, C: 53181.64

Redis

Redis 7.0.4, Test: SET - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 2572459.50, B: 2666572.25, C: 2154450.50

etcd

etcd 3.5.4, Test: RANGE - Connections: 100 - Clients: 1000 - Average Latency (ms, Fewer Is Better): A: 15.2, B: 18.5, C: 18.8

Redis

Redis 7.0.4, Test: GET - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 3743142.50, B: 3083190.50, C: 3257175.25

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38, Operation: Rotate (Iterations Per Minute, More Is Better): A: 795, B: 898, C: 956
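As a rough illustration of the operations being timed (Rotate, Resizing, and so on), a sketch that shells out to the gm convert command follows; the input filename and the exact option set are assumptions rather than the harness's own invocation.

```python
# Sketch of GraphicsMagick-style operations on a large JPEG via `gm convert`
# (assumptions: GraphicsMagick is installed and sample.jpg exists).
import subprocess
import time

OPERATIONS = {
    "rotate": ["-rotate", "90"],
    "resize": ["-resize", "50%"],
}

for name, args in OPERATIONS.items():
    start = time.perf_counter()
    subprocess.run(["gm", "convert", "sample.jpg", *args, f"out_{name}.jpg"],
                   check=True)
    print(f"{name}: {time.perf_counter() - start:.2f}s per iteration")
```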

etcd

etcd 3.5.4, Test: PUT - Connections: 100 - Clients: 1000 (Requests/sec, More Is Better): A: 64556.95, B: 54679.20, C: 54268.93
etcd 3.5.4, Test: PUT - Connections: 100 - Clients: 1000 - Average Latency (ms, Fewer Is Better): A: 15.5, B: 18.3, C: 18.4
etcd 3.5.4, Test: PUT - Connections: 50 - Clients: 1000 - Average Latency (ms, Fewer Is Better): A: 15.5, B: 18.4, C: 17.8

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not a GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1, Model: mobilenetV3 (ms, Fewer Is Better): A: 1.385, B: 1.387, C: 1.642

etcd

etcd 3.5.4, Test: PUT - Connections: 50 - Clients: 1000 (Requests/sec, More Is Better): A: 64293.90, B: 54272.57, C: 56192.64

Redis

Redis 7.0.4, Test: SET - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 2205403.0, B: 2520079.5, C: 2592197.5

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3, Input: Spaceship (FPS, More Is Better): A: 2.9, B: 3.3, C: 3.3

Redis

Redis 7.0.4, Test: LPUSH - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 2122063.75, B: 2104210.00, C: 2359818.50

Mobile Neural Network

Mobile Neural Network 2.1, Model: MobileNetV2_224 (ms, Fewer Is Better): A: 2.620, B: 2.636, C: 2.925

Redis

Redis 7.0.4, Test: LPUSH - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 2284293.25, B: 2090813.00, C: 2294461.00

Mobile Neural Network

Mobile Neural Network 2.1, Model: nasnet (ms, Fewer Is Better): A: 9.717, B: 8.864, C: 8.923

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4, Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): A: 2315600.28, B: 2159374.65, C: 2118152.56
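The Set To Get Ratio in these results simply controls how many writes are issued per read; a toy sketch of that idea with redis-py follows. The key space and request count are assumptions, and memtier_benchmark itself is a multithreaded C++ load generator rather than client code like this.

```python
# Toy illustration of a set:get ratio (e.g. 1:5) against a Redis server
# (assumptions: local Redis instance, 60,000 total requests).
import random

import redis


def run_ratio(set_weight=1, get_weight=5, total=60_000):
    r = redis.Redis()
    ops = random.choices(["set", "get"], weights=[set_weight, get_weight], k=total)
    for i, op in enumerate(ops):
        key = f"memtier:{i % 1000}"
        if op == "set":
            r.set(key, "x" * 32)
        else:
            r.get(key)


run_ratio(1, 5)   # roughly one SET for every five GETs
```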

GraphicsMagick

GraphicsMagick 1.3.38, Operation: HWB Color Space (Iterations Per Minute, More Is Better): A: 1139, B: 1242, C: 1227

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): A: 3.84, B: 3.89, C: 4.15

memtier_benchmark

memtier_benchmark 1.4, Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): A: 2487735.48, B: 2340702.99, C: 2319743.01

NCNN

NCNN 20220729, Target: CPU - Model: FastestDet (ms, Fewer Is Better): A: 3.70, B: 3.63, C: 3.89

Redis

Redis 7.0.4, Test: SADD - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 3083239.75, B: 3061904.25, C: 2888509.75

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4, Encode Settings: Quality 100, Lossless (MP/s, More Is Better): A: 1.67, B: 1.77, C: 1.72
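The encode settings in these results map onto libwebp's quality, lossless, and method knobs; a small sketch with Pillow's WebP support shows the idea. The input image and the use of Pillow are assumptions here, since the test itself drives the cwebp utility directly.

```python
# Sketch of WebP encode settings via Pillow (assumption: sample.jpg exists).
from PIL import Image

img = Image.open("sample.jpg")

# "Default": lossy encode with default quality
img.save("default.webp", "WEBP")

# "Quality 100, Highest Compression": lossy q=100 with the slowest method
img.save("q100_m6.webp", "WEBP", quality=100, method=6)

# "Quality 100, Lossless": lossless mode
img.save("lossless.webp", "WEBP", lossless=True, quality=100)
```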

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 98.62, B: 104.29, C: 102.51

memtier_benchmark

memtier_benchmark 1.4, Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better): A: 1984638.84, B: 1889111.61, C: 1884511.63

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 71.87, B: 75.30, C: 75.47

Redis

Redis 7.0.4, Test: SADD - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 3040307.50, B: 2896459.75, C: 3007572.75

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Read While Writing (Op/s, More Is Better): A: 2526988, B: 2479413, C: 2595321

Redis

Redis 7.0.4, Test: GET - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 2951298.50, B: 3040677.25, C: 3087428.00

memtier_benchmark

memtier_benchmark 1.4, Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better): A: 2385409.71, B: 2328899.18, C: 2432589.27

NCNN

NCNN 20220729, Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better): A: 2.92, B: 2.92, C: 3.05

Mobile Neural Network

Mobile Neural Network 2.1, Model: inception-v3 (ms, Fewer Is Better): A: 27.46, B: 26.30, C: 27.47

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 390.51, B: 404.74, C: 407.25

NCNN

NCNN 20220729, Target: CPU - Model: blazeface (ms, Fewer Is Better): A: 1.02, B: 1.01, C: 0.98

Mobile Neural Network

Mobile Neural Network 2.1, Model: mobilenet-v1-1.0 (ms, Fewer Is Better): A: 3.335, B: 3.206, C: 3.224

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better): A: 13.95, B: 14.02, C: 13.48

memtier_benchmark

memtier_benchmark 1.4, Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better): A: 2090339.93, B: 2050935.60, C: 2013444.96
memtier_benchmark 1.4, Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better): A: 2352766.68, B: 2350756.59, C: 2437747.16

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 37.87, B: 39.19, C: 38.80

GraphicsMagick

GraphicsMagick 1.3.38, Operation: Resizing (Iterations Per Minute, More Is Better): A: 1186, B: 1227, C: 1224

memtier_benchmark

memtier_benchmark 1.4, Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better): A: 2106588.65, B: 2037235.73, C: 2057755.24
memtier_benchmark 1.4, Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): A: 2390359.33, B: 2386648.26, C: 2312114.42

Redis

Redis 7.0.4, Test: LPUSH - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 2108674.75, B: 2137795.50, C: 2179923.00

WebP Image Encode

WebP Image Encode 1.2.4, Encode Settings: Quality 100, Highest Compression (MP/s, More Is Better): A: 4.15, B: 4.29, C: 4.16

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better): A: 427.50, B: 421.97, C: 414.12
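A minimal sketch of the OpenVINO runtime Python API that such models run through is shown below; the model path, input shape, and use of the Python API are assumptions, and the published numbers come from OpenVINO's own benchmark_app tool.

```python
# Sketch of CPU inference with the OpenVINO runtime Python API
# (assumptions: OpenVINO 2022.x installed, an IR model at model.xml/model.bin,
# and a typical 1x3x224x224 input; none of this is the harness's exact setup).
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")          # hypothetical IR file
compiled = core.compile_model(model, "CPU")   # CPU device, as in the results above

infer_request = compiled.create_infer_request()
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)

infer_request.infer({0: input_tensor})        # one synchronous inference
result = infer_request.get_output_tensor(0).data
print(result.shape)
```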

Redis

Redis 7.0.4, Test: LPOP - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 2334136.0, B: 2284648.5, C: 2261308.5

OpenVINO

OpenVINO 2022.2.dev, Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): A: 11.69, B: 11.84, C: 12.06

WebP Image Encode

WebP Image Encode 1.2.4, Encode Settings: Default (MP/s, More Is Better): A: 19.00, B: 19.59, C: 18.99

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, More Is Better): A: 437.7, B: 451.5, C: 441.6

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Random Fill Sync (Op/s, More Is Better): A: 3083, B: 2997, C: 2991

NCNN

NCNN 20220729, Target: Vulkan GPU - Model: FastestDet (ms, Fewer Is Better): A: 2.05, B: 2.01, C: 1.99

WebP Image Encode

WebP Image Encode 1.2.4, Encode Settings: Quality 100 (MP/s, More Is Better): A: 12.23, B: 12.59, C: 12.24

memtier_benchmark

memtier_benchmark 1.4, Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): A: 2165194.12, B: 2147215.10, C: 2209969.44
memtier_benchmark 1.4, Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better): A: 2201386.87, B: 2258543.60, C: 2265720.82

Redis

Redis 7.0.4, Test: SADD - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 2894240.50, B: 2975358.25, C: 2962262.50

NCNN

NCNN 20220729, Target: CPU - Model: googlenet (ms, Fewer Is Better): A: 11.03, B: 11.02, C: 11.32

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, Memtier_benchmark is used as a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6, Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better): A: 3540676.35, B: 3451455.31, C: 3544440.03
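Because Dragonfly speaks the Redis protocol, the same redis-py client code shown earlier works against it unchanged; a short sketch follows, where the host and port are assumptions about a local Dragonfly instance.

```python
# Dragonfly accepts the Redis protocol, so redis-py connects to it directly
# (assumption: a local Dragonfly instance listening on port 6379).
import redis

df = redis.Redis(host="localhost", port=6379)
df.set("dragonfly:key", "value")
print(df.get("dragonfly:key"))
```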

srsRAN

srsRAN 22.04.1, Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, More Is Better): A: 139.2, B: 137.5, C: 135.6

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Sequential Fill (Op/s, More Is Better): A: 1328728, B: 1317593, C: 1294753

srsRAN

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, More Is Better): A: 170.3, B: 174.7, C: 171.7

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823, Encode Settings: Default (MP/s, More Is Better): A: 7.82, B: 7.76, C: 7.96

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3, Test: blosclz bitshuffle (MB/s, More Is Better): A: 8475.7, B: 8668.7, C: 8462.7
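The shuffle and bitshuffle filters named in these tests can be illustrated with the blosc Python bindings; note those bindings wrap c-blosc 1.x rather than c-blosc2, so this is a sketch of the concept, not the exact code path that was benchmarked.

```python
# Sketch of blosclz compression with the shuffle vs. bitshuffle filters
# (assumption: the python-blosc package, which wraps c-blosc 1.x).
import numpy as np
import blosc

data = np.arange(10_000_000, dtype=np.int64).tobytes()

for name, flt in [("shuffle", blosc.SHUFFLE), ("bitshuffle", blosc.BITSHUFFLE)]:
    packed = blosc.compress(data, typesize=8, cname="blosclz", shuffle=flt)
    assert blosc.decompress(packed) == data          # round-trips losslessly
    print(f"blosclz {name}: {len(data) / len(packed):.1f}x compression")
```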

NCNN

NCNN 20220729, Target: CPU - Model: regnety_400m (ms, Fewer Is Better): A: 8.77, B: 8.63, C: 8.83

OpenVINO

OpenVINO 2022.2.dev, Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better): A: 283.90, B: 286.52, C: 290.20
OpenVINO 2022.2.dev, Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better): A: 17.59, B: 17.43, C: 17.21

GraphicsMagick

GraphicsMagick 1.3.38, Operation: Swirl (Iterations Per Minute, More Is Better): A: 504, B: 515, C: 514

srsRAN

srsRAN 22.04.1, Test: OFDM_Test (Samples / Second, More Is Better): A: 131000000, B: 132500000, C: 133800000

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

Primesieve 8.0, Length: 1e12 (Seconds, Fewer Is Better): A: 16.76, B: 17.11, C: 16.96
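For context, the algorithm primesieve optimizes is the classic sieve of Eratosthenes; a plain, much slower Python version of the same idea, counting primes below a limit, is sketched below.

```python
# Plain sieve of Eratosthenes, for comparison with primesieve's optimized,
# cache-segmented implementation (this naive version is far slower).
def count_primes(limit: int) -> int:
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"                 # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            # cross off every multiple of p starting at p*p
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return sum(sieve)


print(count_primes(10_000_000))              # 664579 primes below 1e7
```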

NCNN

NCNN 20220729, Target: CPU - Model: mnasnet (ms, Fewer Is Better): A: 2.87, B: 2.84, C: 2.90

Redis

Redis 7.0.4, Test: SET - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 2614562.5, B: 2657219.5, C: 2669562.0

srsRAN

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, More Is Better): A: 466.4, B: 476.2, C: 473.7

Unvanquished

Unvanquished is a modern fork of the Tremulous first-person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53, Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, More Is Better): A: 458.2, B: 467.2, C: 463.7

Primesieve

Primesieve 8.0, Length: 1e13 (Seconds, Fewer Is Better): A: 213.61, B: 215.65, C: 217.73

OpenVINO

OpenVINO 2022.2.dev, Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better): A: 2655.12, B: 2642.16, C: 2605.10

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel source tree package. Learn more via the OpenBenchmarking.org test page.

Unpacking The Linux Kernel 5.19, linux-5.19.tar.xz (Seconds, Fewer Is Better): A: 6.367, B: 6.253, C: 6.249
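The test is essentially timing a tar plus xz extraction; a hedged sketch of the same operation with Python's standard library follows, where the tarball path and destination directory are assumptions.

```python
# Sketch of timing a .tar.xz extraction with the standard library
# (assumption: linux-5.19.tar.xz is present in the working directory).
import tarfile
import time

start = time.perf_counter()
with tarfile.open("linux-5.19.tar.xz", "r:xz") as tar:
    tar.extractall("linux-5.19-extracted")
print(f"Extracted in {time.perf_counter() - start:.1f}s")
```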

GraphicsMagick

GraphicsMagick 1.3.38, Operation: Noise-Gaussian (Iterations Per Minute, More Is Better): A: 322, B: 328, C: 328

srsRAN

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, More Is Better): A: 177.6, B: 180.8, C: 180.6

NCNN

NCNN 20220729, Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better): A: 4.52, B: 4.54, C: 4.60

OpenVINO

OpenVINO 2022.2.dev, Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better): A: 2672.78, B: 2664.55, C: 2626.33

Mobile Neural Network

Mobile Neural Network 2.1, Model: squeezenetv1.1 (ms, Fewer Is Better): A: 2.754, B: 2.726, C: 2.771

OpenVINO

OpenVINO 2022.2.dev, Model: Person Detection FP16 - Device: CPU (FPS, More Is Better): A: 1.87, B: 1.87, C: 1.90
OpenVINO 2022.2.dev, Model: Person Detection FP32 - Device: CPU (FPS, More Is Better): A: 1.88, B: 1.89, C: 1.91

Dragonflydb

Dragonflydb 0.6, Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): A: 3874142.20, B: 3872125.65, C: 3931937.39

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile is building the OpenMP / CPU threaded version for processor benchmarking and not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: resnet-v2-50CAB71421283527.7128.0328.13MIN: 26.96 / MAX: 34.92MIN: 27.18 / MAX: 36.16MIN: 27.05 / MAX: 36.611. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Lossless, Highest CompressionCBA0.1530.3060.4590.6120.7650.680.680.671. (CC) gcc options: -fvisibility=hidden -O2 -lm

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepast compiler features enabled. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Wasmer Compilation 2.3Time To CompileBAC132639526557.3057.7158.141. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgUE Mb/s, More Is BettersrsRAN 22.04.1Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAMCBA306090120150140.3139.0138.31. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: Vulkan GPU - Model: blazefaceABC0.31730.63460.95191.26921.58651.391.401.41MIN: 1.37 / MAX: 1.67MIN: 1.38 / MAX: 1.72MIN: 1.4 / MAX: 1.661. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, More Is Better): C: 438.2, B: 433.2, A: 432.1

Blender

Blender 3.3 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better): C: 321.96, A: 323.06, B: 326.46

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, Fewer Is Better): A: 4.243, C: 4.258, B: 4.300

GravityMark

GravityMark 1.70 - Resolution: 1920 x 1080 - Renderer: OpenGL (Frames Per Second, More Is Better): C: 83.3, B: 83.1, A: 82.2

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better): A: 5.27, C: 5.27, B: 5.34

FLAC Audio Encoding

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, Fewer Is Better): B: 14.21, A: 14.31, C: 14.39

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): B: 3695033.82, C: 3680126.34, A: 3648471.05

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Fill (Op/s, More Is Better): A: 935613, C: 929842, B: 923849

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): C: 110.28, B: 109.36, A: 108.90

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.
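The "blosclz shuffle" configuration below can be approximated from Python through the blosc bindings; a minimal sketch, assuming the python-blosc package and its default keyword names:

    import blosc  # python-blosc bindings around the C-Blosc library

    data = bytes(8_000_000)  # stand-in 8 MB payload
    packed = blosc.compress(data, typesize=8, cname="blosclz", shuffle=blosc.SHUFFLE)
    assert blosc.decompress(packed) == data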

C-Blosc 2.3 - Test: blosclz shuffle (MB/s, More Is Better): B: 16555.1, C: 16500.1, A: 16350.9

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, More Is Better): A: 152.7, B: 151.1, C: 150.9

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
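A minimal sketch of loading and compiling a model for the CPU device with the OpenVINO 2022 Python runtime; the model path is a placeholder, and the throughput/latency numbers below come from OpenVINO's own benchmark tooling rather than this snippet:

    from openvino.runtime import Core  # OpenVINO 2022.x Python API

    core = Core()
    model = core.read_model("model.xml")         # placeholder IR model path
    compiled = core.compile_model(model, "CPU")  # target the CPU device, as in these results
    request = compiled.create_infer_request()    # reused for repeated latency/throughput runs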

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better): B: 156.47, A: 158.21, C: 158.28

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.
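A minimal sketch of the configure flags behind a PGO + LTO release build (--enable-optimizations for profile-guided optimization, --with-lto for link-time optimization); the source path and job count are placeholders:

    import subprocess

    src = "cpython-3.10.6"  # hypothetical path to the unpacked CPython source
    subprocess.run(["./configure", "--enable-optimizations", "--with-lto"], cwd=src, check=True)
    subprocess.run(["make", "-j20"], cwd=src, check=True)  # -j matched to this CPU's 20 threads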

Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, Fewer Is Better): B: 242.06, A: 243.60, C: 244.83

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
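A minimal sketch of one such operation, the Enhanced filter, run through the gm command-line front end on a placeholder input image:

    import subprocess

    # Apply GraphicsMagick's -enhance (noise-reducing) operator to a sample JPEG.
    subprocess.run(["gm", "convert", "sample_6000x4000.jpg", "-enhance", "enhanced.jpg"], check=True)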

GraphicsMagick 1.3.38 - Operation: Enhanced (Iterations Per Minute, More Is Better): C: 269, B: 269, A: 266

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better): B: 31.92, A: 31.58, C: 31.57

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better): A: 2.88, B: 2.91, C: 2.91

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
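A minimal sketch of invoking that integrated benchmark from Python, assuming the binary is installed as 7z:

    import subprocess

    # "7z b" runs the built-in compression/decompression benchmark and reports MIPS ratings.
    subprocess.run(["7z", "b"], check=True)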

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better): C: 74493, A: 74166, B: 73766

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
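A minimal sketch of a memtier_benchmark invocation corresponding to the "Clients: 500, Set To Get Ratio: 1:1" configuration below; the endpoint and thread split are illustrative assumptions rather than the exact options the test profile passes:

    import subprocess

    subprocess.run([
        "memtier_benchmark",
        "--server=localhost", "--port=6379",  # assumed local Redis endpoint
        "--protocol=redis",
        "--threads=10", "--clients=50",       # 10 threads x 50 clients each = 500 connections
        "--ratio=1:1",                        # equal SET:GET mix
    ], check=True)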

memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): A: 2055682.63, B: 2047918.95, C: 2035828.74

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): A: 2.06, B: 2.06, C: 2.08

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, More Is Better): A: 8.795, B: 8.743, C: 8.712

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): B: 5.624, A: 5.583, C: 5.573

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mobilenet (ms, Fewer Is Better): C: 13.15, B: 13.20, A: 13.27

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): B: 247.63, C: 246.91, A: 245.40

Blender

Blender 3.3 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better): C: 110.20, B: 110.92, A: 111.20

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better): C: 36.74, B: 36.84, A: 37.06

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Medium (Frames Per Second, More Is Better): A: 476.3, B: 474.7, C: 472.2

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better): C: 135.95, B: 135.59, A: 134.79

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): B: 3472844.58, C: 3467573.22, A: 3443623.05

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: GET - Parallel Connections: 500 (Requests Per Second, More Is Better): C: 3276813.5, A: 3254207.5, B: 3249343.5

Blender

Blender 3.3 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better): A: 1255.64, B: 1257.94, C: 1266.19

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better): B: 2.44, A: 2.45, C: 2.46

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): A: 2214943.98, C: 2210565.02, B: 2197972.69

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better): B: 3381977.87, C: 3361979.08, A: 3356941.32

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: resnet18 (ms, Fewer Is Better): C: 9.60, B: 9.63, A: 9.67

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better): B: 21.04, A: 21.07, C: 21.19

NCNN 20220729 - Target: Vulkan GPU - Model: resnet18 (ms, Fewer Is Better): A: 2.82, C: 2.83, B: 2.84

Blender

Blender 3.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better): C: 399.08, A: 401.40, B: 401.90

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better): C: 1538.69, A: 1545.43, B: 1549.54

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Read (Op/s, More Is Better): C: 59382800, A: 59319094, B: 58967055

Facebook RocksDB 7.5.3 - Test: Read Random Write Random (Op/s, More Is Better): C: 2018648, B: 2010319, A: 2004687

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, Fewer Is Better): C: 4.31, B: 4.32, A: 4.34

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second, More Is Better): A: 466.6, B: 465.8, C: 463.4

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: alexnet (ms, Fewer Is Better): B: 2.98, A: 3.00, C: 3.00

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better): C: 18514.46, A: 18492.98, B: 18397.33

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better): A: 15.85, B: 15.86, C: 15.95

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better): C: 3.23, A: 3.22, B: 3.21

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, More Is Better): B: 473.2, C: 473.1, A: 470.3

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6 - Build Configuration: Default (Seconds, Fewer Is Better): B: 17.18, C: 17.28, A: 17.29

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Sharpen (Iterations Per Minute, More Is Better): C: 171, B: 171, A: 170

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better): C: 7.01, B: 7.00, A: 6.97

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: regnety_400m (ms, Fewer Is Better): A: 3.49, B: 3.50, C: 3.51

NCNN 20220729 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, Fewer Is Better): B: 1.81, A: 1.82, C: 1.82

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): C: 711.86, B: 713.50, A: 715.71

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better): C: 71544, B: 71255, A: 71164

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile (Seconds, Fewer Is Better): B: 29.97, C: 30.12, A: 30.13

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Update Random (Op/s, More Is Better): A: 532504, B: 531028, C: 529808

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, Fewer Is Better): C: 152.36, B: 152.88, A: 153.09

WebP2 Image Encode

This is a test of Google's libwebp2 library using the WebP2 image encode utility with a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Compression Effort 5 (MP/s, More Is Better): B: 4.22, A: 4.22, C: 4.20

Blender

Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better): C: 151.20, A: 151.86, B: 151.91

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: mnasnet (ms, Fewer Is Better): A: 2.14, C: 2.14, B: 2.15

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, Fewer Is Better): A: 17.39, B: 17.42, C: 17.47

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better): A: 323.43, B: 322.94, C: 321.96

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better): A: 15.44, B: 15.46, C: 15.51

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: yolov4-tiny (ms, Fewer Is Better): B: 11.34, C: 11.34, A: 11.39

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.6 - VGR Performance Metric (More Is Better): B: 191267, A: 190880, C: 190428

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 25.0 - Time To Compile (Seconds, Fewer Is Better): A: 81.58, B: 81.64, C: 81.94

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better): B: 10458.33, C: 10439.92, A: 10418.69

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
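A minimal sketch of an astcenc run at the Medium preset; the binary name, block size, and file names are assumptions for illustration:

    import subprocess

    # Compress an LDR image to ASTC using 6x6 blocks at the -medium quality preset.
    subprocess.run(["astcenc", "-cl", "input.png", "output.astc", "6x6", "-medium"], check=True)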

ASTC Encoder 4.0 - Preset: Medium (MT/s, More Is Better): A: 70.08, B: 69.90, C: 69.81

ASTC Encoder 4.0 - Preset: Fast (MT/s, More Is Better): A: 189.67, C: 189.24, B: 188.99

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: vision_transformer (ms, Fewer Is Better): B: 182.06, C: 182.49, A: 182.68

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): B: 3569362.46, C: 3559519.92, A: 3557458.82

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: resnet50 (ms, Fewer Is Better): C: 6.38, B: 6.39, A: 6.40

NCNN 20220729 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, Fewer Is Better): A: 6.64, B: 6.66, C: 6.66

NCNN 20220729 - Target: CPU - Model: alexnet (ms, Fewer Is Better): A: 7.33, B: 7.35, C: 7.35

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Thorough (MT/s, More Is Better): A: 8.8884, C: 8.8774, B: 8.8660

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: Vulkan GPU - Model: mobilenet (ms, Fewer Is Better): C: 8.20, B: 8.21, A: 8.22

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, More Is Better): C: 8.334, B: 8.319, A: 8.315

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better): B: 710.55, A: 709.10, C: 709.01

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8 - Time To Compile (Seconds, Fewer Is Better): C: 548.90, B: 549.30, A: 549.99

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: vgg16 (ms, Fewer Is Better): A: 51.26, C: 51.29, B: 51.35

NCNN 20220729 - Target: Vulkan GPU - Model: vgg16 (ms, Fewer Is Better): C: 11.98, B: 11.99, A: 12.00

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better): C: 1.829, B: 1.827, A: 1.826

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, More Is Better): C: 68.1, A: 68.1, B: 68.0

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): B: 14.07, A: 14.08, C: 14.09

GravityMark

GravityMark 1.70 - Resolution: 1920 x 1080 - Renderer: Vulkan (Frames Per Second, More Is Better): B: 81.7, C: 81.6, A: 81.6

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, More Is Better): B: 0.9070, C: 0.9065, A: 0.9060

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.
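The result is reported in thousands of WPA keys tested per second (k/s); a minimal sketch of a standard dictionary run, with placeholder capture, wordlist, and BSSID values:

    import subprocess

    # Try keys from a wordlist against a captured WPA handshake; aircrack-ng prints keys/second.
    subprocess.run(["aircrack-ng", "-w", "wordlist.txt", "-b", "00:11:22:33:44:55", "handshake.cap"],
                   check=True)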

Aircrack-ng 1.7 (k/s, More Is Better): C: 46766.03, B: 46755.50, A: 46744.25

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better): A: 0.54, B: 0.54, C: 0.54

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better): A: 0.95, B: 0.95, C: 0.95

WebP2 Image Encode

This is a test of Google's libwebp2 library using the WebP2 image encode utility with a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Lossless Compression (MP/s, More Is Better): C: 0.02, B: 0.02, A: 0.02

WebP2 Image Encode 20220823 - Encode Settings: Quality 95, Compression Effort 7 (MP/s, More Is Better): C: 0.07, B: 0.07, A: 0.07

WebP2 Image Encode 20220823 - Encode Settings: Quality 75, Compression Effort 7 (MP/s, More Is Better): C: 0.15, B: 0.15, A: 0.15

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
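A minimal sketch of driving the ai_benchmark package (the libcuda.so.1 warnings below simply indicate that TensorFlow found no CUDA driver on this Radeon-equipped system before the runs failed):

    from ai_benchmark import AIBenchmark  # the ai-benchmark PyPI package

    benchmark = AIBenchmark()
    results = benchmark.run()  # runs the inference and training test battery and reports an AI Score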

A: The test quit with a non-zero exit status. E: 2022-09-09 23:43:29.505523: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory

B: The test quit with a non-zero exit status. E: 2022-09-10 08:33:39.435305: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory

C: The test quit with a non-zero exit status. E: 2022-09-10 13:41:52.270342: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.
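Since the reported value is the geometric mean of the per-query processing times, a minimal sketch of that aggregation with made-up timings:

    import math

    query_times = [0.12, 0.48, 1.95, 0.07]  # hypothetical per-query times in seconds
    geo_mean = math.exp(sum(math.log(t) for t in query_times) / len(query_times))
    print(round(geo_mean, 3))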

A: The test quit with a non-zero exit status.

B: The test quit with a non-zero exit status.

C: The test quit with a non-zero exit status.

200 Results Shown

etcd:
  RANGE - 50 - 100 - Average Latency
  RANGE - 50 - 100
  PUT - 50 - 100 - Average Latency
  RANGE - 100 - 100
  PUT - 50 - 100
  RANGE - 100 - 100 - Average Latency
  PUT - 100 - 100 - Average Latency
  PUT - 500 - 100
  RANGE - 500 - 100
  PUT - 100 - 100
  RANGE - 500 - 100 - Average Latency
  PUT - 500 - 100 - Average Latency
Redis:
  LPOP - 50
  LPOP - 500
etcd:
  RANGE - 500 - 1000 - Average Latency
  RANGE - 500 - 1000
  RANGE - 50 - 1000
  RANGE - 50 - 1000 - Average Latency
  PUT - 500 - 1000
  PUT - 500 - 1000 - Average Latency
  RANGE - 100 - 1000
Redis
etcd
Redis
GraphicsMagick
etcd:
  PUT - 100 - 1000
  PUT - 100 - 1000 - Average Latency
  PUT - 50 - 1000 - Average Latency
Mobile Neural Network
etcd
Redis
Natron
Redis
Mobile Neural Network
Redis
Mobile Neural Network
memtier_benchmark
GraphicsMagick
NCNN
memtier_benchmark
NCNN
Redis
WebP Image Encode
SVT-AV1
memtier_benchmark
SVT-AV1
Redis
Facebook RocksDB
Redis
memtier_benchmark
NCNN
Mobile Neural Network
SVT-AV1
NCNN
Mobile Neural Network
Node.js V8 Web Tooling Benchmark
memtier_benchmark:
  Redis - 100 - 5:1
  Redis - 100 - 1:10
SVT-AV1
GraphicsMagick
memtier_benchmark:
  Redis - 50 - 5:1
  Redis - 100 - 1:5
Redis
WebP Image Encode
OpenVINO
Redis
OpenVINO
WebP Image Encode
srsRAN
Facebook RocksDB
NCNN
WebP Image Encode
memtier_benchmark:
  Redis - 100 - 1:1
  Redis - 500 - 1:10
Redis
NCNN
Dragonflydb
srsRAN
Facebook RocksDB
srsRAN
WebP2 Image Encode
C-Blosc
NCNN
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
GraphicsMagick
srsRAN
Primesieve
NCNN
Redis
srsRAN
Unvanquished
Primesieve
OpenVINO
Unpacking The Linux Kernel
GraphicsMagick
srsRAN
NCNN
OpenVINO
Mobile Neural Network
OpenVINO:
  Person Detection FP16 - CPU
  Person Detection FP32 - CPU
Dragonflydb
Mobile Neural Network
WebP Image Encode
Timed Wasmer Compilation
srsRAN
NCNN
srsRAN
Blender
Mobile Neural Network
GravityMark
NCNN
FLAC Audio Encoding
Dragonflydb
Facebook RocksDB
SVT-AV1
C-Blosc
srsRAN
OpenVINO
Timed CPython Compilation
GraphicsMagick
OpenVINO
NCNN
7-Zip Compression
memtier_benchmark
NCNN
LAMMPS Molecular Dynamics Simulator
SVT-AV1
NCNN
SVT-AV1
Blender
OpenVINO
Unvanquished
OpenVINO
Dragonflydb
Redis
Blender
NCNN
memtier_benchmark
Dragonflydb
NCNN:
  CPU - resnet18
  CPU - yolov4-tiny
  Vulkan GPU - resnet18
Blender
OpenVINO
Facebook RocksDB:
  Rand Read
  Read Rand Write Rand
NCNN
Unvanquished
NCNN
OpenVINO
NCNN
OpenVINO
srsRAN
Timed CPython Compilation
GraphicsMagick
OpenVINO
NCNN:
  Vulkan GPU - regnety_400m
  Vulkan GPU - shufflenet-v2
OpenVINO
7-Zip Compression
Timed MPlayer Compilation
Facebook RocksDB
NCNN
WebP2 Image Encode
Blender
NCNN:
  Vulkan GPU - mnasnet
  CPU - resnet50
OpenVINO:
  Weld Porosity Detection FP16 - CPU:
    FPS
    ms
NCNN
BRL-CAD
Timed Erlang/OTP Compilation
OpenVINO
ASTC Encoder:
  Medium
  Fast
NCNN
Dragonflydb
NCNN:
  Vulkan GPU - resnet50
  Vulkan GPU - efficientnet-b0
  CPU - alexnet
ASTC Encoder
NCNN
LAMMPS Molecular Dynamics Simulator
OpenVINO
Timed Node.js Compilation
NCNN:
  CPU - vgg16
  Vulkan GPU - vgg16
SVT-AV1
srsRAN
OpenVINO
GravityMark
ASTC Encoder
Aircrack-ng
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
WebP2 Image Encode:
  Quality 100, Lossless Compression
  Quality 95, Compression Effort 7
  Quality 75, Compression Effort 7