Retbleed Call Depth Tracking Skylake

Call Depth Tracking mitigation benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2212262-NE-RETBLEEDS60
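
On a system that already has the Phoronix Test Suite installed, the side-by-side comparison boils down to a couple of commands; a minimal sketch (only the result identifier comes from this page, the rest is standard PTS usage):

    # Optional: confirm what PTS detects about the local hardware/software
    phoronix-test-suite system-info
    # Run the same tests and compare against this result file
    phoronix-test-suite benchmark 2212262-NE-RETBLEEDS60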

Test categories represented in this comparison:

  Audio Encoding: 2 tests
  AV1: 3 tests
  C++ Boost Tests: 2 tests
  Web Browsers: 1 test
  Timed Code Compilation: 5 tests
  C/C++ Compiler Tests: 9 tests
  CPU Massive: 21 tests
  Creator Workloads: 16 tests
  Cryptocurrency Benchmarks, CPU Mining Tests: 2 tests
  Cryptography: 3 tests
  Database Test Suite: 11 tests
  Encoding: 5 tests
  Game Development: 2 tests
  HPC - High Performance Computing: 10 tests
  Imaging: 6 tests
  Java: 2 tests
  Common Kernel Benchmarks: 7 tests
  Machine Learning: 6 tests
  Multi-Core: 18 tests
  NVIDIA GPU Compute: 2 tests
  Intel oneAPI: 3 tests
  OpenMPI Tests: 3 tests
  Programmer / Developer System Benchmarks: 5 tests
  Python Tests: 8 tests
  Renderers: 2 tests
  Software Defined Radio: 2 tests
  Server: 12 tests
  Server CPU Tests: 15 tests
  Single-Threaded: 3 tests
  Video Encoding: 3 tests
  Common Workstation Benchmarks: 2 tests

Tested Configurations

  Result Identifier   Date                Test Duration
  Default - IBRS      December 20 2022    1 Day, 20 Hours, 11 Minutes
  retbleed=stuff      December 22 2022    1 Day, 21 Hours, 58 Minutes
  mitigations=off     December 23 2022    1 Day, 19 Hours, 48 Minutes
  retbleed=off        December 25 2022    1 Day, 19 Hours, 29 Minutes
  Average run duration: 1 Day, 20 Hours, 21 Minutes
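
The four result identifiers map directly to kernel boot parameters; nothing else changes between runs. A minimal sketch of how that could be set up on an Ubuntu 20.04 / GRUB system like the one tested (the parameter values come from the run names above; the GRUB workflow is an assumption about the exact setup used):

    # Edit /etc/default/grub and pick ONE line per run, then regenerate the config and reboot.
    GRUB_CMDLINE_LINUX_DEFAULT=""                   # "Default - IBRS": kernel default, Retbleed mitigated via IBRS on this Skylake CPU
    GRUB_CMDLINE_LINUX_DEFAULT="retbleed=stuff"     # "retbleed=stuff": Call Depth Tracking / RSB stuffing mitigation
    GRUB_CMDLINE_LINUX_DEFAULT="retbleed=off"       # "retbleed=off": only the Retbleed mitigation disabled
    GRUB_CMDLINE_LINUX_DEFAULT="mitigations=off"    # "mitigations=off": all CPU security mitigations disabled
    sudo update-grub && sudo reboot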

System Details

  Processor:         Intel Xeon E3-1280 v5 @ 4.00GHz (4 Cores / 8 Threads)
  Motherboard:       MSI Z170A SLI PLUS (MS-7998) v1.0 (2.A0 BIOS)
  Chipset:           Intel Xeon E3-1200 v5/E3-1500
  Memory:            32GB
  Disk:              256GB TOSHIBA RD400
  Graphics:          ASUS AMD Radeon HD 7850 / R7 265 R9 270 1024SP
  Audio:             Realtek ALC1150
  Monitor:           VA2431
  Network:           Intel I219-V
  OS:                Ubuntu 20.04
  Kernel:            6.1.0-phx (x86_64)
  Desktop:           GNOME Shell 3.36.9
  Display Server:    X Server 1.20.13
  OpenGL:            4.5 Mesa 21.2.6 (LLVM 12.0.0)
  Compiler:          GCC 9.4.0
  File-System:       ext4
  Screen Resolution: 1920x1080

System Logs / Notes:
  - Transparent Huge Pages: madvise
  - Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Disk mount options: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
  - Scaling Governor: intel_pstate powersave (EPP: balance_performance)
  - CPU Microcode: 0xf0
  - Thermald 1.9.1
  - OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu220.04)
  - Python 3.8.10

Security mitigation details per configuration:
  - Default - IBRS: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of IBRS; IBPB: conditional; RSB filling; PBRSB-eIBRS: Not affected + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of TSX disabled
  - retbleed=stuff: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Stuffing + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; IBRS_FW; STIBP: conditional; RSB filling; PBRSB-eIBRS: Not affected + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of TSX disabled
  - mitigations=off: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: vulnerable + mds: Vulnerable; SMT vulnerable + meltdown: Vulnerable + mmio_stale_data: Vulnerable + retbleed: Vulnerable + spec_store_bypass: Vulnerable + spectre_v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers + spectre_v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected + srbds: Vulnerable + tsx_async_abort: Mitigation of TSX disabled
  - retbleed=off: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Vulnerable + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; IBRS_FW; STIBP: conditional; RSB filling; PBRSB-eIBRS: Not affected + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of TSX disabled
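
The per-configuration mitigation strings above are what the kernel itself reports. Before each run, the active configuration can be double-checked from userspace through standard kernel interfaces; a minimal sketch:

    # Kernel command line actually in effect for this boot
    cat /proc/cmdline
    # Per-vulnerability mitigation status (e.g. the retbleed file reads "Mitigation: Stuffing" under retbleed=stuff)
    grep . /sys/devices/system/cpu/vulnerabilities/*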

[Logarithmic Result Overview: relative performance of Default - IBRS, retbleed=stuff, mitigations=off, and retbleed=off across all benchmarks in this comparison: Hackbench, ctx_clock, Sockperf, PostMark, KeyDB, Natron, Stress-NG, OSBench, Memcached, EnCodec, nginx, JPEG XL Decoding libjxl, Facebook RocksDB, JPEG XL libjxl, Dragonflydb, Mobile Neural Network, memtier_benchmark, Timed Erlang/OTP Compilation, Timed Linux Kernel Compilation, GraphicsMagick, Renaissance, PostgreSQL, NCNN, ClickHouse, Timed PHP Compilation, MariaDB, Redis, DaCapo Benchmark, CockroachDB, Selenium, Timed Node.js Compilation, Unpacking The Linux Kernel, Apache Spark, Timed CPython Compilation, libavif avifenc, LuaRadio, SVT-AV1, OpenRadioss, 7-Zip Compression, oneDNN, FLAC Audio Encoding, Numenta Anomaly Benchmark, rav1e, OpenFOAM, WebP2 Image Encode, Neural Magic DeepSparse, BRL-CAD, WebP Image Encode, Stargate Digital Audio Workstation, Cpuminer-Opt, OpenVINO, miniBUDE, srsRAN, Blender, SMHasher, nekRS, Xmrig, and OpenVKL.]

[Detailed per-test results for Default - IBRS, retbleed=stuff, mitigations=off, and retbleed=off: the complete raw data for every test in this comparison is available in the OpenBenchmarking.org result file 2212262-NE-RETBLEEDS60; a selection of individual result graphs follows below.]

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.
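
Conceptually the test is just a timed, default-configuration source build; something along these lines (a rough sketch of the idea, not the exact commands the test profile runs, and the tarball name is assumed from the 18.8 version used below):

    # Timed out-of-the-box Node.js build using all available cores
    tar xf node-v18.8.0.tar.gz && cd node-v18.8.0
    time (./configure && make -j"$(nproc)")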

Timed Node.js Compilation 18.8 - Time To Compile (Seconds, Fewer Is Better)
  Default - IBRS:   1783.49  (SE +/- 0.05, N = 3; Min/Avg/Max: 1783.4 / 1783.49 / 1783.55)
  retbleed=stuff:   1748.87  (SE +/- 0.48, N = 3; Min/Avg/Max: 1748.08 / 1748.87 / 1749.75)
  mitigations=off:  1716.49  (SE +/- 0.72, N = 3; Min/Avg/Max: 1715.57 / 1716.49 / 1717.92)
  retbleed=off:     1734.97  (SE +/- 0.16, N = 3; Min/Avg/Max: 1734.67 / 1734.97 / 1735.23)

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, Fewer Is Better)
  Default - IBRS:   1649.05  (SE +/- 6.44, N = 3; Min/Avg/Max: 1642.36 / 1649.05 / 1661.92)
  retbleed=stuff:   1633.50  (SE +/- 4.35, N = 3; Min/Avg/Max: 1628.44 / 1633.5 / 1642.16)
  mitigations=off:  1617.86  (SE +/- 0.41, N = 3; Min/Avg/Max: 1617.19 / 1617.86 / 1618.59)
  retbleed=off:     1626.25  (SE +/- 1.45, N = 3; Min/Avg/Max: 1623.44 / 1626.25 / 1628.3)

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. NekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. NekRS is part of Nek5000 from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming. Learn more via the OpenBenchmarking.org test page.

nekRS 22.0 - Input: TurboPipe Periodic (FLOP/s, More Is Better)
  Default - IBRS:   19263800000  (SE +/- 6547518.61, N = 3; Min/Avg/Max: 19250900000 / 19263800000 / 19272200000)
  retbleed=stuff:   19226766667  (SE +/- 26938345.24, N = 3; Min/Avg/Max: 19173600000 / 19226766666.67 / 19260900000)
  mitigations=off:  19262600000  (SE +/- 7918543.64, N = 3; Min/Avg/Max: 19247200000 / 19262600000 / 19273500000)
  retbleed=off:     19250000000  (SE +/- 11775822.69, N = 3; Min/Avg/Max: 19227900000 / 19250000000 / 19268100000)
  1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -pthread -lmpi_cxx -lmpi

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Quality 95, Compression Effort 7 (MP/s, More Is Better)
  Default - IBRS:   0.02  (SE +/- 0.00, N = 3)
  retbleed=stuff:   0.02  (SE +/- 0.00, N = 3)
  mitigations=off:  0.02  (SE +/- 0.00, N = 3)
  retbleed=off:     0.02  (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl -lpthread

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.
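
The "Clients: 16" and "Clients: 8" configurations below correspond to mysqlslap's concurrency setting, i.e. how many emulated clients load the server at once. A hedged sketch of that style of invocation (illustrative arguments, not necessarily what the test profile passes):

    # Emulate 16 concurrent clients against a local MariaDB server with auto-generated SQL
    mysqlslap --user=root --auto-generate-sql --concurrency=16 --iterations=3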

MariaDB 10.8.2 - Clients: 16 (Queries Per Second, More Is Better)
  Default - IBRS:   258  (SE +/- 0.33, N = 3; Min/Avg/Max: 257.47 / 258.04 / 258.6)
  retbleed=stuff:   270  (SE +/- 0.76, N = 3; Min/Avg/Max: 269.18 / 270.3 / 271.74)
  mitigations=off:  269  (SE +/- 0.38, N = 3; Min/Avg/Max: 268.31 / 268.89 / 269.61)
  retbleed=off:     272  (SE +/- 0.80, N = 3; Min/Avg/Max: 270.49 / 271.99 / 273.22)
  1. (CXX) g++ options: -pie -fPIC -fstack-protector -O3 -pthread -lnuma -lpcre2-8 -lcrypt -laio -lz -lm -lssl -lcrypto -lpthread -ldl

MariaDB 10.8.2 - Clients: 8 (Queries Per Second, More Is Better)
  Default - IBRS:   283  (SE +/- 0.51, N = 3; Min/Avg/Max: 281.85 / 282.54 / 283.53)
  retbleed=stuff:   295  (SE +/- 0.28, N = 3; Min/Avg/Max: 294.12 / 294.67 / 295.07)
  mitigations=off:  288  (SE +/- 0.36, N = 3; Min/Avg/Max: 286.86 / 287.58 / 288.02)
  retbleed=off:     296  (SE +/- 0.43, N = 3; Min/Avg/Max: 295.07 / 295.77 / 296.56)
  1. (CXX) g++ options: -pie -fPIC -fstack-protector -O3 -pthread -lnuma -lpcre2-8 -lcrypt -laio -lz -lm -lssl -lcrypto -lpthread -ldl

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
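
Renaissance is distributed as a single JAR and each workload is selected by name on the command line. A minimal sketch, assuming the 0.14 GPL bundle and its published benchmark names (e.g. "als" for the Apache Spark ALS workload shown below):

    # Run one Renaissance workload; jar and benchmark names are assumptions based on the 0.14 release
    java -jar renaissance-gpl-0.14.0.jar als
    # Repetition count can be raised for steadier JIT-warmed numbers
    java -jar renaissance-gpl-0.14.0.jar --repetitions 10 als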

Renaissance 0.14 - Test: Apache Spark ALS (ms, Fewer Is Better)
  Default - IBRS:   27451.4  (SE +/- 171.52, N = 3; Min/Avg/Max: 27153.07 / 27451.38 / 27747.23)
  retbleed=stuff:   23571.8  (SE +/- 337.23, N = 3; Min/Avg/Max: 23128.71 / 23571.8 / 24233.72)
  mitigations=off:  21755.3  (SE +/- 84.68, N = 3; Min/Avg/Max: 21586.06 / 21755.34 / 21844.2)
  retbleed=off:     23525.6  (SE +/- 116.97, N = 3; Min/Avg/Max: 23310.37 / 23525.61 / 23712.59)

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, Fewer Is Better)
  Default - IBRS:   735.24  (SE +/- 0.01, N = 3; Min/Avg/Max: 735.23 / 735.24 / 735.25)
  retbleed=stuff:   732.52  (SE +/- 0.53, N = 3; Min/Avg/Max: 731.51 / 732.52 / 733.29)
  mitigations=off:  729.94  (SE +/- 0.74, N = 3; Min/Avg/Max: 728.96 / 729.94 / 731.4)
  retbleed=off:     731.86  (SE +/- 0.27, N = 3; Min/Avg/Max: 731.49 / 731.86 / 732.38)

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
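
Recent Xmrig releases also include a self-contained benchmark mode that runs a fixed number of RandomX hashes without needing a pool or wallet, which is the style of workload the "Hash Count: 1M" results below reflect. A hedged sketch (flag taken from Xmrig's benchmark mode; the exact test-profile arguments may differ):

    # Offline RandomX benchmark over one million hashes
    xmrig --bench=1M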

Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M (H/s, More Is Better)
  Default - IBRS:   1429.8  (SE +/- 0.41, N = 3; Min/Avg/Max: 1429.2 / 1429.83 / 1430.6)
  retbleed=stuff:   1428.6  (SE +/- 0.94, N = 3; Min/Avg/Max: 1427.6 / 1428.63 / 1430.5)
  mitigations=off:  1430.4  (SE +/- 1.27, N = 3; Min/Avg/Max: 1429.1 / 1430.37 / 1432.9)
  retbleed=off:     1428.3  (SE +/- 1.47, N = 3; Min/Avg/Max: 1426.1 / 1428.33 / 1431.1)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: SHA3-256 (cycles/hash, Fewer Is Better)
  Default - IBRS:   3226.92  (SE +/- 0.08, N = 3; Min/Avg/Max: 3226.76 / 3226.92 / 3227.05)
  retbleed=stuff:   3226.73  (SE +/- 0.16, N = 3; Min/Avg/Max: 3226.45 / 3226.73 / 3226.99)
  mitigations=off:  3270.42  (SE +/- 44.34, N = 3; Min/Avg/Max: 3225.94 / 3270.42 / 3359.1)
  retbleed=off:     3232.77  (SE +/- 5.45, N = 3; Min/Avg/Max: 3226.95 / 3232.77 / 3243.67)
  1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects -lpthread

SMHasher 2022-08-22 - Hash: SHA3-256 (MiB/sec, More Is Better)
  Default - IBRS:   122.26  (SE +/- 0.18, N = 3; Min/Avg/Max: 121.91 / 122.26 / 122.49)
  retbleed=stuff:   122.13  (SE +/- 0.16, N = 3; Min/Avg/Max: 121.9 / 122.13 / 122.43)
  mitigations=off:  120.76  (SE +/- 1.74, N = 3; Min/Avg/Max: 117.28 / 120.76 / 122.65)
  retbleed=off:     122.25  (SE +/- 0.27, N = 3; Min/Avg/Max: 121.71 / 122.25 / 122.54)
  1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects -lpthread

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds, Fewer Is Better)
  Default - IBRS:   565.84  (SE +/- 0.10, N = 3; Min/Avg/Max: 565.64 / 565.84 / 565.96)
  retbleed=stuff:   559.22  (SE +/- 0.39, N = 3; Min/Avg/Max: 558.46 / 559.22 / 559.73)
  mitigations=off:  553.46  (SE +/- 0.05, N = 3; Min/Avg/Max: 553.39 / 553.46 / 553.56)
  retbleed=off:     557.08  (SE +/- 0.43, N = 3; Min/Avg/Max: 556.24 / 557.08 / 557.66)

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
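
In spirit the profile generates a synthetic dataset once and then times each operation (group-by, repartition, inner join, SHA-512, and so on) through spark-submit in local mode. A rough sketch only; the script names below are placeholders standing in for the pyspark-benchmark entry points rather than copied from that repository:

    # Hypothetical wrapper around pyspark-benchmark-style scripts (script/argument names are placeholders)
    spark-submit --master 'local[*]' generate_data.py /tmp/spark-bench --rows 1000000
    spark-submit --master 'local[*]' run_benchmark.py /tmp/spark-bench --partitions 500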

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better)
  Default - IBRS:   3.38  (SE +/- 0.05, N = 3; Min/Avg/Max: 3.33 / 3.38 / 3.47)
  retbleed=stuff:   3.34  (SE +/- 0.02, N = 4; Min/Avg/Max: 3.29 / 3.34 / 3.37)
  mitigations=off:  3.36  (SE +/- 0.08, N = 3; Min/Avg/Max: 3.21 / 3.36 / 3.5)
  retbleed=off:     3.44  (SE +/- 0.04, N = 3; Min/Avg/Max: 3.38 / 3.44 / 3.52)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Inner Join Test Time (Seconds, Fewer Is Better)
  Default - IBRS:   4.05  (SE +/- 0.06, N = 3; Min/Avg/Max: 3.94 / 4.05 / 4.16)
  retbleed=stuff:   4.03  (SE +/- 0.04, N = 4; Min/Avg/Max: 3.9 / 4.03 / 4.1)
  mitigations=off:  3.90  (SE +/- 0.01, N = 3; Min/Avg/Max: 3.88 / 3.9 / 3.93)
  retbleed=off:     4.02  (SE +/- 0.04, N = 3; Min/Avg/Max: 3.95 / 4.02 / 4.08)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Repartition Test Time (Seconds, Fewer Is Better)
  Default - IBRS:   3.98  (SE +/- 0.09, N = 3; Min/Avg/Max: 3.8 / 3.98 / 4.09)
  retbleed=stuff:   3.74  (SE +/- 0.02, N = 4; Min/Avg/Max: 3.7 / 3.74 / 3.77)
  mitigations=off:  3.71  (SE +/- 0.07, N = 3; Min/Avg/Max: 3.58 / 3.71 / 3.84)
  retbleed=off:     3.68  (SE +/- 0.08, N = 3; Min/Avg/Max: 3.58 / 3.68 / 3.84)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Group By Test Time (Seconds, Fewer Is Better)
  Default - IBRS:   6.14  (SE +/- 0.04, N = 3; Min/Avg/Max: 6.09 / 6.14 / 6.23)
  retbleed=stuff:   5.88  (SE +/- 0.07, N = 4; Min/Avg/Max: 5.69 / 5.88 / 6.04)
  mitigations=off:  5.70  (SE +/- 0.06, N = 3; Min/Avg/Max: 5.6 / 5.7 / 5.8)
  retbleed=off:     5.84  (SE +/- 0.11, N = 3; Min/Avg/Max: 5.7 / 5.84 / 6.06)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better)
  Default - IBRS:   24.33  (SE +/- 0.11, N = 3; Min/Avg/Max: 24.12 / 24.33 / 24.51)
  retbleed=stuff:   24.18  (SE +/- 0.07, N = 4; Min/Avg/Max: 23.96 / 24.18 / 24.28)
  mitigations=off:  24.89  (SE +/- 0.66, N = 3; Min/Avg/Max: 24.21 / 24.89 / 26.21)
  retbleed=off:     24.24  (SE +/- 0.02, N = 3; Min/Avg/Max: 24.21 / 24.24 / 24.26)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark (Seconds, Fewer Is Better)
  Default - IBRS:   436.79  (SE +/- 0.18, N = 3; Min/Avg/Max: 436.43 / 436.79 / 437.03)
  retbleed=stuff:   437.77  (SE +/- 1.00, N = 4; Min/Avg/Max: 436.18 / 437.77 / 440.69)
  mitigations=off:  436.93  (SE +/- 0.45, N = 3; Min/Avg/Max: 436.11 / 436.93 / 437.67)
  retbleed=off:     436.32  (SE +/- 0.22, N = 3; Min/Avg/Max: 435.95 / 436.32 / 436.72)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time (Seconds, Fewer Is Better)
  Default - IBRS:   6.49  (SE +/- 0.07, N = 3; Min/Avg/Max: 6.36 / 6.49 / 6.57)
  retbleed=stuff:   6.11  (SE +/- 0.08, N = 4; Min/Avg/Max: 5.93 / 6.11 / 6.31)
  mitigations=off:  5.89  (SE +/- 0.03, N = 3; Min/Avg/Max: 5.84 / 5.89 / 5.94)
  retbleed=off:     6.29  (SE +/- 0.02, N = 3; Min/Avg/Max: 6.25 / 6.29 / 6.32)

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Quality 75, Compression Effort 7 (MP/s, More Is Better)
  Default - IBRS:   0.05  (SE +/- 0.00, N = 3)
  retbleed=stuff:   0.05  (SE +/- 0.00, N = 3)
  mitigations=off:  0.05  (SE +/- 0.00, N = 3)
  retbleed=off:     0.05  (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl -lpthread

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better)
  Default - IBRS:   2030.7  (SE +/- 1.18, N = 3; Min/Avg/Max: 2028.4 / 2030.73 / 2032.2)
  retbleed=stuff:   2031.1  (SE +/- 1.62, N = 3; Min/Avg/Max: 2028.3 / 2031.1 / 2033.9)
  mitigations=off:  2033.8  (SE +/- 0.87, N = 3; Min/Avg/Max: 2032.8 / 2033.77 / 2035.5)
  retbleed=off:     2029.0  (SE +/- 2.44, N = 3; Min/Avg/Max: 2024.5 / 2029 / 2032.9)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better)
  Default - IBRS:   3.14  (SE +/- 0.03, N = 3; Min/Avg/Max: 3.08 / 3.14 / 3.17)
  retbleed=stuff:   2.99  (SE +/- 0.06, N = 3; Min/Avg/Max: 2.87 / 2.99 / 3.08)
  mitigations=off:  3.19  (SE +/- 0.22, N = 3; Min/Avg/Max: 2.87 / 3.19 / 3.62)
  retbleed=off:     2.98  (SE +/- 0.06, N = 3; Min/Avg/Max: 2.88 / 2.98 / 3.08)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Inner Join Test Time (Seconds, Fewer Is Better)
  Default - IBRS:   3.58  (SE +/- 0.03, N = 3; Min/Avg/Max: 3.53 / 3.58 / 3.63)
  retbleed=stuff:   3.56  (SE +/- 0.09, N = 3; Min/Avg/Max: 3.39 / 3.56 / 3.68)
  mitigations=off:  3.53  (SE +/- 0.03, N = 3; Min/Avg/Max: 3.51 / 3.53 / 3.59)
  retbleed=off:     3.52  (SE +/- 0.03, N = 3; Min/Avg/Max: 3.46 / 3.52 / 3.56)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Repartition Test Time (Seconds, Fewer Is Better)
  Default - IBRS:   3.69  (SE +/- 0.05, N = 3; Min/Avg/Max: 3.6 / 3.69 / 3.78)
  retbleed=stuff:   3.54  (SE +/- 0.03, N = 3; Min/Avg/Max: 3.47 / 3.54 / 3.58)
  mitigations=off:  3.51  (SE +/- 0.10, N = 3; Min/Avg/Max: 3.4 / 3.51 / 3.7)
  retbleed=off:     3.51  (SE +/- 0.06, N = 3; Min/Avg/Max: 3.4 / 3.51 / 3.62)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Group By Test Time (Seconds, Fewer Is Better)
  Default - IBRS:   5.40  (SE +/- 0.07, N = 3; Min/Avg/Max: 5.3 / 5.4 / 5.54)
  retbleed=stuff:   5.18  (SE +/- 0.05, N = 3; Min/Avg/Max: 5.11 / 5.18 / 5.27)
  mitigations=off:  5.03  (SE +/- 0.03, N = 3; Min/Avg/Max: 4.97 / 5.03 / 5.08)
  retbleed=off:     5.09  (SE +/- 0.07, N = 3; Min/Avg/Max: 5.03 / 5.09 / 5.23)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better)
  Default - IBRS:   24.54  (SE +/- 0.12, N = 3; Min/Avg/Max: 24.38 / 24.54 / 24.78)
  retbleed=stuff:   24.16  (SE +/- 0.06, N = 3; Min/Avg/Max: 24.09 / 24.16 / 24.28)
  mitigations=off:  24.00  (SE +/- 0.09, N = 3; Min/Avg/Max: 23.88 / 24 / 24.17)
  retbleed=off:     24.20  (SE +/- 0.11, N = 3; Min/Avg/Max: 23.98 / 24.2 / 24.34)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, Fewer Is Better)
  Default - IBRS:   441.23  (SE +/- 0.93, N = 3; Min/Avg/Max: 440.1 / 441.23 / 443.07)
  retbleed=stuff:   439.91  (SE +/- 0.48, N = 3; Min/Avg/Max: 439.37 / 439.91 / 440.87)
  mitigations=off:  441.21  (SE +/- 1.14, N = 3; Min/Avg/Max: 439.98 / 441.21 / 443.49)
  retbleed=off:     439.84  (SE +/- 0.28, N = 3; Min/Avg/Max: 439.56 / 439.84 / 440.41)

Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, Fewer Is Better)
  Default - IBRS:   5.90  (SE +/- 0.06, N = 3; Min/Avg/Max: 5.78 / 5.9 / 5.97)
  retbleed=stuff:   5.93  (SE +/- 0.10, N = 3; Min/Avg/Max: 5.8 / 5.93 / 6.12)
  mitigations=off:  5.63  (SE +/- 0.07, N = 3; Min/Avg/Max: 5.52 / 5.63 / 5.76)
  retbleed=off:     5.69  (SE +/- 0.05, N = 3; Min/Avg/Max: 5.6 / 5.69 / 5.77)

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
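
The encoder exercised here is the reference cjxl tool from libjxl, with the "Quality" configurations mapping onto its quality setting. A minimal sketch of typical cjxl usage (not necessarily the exact arguments the test profile uses):

    # Encode PNG and JPEG sources to JPEG XL at quality 100 using all threads
    cjxl input.png output-png.jxl -q 100 --num_threads="$(nproc)"
    cjxl input.jpg output-jpg.jxl -q 100 --num_threads="$(nproc)"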

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100 (MP/s, More Is Better)
  Default - IBRS:   0.53  (SE +/- 0.00, N = 3; Min/Avg/Max: 0.53 / 0.53 / 0.53)
  retbleed=stuff:   0.55  (SE +/- 0.00, N = 3; Min/Avg/Max: 0.54 / 0.55 / 0.55)
  mitigations=off:  0.56  (SE +/- 0.00, N = 3; Min/Avg/Max: 0.55 / 0.56 / 0.56)
  retbleed=off:     0.55  (SE +/- 0.00, N = 3; Min/Avg/Max: 0.55 / 0.55 / 0.55)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, Fewer Is Better)
  Default - IBRS:   22195.3  (SE +/- 10.16, N = 3; Min/Avg/Max: 22175.14 / 22195.33 / 22207.32)
  retbleed=stuff:   19597.7  (SE +/- 78.16, N = 3; Min/Avg/Max: 19441.37 / 19597.7 / 19675.95)
  mitigations=off:  18521.4  (SE +/- 104.72, N = 3; Min/Avg/Max: 18335.77 / 18521.37 / 18698.19)
  retbleed=off:     19637.6  (SE +/- 39.43, N = 3; Min/Avg/Max: 19560.27 / 19637.62 / 19689.62)

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 (MP/s, More Is Better)
  Default - IBRS:   0.55  (SE +/- 0.01, N = 3; Min/Avg/Max: 0.54 / 0.55 / 0.56)
  retbleed=stuff:   0.56  (SE +/- 0.00, N = 3; Min/Avg/Max: 0.56 / 0.56 / 0.56)
  mitigations=off:  0.57  (SE +/- 0.00, N = 3; Min/Avg/Max: 0.57 / 0.57 / 0.57)
  retbleed=off:     0.56  (SE +/- 0.00, N = 3; Min/Avg/Max: 0.56 / 0.56 / 0.57)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: KNN CAD (Seconds, Fewer Is Better)
  Default - IBRS:   423.22  (SE +/- 5.51, N = 3; Min/Avg/Max: 412.24 / 423.22 / 429.6)
  retbleed=stuff:   410.61  (SE +/- 0.43, N = 3; Min/Avg/Max: 410.11 / 410.61 / 411.48)
  mitigations=off:  402.73  (SE +/- 1.85, N = 3; Min/Avg/Max: 400.55 / 402.73 / 406.4)
  retbleed=off:     413.52  (SE +/- 2.44, N = 3; Min/Avg/Max: 410.3 / 413.52 / 418.31)

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean; More Is Better)
  Default - IBRS:   79.54  (SE +/- 0.72, N = 3; Min/Avg/Max: 78.58 / 79.54 / 80.94)
  retbleed=stuff:   81.55  (SE +/- 0.41, N = 11; Min/Avg/Max: 78.57 / 81.55 / 82.83)
  mitigations=off:  83.60  (SE +/- 0.21, N = 3; Min/Avg/Max: 83.28 / 83.6 / 84.01)
  retbleed=off:     82.35  (SE +/- 1.06, N = 4; Min/Avg/Max: 79.18 / 82.35 / 83.66)
  1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, more is better):
  Default - IBRS:  76.82 (SE +/- 0.80, N = 3, min 75.54 / max 78.3; overall MIN 5.2 / MAX 20000)
  retbleed=stuff:  81.37 (SE +/- 0.47, N = 11, min 78.11 / max 83.02; overall MIN 5.83 / MAX 20000)
  mitigations=off: 83.14 (SE +/- 0.52, N = 3, min 82.19 / max 83.99; overall MIN 5.96 / MAX 20000)
  retbleed=off:    83.37 (SE +/- 0.38, N = 4, min 82.52 / max 84.16; overall MIN 5.89 / MAX 20000)
  1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, more is better):
  Default - IBRS:  74.87 (SE +/- 1.00, N = 3, min 73.54 / max 76.82; overall MIN 4.96 / MAX 15000)
  retbleed=stuff:  78.18 (SE +/- 0.67, N = 11, min 72.51 / max 80.21; overall MIN 5.58 / MAX 20000)
  mitigations=off: 80.13 (SE +/- 1.04, N = 3, min 78.11 / max 81.52; overall MIN 5.87 / MAX 20000)
  retbleed=off:    79.46 (SE +/- 1.18, N = 4, min 75.92 / max 80.8; overall MIN 5.8 / MAX 20000)
  1. ClickHouse server version 22.5.4.19 (official build).

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark Bayes (ms, fewer is better):
  Default - IBRS:  3210.1 (SE +/- 43.03, N = 15, min 2854.53 / max 3464.54; overall MIN 2206.57 / MAX 3464.54)
  retbleed=stuff:  3177.8 (SE +/- 33.97, N = 7, min 3109.67 / max 3365.98; overall MIN 2429.75 / MAX 3365.98)
  mitigations=off: 3215.2 (SE +/- 43.55, N = 15, min 3011.92 / max 3533.89; overall MIN 2315.41 / MAX 3533.89)
  retbleed=off:    3140.9 (SE +/- 25.04, N = 15, min 2985.33 / max 3358.6; overall MIN 2328.29 / MAX 3358.6)

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3 - Input: Spaceship (FPS, more is better):
  Default - IBRS:  1.2 (SE +/- 0.02, N = 15, min 1.0 / max 1.3)
  retbleed=stuff:  0.7 (SE +/- 0.01, N = 12, min 0.7 / max 0.8)
  mitigations=off: 0.7 (SE +/- 0.01, N = 12, min 0.7 / max 0.8)
  retbleed=off:    0.8 (SE +/- 0.00, N = 3, min 0.8 / max 0.8)

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam (Seconds, fewer is better):
  Default - IBRS:  412.01 (SE +/- 0.61, N = 3, min 410.91 / max 413.01)
  retbleed=stuff:  404.13 (SE +/- 0.31, N = 3, min 403.63 / max 404.7)
  mitigations=off: 398.01 (SE +/- 0.18, N = 3, min 397.68 / max 398.3)
  retbleed=off:    403.77 (SE +/- 0.41, N = 3, min 403.06 / max 404.49)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark Scalar (Items / Sec, more is better):
  Default - IBRS:  36 (MIN 4 / MAX 701)
  retbleed=stuff:  36 (MIN 4 / MAX 700)
  mitigations=off: 36 (MIN 4 / MAX 701)
  retbleed=off:    36 (MIN 4 / MAX 701)

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec, more is better):
  Default - IBRS:  67 (MIN 8 / MAX 1035)
  retbleed=stuff:  67 (MIN 8 / MAX 1039)
  mitigations=off: 67 (MIN 8 / MAX 1038)
  retbleed=off:    67 (MIN 8 / MAX 1036)
  (One configuration additionally reported SE +/- 0.33, N = 3, with run values ranging 66 - 67.)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 0 (Seconds, fewer is better):
  Default - IBRS:  379.22 (SE +/- 0.22, N = 3, min 378.81 / max 379.58)
  retbleed=stuff:  380.58 (SE +/- 0.48, N = 3, min 379.62 / max 381.1)
  mitigations=off: 379.28 (SE +/- 1.22, N = 3, min 378.03 / max 381.73)
  retbleed=off:    377.40 (SE +/- 0.25, N = 3, min 376.9 / max 377.74)
  1. (CXX) g++ options: -O3 -fPIC -lm

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better):
  Default - IBRS:  369.61 (SE +/- 0.15, N = 3, min 369.36 / max 369.87)
  retbleed=stuff:  368.09 (SE +/- 0.72, N = 3, min 367.29 / max 369.52)
  mitigations=off: 368.70 (SE +/- 0.52, N = 3, min 367.8 / max 369.6)
  retbleed=off:    367.55 (SE +/- 0.06, N = 3, min 367.46 / max 367.67)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, fewer is better):
  Default - IBRS: 916.44, retbleed=stuff: 913.50, mitigations=off: 909.74, retbleed=off: 910.87
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lphysicalProperties -lspecie -lfiniteVolume -lfvModels -lgenericPatchFields -lmeshTools -lsampling -lOpenFOAM -ldl -lm

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, fewer is better):
  Default - IBRS: 89.67, retbleed=stuff: 89.59, mitigations=off: 89.42, retbleed=off: 90.52
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lphysicalProperties -lspecie -lfiniteVolume -lfvModels -lgenericPatchFields -lmeshTools -lsampling -lOpenFOAAM -ldl -lm

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.
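
For a sense of the workload pattern, the following is a minimal Python sketch (not the actual hackbench C sources, and with far smaller counts than hackbench's defaults): groups of sender/receiver workers exchange small messages over kernel socket pairs, which is the kind of fork- and wakeup-heavy load that exercises the scheduler.

# Simplified, hypothetical stand-in for the hackbench messaging pattern.
import socket, time
from multiprocessing import Process

GROUPS, PAIRS, MESSAGES, SIZE = 2, 10, 1000, 100   # deliberately tiny versus hackbench defaults

def sender(sock):
    payload = b"x" * SIZE
    for _ in range(MESSAGES):                      # stream fixed-size messages
        sock.sendall(payload)
    sock.close()

def receiver(sock):
    remaining = MESSAGES * SIZE
    while remaining > 0:                           # drain everything the sender wrote
        remaining -= len(sock.recv(4096))
    sock.close()

def run():
    procs = []
    for _ in range(GROUPS * PAIRS):
        a, b = socket.socketpair()                 # one kernel socket pair per sender/receiver pair
        procs += [Process(target=sender, args=(a,)),
                  Process(target=receiver, args=(b,))]
    start = time.time()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.time() - start

if __name__ == "__main__":
    print(f"elapsed: {run():.3f} s")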

Hackbench - Count: 16 - Type: Thread (Seconds, fewer is better):
  Default - IBRS:  221.70 (SE +/- 2.54, N = 6, min 218.58 / max 234.37)
  retbleed=stuff:  167.70 (SE +/- 0.52, N = 3, min 166.89 / max 168.66)
  mitigations=off: 101.21 (SE +/- 1.10, N = 15, min 96.97 / max 109.42)
  retbleed=off:    142.98 (SE +/- 0.19, N = 3, min 142.59 / max 143.18)
  1. (CC) gcc options: -lpthread

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Cell Phone Drop Test (Seconds, fewer is better):
  Default - IBRS:  308.94 (SE +/- 0.54, N = 3, min 307.86 / max 309.54)
  retbleed=stuff:  307.34 (SE +/- 0.36, N = 3, min 306.66 / max 307.9)
  mitigations=off: 306.77 (SE +/- 0.38, N = 3, min 306.15 / max 307.47)
  retbleed=off:    307.30 (SE +/- 0.17, N = 3, min 306.96 / max 307.54)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
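
A rough sketch of what such a timed defconfig build amounts to is shown below in Python (assuming a kernel source tree in the current directory; this is not the Phoronix Test Suite profile itself, just the "make defconfig" plus parallel "make" steps that get timed).

# Sketch only: time a default-configuration kernel build in the current directory.
import os, subprocess, time

def timed_defconfig_build(jobs=os.cpu_count() or 1):
    subprocess.run(["make", "defconfig"], check=True)    # generate the default config
    start = time.time()
    subprocess.run(["make", f"-j{jobs}"], check=True)    # parallel build across all threads
    return time.time() - start

if __name__ == "__main__":
    print(f"defconfig build: {timed_defconfig_build():.1f} seconds")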

Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, fewer is better):
  Default - IBRS:  287.90 (SE +/- 1.63, N = 3, min 285.02 / max 290.68)
  retbleed=stuff:  271.33 (SE +/- 1.01, N = 3, min 270.18 / max 273.34)
  mitigations=off: 260.62 (SE +/- 0.76, N = 3, min 259.16 / max 261.71)
  retbleed=off:    266.84 (SE +/- 1.01, N = 3, min 265.59 / max 268.83)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline (Seconds, fewer is better):
  Default - IBRS:  262.98 (SE +/- 2.15, N = 3, min 259.79 / max 267.07)
  retbleed=stuff:  264.63 (SE +/- 1.77, N = 3, min 262.33 / max 268.11)
  mitigations=off: 262.71 (SE +/- 0.88, N = 3, min 260.97 / max 263.79)
  retbleed=off:    259.64 (SE +/- 1.62, N = 3, min 257.77 / max 262.86)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better):
  Default - IBRS:  13903.1 (SE +/- 54.55, N = 3, min 13818.6 / max 14005.14; overall MIN 10489.9 / MAX 14005.14)
  retbleed=stuff:  12739.7 (SE +/- 30.85, N = 3, min 12685.56 / max 12792.39; overall MIN 9693.86 / MAX 12792.39)
  mitigations=off: 11106.7 (SE +/- 41.07, N = 3, min 11047.51 / max 11185.64; overall MIN 8323.26 / MAX 11185.64)
  retbleed=off:    12396.2 (SE +/- 92.58, N = 3, min 12211.11 / max 12494.23; overall MIN 9295.02 / MAX 12494.23)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile is building the OpenMP / CPU threaded version for processor benchmarking and not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: inception-v3 (ms, fewer is better):
  Default - IBRS:  46.87 (SE +/- 0.04, N = 3, min 46.84 / max 46.95; overall MIN 46.67 / MAX 67.33)
  retbleed=stuff:  45.85 (SE +/- 0.03, N = 3, min 45.79 / max 45.88; overall MIN 45.65 / MAX 64.31)
  mitigations=off: 43.41 (SE +/- 0.15, N = 3, min 43.11 / max 43.6; overall MIN 42.98 / MAX 77.27)
  retbleed=off:    46.33 (SE +/- 0.18, N = 3, min 46.15 / max 46.69; overall MIN 46.01 / MAX 65.9)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, fewer is better):
  Default - IBRS:  4.570 (SE +/- 0.005, N = 3, min 4.56 / max 4.58; overall MIN 4.53 / MAX 24.87)
  retbleed=stuff:  4.501 (SE +/- 0.002, N = 3, min 4.5 / max 4.5; overall MIN 4.46 / MAX 5.13)
  mitigations=off: 4.421 (SE +/- 0.004, N = 3, min 4.42 / max 4.43; overall MIN 4.38 / MAX 19.05)
  retbleed=off:    4.528 (SE +/- 0.012, N = 3, min 4.52 / max 4.55; overall MIN 4.48 / MAX 8.73)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, fewer is better):
  Default - IBRS:  4.622 (SE +/- 0.003, N = 3, min 4.62 / max 4.63; overall MIN 4.58 / MAX 6.62)
  retbleed=stuff:  4.489 (SE +/- 0.006, N = 3, min 4.48 / max 4.5; overall MIN 4.43 / MAX 5.73)
  mitigations=off: 4.233 (SE +/- 0.015, N = 3, min 4.21 / max 4.26; overall MIN 4.17 / MAX 7.41)
  retbleed=off:    4.515 (SE +/- 0.027, N = 3, min 4.49 / max 4.57; overall MIN 4.44 / MAX 5.58)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, fewer is better):
  Default - IBRS:  6.662 (SE +/- 0.005, N = 3, min 6.65 / max 6.67; overall MIN 6.58 / MAX 8.88)
  retbleed=stuff:  6.483 (SE +/- 0.013, N = 3, min 6.47 / max 6.51; overall MIN 6.42 / MAX 8.8)
  mitigations=off: 6.124 (SE +/- 0.029, N = 3, min 6.09 / max 6.18; overall MIN 6.04 / MAX 16.59)
  retbleed=off:    6.589 (SE +/- 0.017, N = 3, min 6.57 / max 6.62; overall MIN 6.52 / MAX 13.19)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, fewer is better):
  Default - IBRS:  41.05 (SE +/- 0.02, N = 3, min 41.02 / max 41.09; overall MIN 40.85 / MAX 88.28)
  retbleed=stuff:  39.58 (SE +/- 0.01, N = 3, min 39.56 / max 39.61; overall MIN 38.97 / MAX 57.82)
  mitigations=off: 35.44 (SE +/- 0.01, N = 3, min 35.42 / max 35.46; overall MIN 35.31 / MAX 50.16)
  retbleed=off:    40.21 (SE +/- 0.11, N = 3, min 40.07 / max 40.43; overall MIN 39.91 / MAX 59.34)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better):
  Default - IBRS:  4.492 (SE +/- 0.001, N = 3, min 4.49 / max 4.49; overall MIN 4.44 / MAX 9.15)
  retbleed=stuff:  4.358 (SE +/- 0.016, N = 3, min 4.34 / max 4.39; overall MIN 4.28 / MAX 29.76)
  mitigations=off: 3.874 (SE +/- 0.013, N = 3, min 3.85 / max 3.9; overall MIN 3.82 / MAX 4.21)
  retbleed=off:    4.417 (SE +/- 0.016, N = 3, min 4.4 / max 4.45; overall MIN 4.33 / MAX 23.76)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better):
  Default - IBRS:  2.240 (SE +/- 0.006, N = 3, min 2.23 / max 2.25; overall MIN 2.19 / MAX 18.29)
  retbleed=stuff:  2.058 (SE +/- 0.007, N = 3, min 2.05 / max 2.07; overall MIN 2.02 / MAX 3.45)
  mitigations=off: 1.797 (SE +/- 0.002, N = 3, min 1.79 / max 1.8; overall MIN 1.77 / MAX 5.04)
  retbleed=off:    2.102 (SE +/- 0.002, N = 3, min 2.1 / max 2.11; overall MIN 2.07 / MAX 6.28)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: nasnet (ms, fewer is better):
  Default - IBRS:  16.57 (SE +/- 0.03, N = 3, min 16.51 / max 16.63; overall MIN 16.41 / MAX 57.47)
  retbleed=stuff:  15.90 (SE +/- 0.11, N = 3, min 15.77 / max 16.12; overall MIN 15.64 / MAX 34.5)
  mitigations=off: 14.19 (SE +/- 0.05, N = 3, min 14.1 / max 14.25; overall MIN 14.02 / MAX 28.42)
  retbleed=off:    16.42 (SE +/- 0.11, N = 3, min 16.3 / max 16.64; overall MIN 16.2 / MAX 35.7)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Scala Dotty (ms, fewer is better):
  Default - IBRS:  1142.4 (SE +/- 4.99, N = 3, min 1136.81 / max 1152.34; overall MIN 867.97 / MAX 2211.31)
  retbleed=stuff:  1141.5 (SE +/- 14.29, N = 15, min 1093.54 / max 1258.4; overall MIN 836.06 / MAX 2372.4)
  mitigations=off: 1150.8 (SE +/- 16.88, N = 15, min 1080.97 / max 1225.79; overall MIN 820.44 / MAX 2321.46)
  retbleed=off:    1215.9 (SE +/- 13.79, N = 3, min 1201.17 / max 1243.49; overall MIN 824.75 / MAX 2207.54)

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, more is better):
  Default - IBRS:  1175178.96 (SE +/- 10509.17, N = 11, min 1126700.66 / max 1225111.9)
  retbleed=stuff:  1232491.42 (SE +/- 17147.18, N = 4, min 1185501.78 / max 1258521.45)
  mitigations=off: 1481936.11 (SE +/- 48717.93, N = 15, min 1229191.06 / max 1827341.63)
  retbleed=off:    1329151.41 (SE +/- 18395.69, N = 15, min 1226872.37 / max 1483889.05)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.6 - VGR Performance Metric (more is better):
  Default - IBRS: 50086, retbleed=stuff: 50064, mitigations=off: 50104, retbleed=off: 50358
  1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -pthread -ldl -lm

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.
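
The source -> processing block -> sink structure that such flow graphs follow can be pictured with the generic Python sketch below (generators standing in for blocks). This is not LuaRadio's actual Lua API, only an illustration of the flow-graph idea under those assumed names.

def source(n):                       # produce n dummy samples
    for i in range(n):
        yield float(i % 16)

def gain(samples, g):                # a per-sample processing block
    for s in samples:
        yield s * g

def sink(samples):                   # consume the stream and summarize it
    return sum(samples)

# A flow graph is just blocks chained end to end, then run to completion.
print(sink(gain(source(1024), 0.5)))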

LuaRadio 0.9.1 - Test: Complex Phase (MiB/s, more is better):
  Default - IBRS:  456.7 (SE +/- 1.81, N = 3, min 454.2 / max 460.2)
  retbleed=stuff:  458.7 (SE +/- 6.95, N = 3, min 444.9 / max 466.9)
  mitigations=off: 476.3 (SE +/- 4.57, N = 3, min 468.1 / max 483.9)
  retbleed=off:    474.6 (SE +/- 4.15, N = 3, min 466.6 / max 480.5)

LuaRadio 0.9.1 - Test: Hilbert Transform (MiB/s, more is better):
  Default - IBRS:  68.2 (SE +/- 0.10, N = 3, min 68.0 / max 68.3)
  retbleed=stuff:  68.7 (SE +/- 0.19, N = 3, min 68.3 / max 68.9)
  mitigations=off: 68.4 (SE +/- 0.15, N = 3, min 68.1 / max 68.6)
  retbleed=off:    68.4 (SE +/- 0.13, N = 3, min 68.1 / max 68.5)

LuaRadio 0.9.1 - Test: FM Deemphasis Filter (MiB/s, more is better):
  Default - IBRS:  356.7 (SE +/- 0.35, N = 3, min 356.0 / max 357.1)
  retbleed=stuff:  357.0 (SE +/- 1.01, N = 3, min 355.0 / max 358.2)
  mitigations=off: 362.6 (SE +/- 1.09, N = 3, min 361.4 / max 364.8)
  retbleed=off:    362.3 (SE +/- 0.71, N = 3, min 360.9 / max 363.2)

LuaRadio 0.9.1 - Test: Five Back to Back FIR Filters (MiB/s, more is better):
  Default - IBRS:  577.3 (SE +/- 2.86, N = 3, min 574.0 / max 583.0)
  retbleed=stuff:  576.2 (SE +/- 4.25, N = 3, min 570.5 / max 584.5)
  mitigations=off: 593.7 (SE +/- 1.66, N = 3, min 591.3 / max 596.9)
  retbleed=off:    593.0 (SE +/- 2.34, N = 3, min 589.1 / max 597.2)

Selenium

Selenium - Benchmark: Jetstream 2 - Browser: Firefox (Score, more is better):
  Default - IBRS:  85.24 (SE +/- 0.74, N = 3, min 84.04 / max 86.58)
  retbleed=stuff:  88.65 (SE +/- 0.35, N = 3, min 88.04 / max 89.27)
  mitigations=off: 90.05 (SE +/- 0.48, N = 3, min 89.21 / max 90.87)
  retbleed=off:    88.46 (SE +/- 1.06, N = 2, min 87.4 / max 89.52)
  1. firefox 108.0

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: FastestDet (ms, fewer is better):
  Default - IBRS:  5.38 (SE +/- 0.04, N = 3, min 5.32 / max 5.45; overall MIN 5.27 / MAX 5.74)
  retbleed=stuff:  5.03 (SE +/- 0.01, N = 3, min 5.0 / max 5.04; overall MIN 4.96 / MAX 5.18)
  mitigations=off: 4.80 (SE +/- 0.00, N = 3, min 4.8 / max 4.81; overall MIN 4.76 / MAX 5.78)
  retbleed=off:    5.05 (SE +/- 0.05, N = 2, min 4.99 / max 5.1; overall MIN 4.95 / MAX 6.04)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, fewer is better):
  Default - IBRS:  422.28 (SE +/- 5.45, N = 3, min 414.05 / max 432.59; overall MIN 413.3 / MAX 450.85)
  retbleed=stuff:  421.62 (SE +/- 3.73, N = 3, min 414.34 / max 426.7; overall MIN 413.1 / MAX 442)
  mitigations=off: 413.10 (SE +/- 0.09, N = 3, min 412.95 / max 413.27; overall MIN 412.4 / MAX 425.33)
  retbleed=off:    422.01 (SE +/- 8.49, N = 3, min 413.48 / max 438.98; overall MIN 412.79 / MAX 552.82)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, fewer is better):
  Default - IBRS:  13.55 (SE +/- 0.02, N = 3, min 13.51 / max 13.57; overall MIN 13.43 / MAX 16.58)
  retbleed=stuff:  11.75 (SE +/- 0.01, N = 3, min 11.73 / max 11.78; overall MIN 11.68 / MAX 14.61)
  mitigations=off: 10.61 (SE +/- 0.01, N = 3, min 10.6 / max 10.62; overall MIN 10.55 / MAX 10.88)
  retbleed=off:    11.70 (SE +/- 0.01, N = 3, min 11.68 / max 11.72; overall MIN 11.6 / MAX 23.72)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better):
  Default - IBRS:  22.95 (SE +/- 0.06, N = 3, min 22.86 / max 23.07; overall MIN 22.77 / MAX 25.96)
  retbleed=stuff:  22.55 (SE +/- 0.02, N = 3, min 22.51 / max 22.57; overall MIN 22.43 / MAX 23.74)
  mitigations=off: 22.35 (SE +/- 0.01, N = 3, min 22.34 / max 22.37; overall MIN 22.23 / MAX 23.47)
  retbleed=off:    22.78 (SE +/- 0.22, N = 3, min 22.56 / max 23.21; overall MIN 22.43 / MAX 33.94)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, fewer is better):
  Default - IBRS:  32.51 (SE +/- 0.09, N = 3, min 32.42 / max 32.69; overall MIN 32.3 / MAX 35.45)
  retbleed=stuff:  32.31 (SE +/- 0.01, N = 3, min 32.29 / max 32.32; overall MIN 32.17 / MAX 40.14)
  mitigations=off: 32.19 (SE +/- 0.01, N = 3, min 32.18 / max 32.2; overall MIN 32.06 / MAX 35.39)
  retbleed=off:    32.42 (SE +/- 0.10, N = 3, min 32.31 / max 32.62; overall MIN 32.2 / MAX 35.38)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, fewer is better):
  Default - IBRS:  31.84 (SE +/- 0.09, N = 3, min 31.73 / max 32.01; overall MIN 31.6 / MAX 45.19)
  retbleed=stuff:  31.50 (SE +/- 0.02, N = 3, min 31.47 / max 31.53; overall MIN 31.34 / MAX 34.28)
  mitigations=off: 31.29 (SE +/- 0.01, N = 3, min 31.28 / max 31.3; overall MIN 31.16 / MAX 32.7)
  retbleed=off:    31.50 (SE +/- 0.02, N = 3, min 31.46 / max 31.54; overall MIN 31.34 / MAX 32.41)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: alexnet (ms, fewer is better):
  Default - IBRS:  11.71 (SE +/- 0.01, N = 3, min 11.7 / max 11.73; overall MIN 11.65 / MAX 11.92)
  retbleed=stuff:  11.66 (SE +/- 0.00, N = 3, min 11.65 / max 11.66; overall MIN 11.59 / MAX 11.8)
  mitigations=off: 11.56 (SE +/- 0.01, N = 3, min 11.55 / max 11.58; overall MIN 11.5 / MAX 12.06)
  retbleed=off:    11.62 (SE +/- 0.00, N = 3, min 11.62 / max 11.62; overall MIN 11.57 / MAX 12.04)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: resnet18 (ms, fewer is better):
  Default - IBRS:  14.37 (SE +/- 0.01, N = 3, min 14.36 / max 14.38; overall MIN 14.25 / MAX 17.15)
  retbleed=stuff:  14.25 (SE +/- 0.01, N = 3, min 14.24 / max 14.26; overall MIN 14.14 / MAX 15.65)
  mitigations=off: 14.12 (SE +/- 0.01, N = 3, min 14.11 / max 14.13; overall MIN 14.02 / MAX 14.36)
  retbleed=off:    14.30 (SE +/- 0.08, N = 3, min 14.2 / max 14.46; overall MIN 14.1 / MAX 29.85)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: vgg16 (ms, fewer is better):
  Default - IBRS:  78.30 (SE +/- 0.02, N = 3, min 78.27 / max 78.34; overall MIN 78.11 / MAX 81.22)
  retbleed=stuff:  78.38 (SE +/- 0.05, N = 3, min 78.31 / max 78.47; overall MIN 78.15 / MAX 90.52)
  mitigations=off: 78.26 (SE +/- 0.03, N = 3, min 78.21 / max 78.31; overall MIN 78.05 / MAX 89.98)
  retbleed=off:    78.35 (SE +/- 0.01, N = 3, min 78.32 / max 78.37; overall MIN 78.11 / MAX 89.13)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: googlenet (ms, fewer is better):
  Default - IBRS:  16.86 (SE +/- 0.01, N = 3, min 16.84 / max 16.88; overall MIN 16.75 / MAX 18.33)
  retbleed=stuff:  16.54 (SE +/- 0.01, N = 3, min 16.52 / max 16.56; overall MIN 16.45 / MAX 16.88)
  mitigations=off: 16.29 (SE +/- 0.00, N = 3, min 16.28 / max 16.29; overall MIN 16.22 / MAX 16.6)
  retbleed=off:    16.50 (SE +/- 0.01, N = 3, min 16.49 / max 16.51; overall MIN 16.42 / MAX 16.67)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: blazeface (ms, fewer is better):
  Default - IBRS:  1.29 (SE +/- 0.00, N = 3, min 1.29 / max 1.3; overall MIN 1.27 / MAX 1.5)
  retbleed=stuff:  1.11 (SE +/- 0.01, N = 3, min 1.1 / max 1.12; overall MIN 1.08 / MAX 1.38)
  mitigations=off: 0.99 (SE +/- 0.00, N = 3, min 0.99 / max 0.99; overall MIN 0.97 / MAX 1.26)
  retbleed=off:    1.08 (SE +/- 0.00, N = 3, min 1.08 / max 1.09; overall MIN 1.06 / MAX 1.36)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better):
  Default - IBRS:  10.18 (SE +/- 0.01, N = 3, min 10.16 / max 10.19; overall MIN 10.11 / MAX 10.58)
  retbleed=stuff:  9.83 (SE +/- 0.02, N = 3, min 9.81 / max 9.87; overall MIN 9.75 / MAX 21.91)
  mitigations=off: 9.60 (SE +/- 0.02, N = 3, min 9.57 / max 9.64; overall MIN 9.52 / MAX 21.53)
  retbleed=off:    9.79 (SE +/- 0.00, N = 3, min 9.79 / max 9.8; overall MIN 9.74 / MAX 9.94)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, fewer is better):
  Default - IBRS:  5.10 (SE +/- 0.01, N = 3, min 5.08 / max 5.11; overall MIN 5.04 / MAX 5.46)
  retbleed=stuff:  4.85 (SE +/- 0.00, N = 3, min 4.85 / max 4.86; overall MIN 4.8 / MAX 5.14)
  mitigations=off: 4.68 (SE +/- 0.00, N = 3, min 4.68 / max 4.69; overall MIN 4.64 / MAX 7.37)
  retbleed=off:    4.84 (SE +/- 0.01, N = 3, min 4.83 / max 4.85; overall MIN 4.79 / MAX 7.68)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better):
  Default - IBRS:  4.21 (SE +/- 0.01, N = 3, min 4.2 / max 4.22; overall MIN 4.16 / MAX 4.34)
  retbleed=stuff:  3.91 (SE +/- 0.00, N = 3, min 3.91 / max 3.92; overall MIN 3.87 / MAX 4.05)
  mitigations=off: 3.74 (SE +/- 0.00, N = 3, min 3.73 / max 3.74; overall MIN 3.69 / MAX 6.95)
  retbleed=off:    3.89 (SE +/- 0.00, N = 3, min 3.89 / max 3.89; overall MIN 3.85 / MAX 5.06)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better):
  Default - IBRS:  5.27 (SE +/- 0.01, N = 3, min 5.25 / max 5.28; overall MIN 5.2 / MAX 5.71)
  retbleed=stuff:  4.99 (SE +/- 0.00, N = 3, min 4.99 / max 5.0; overall MIN 4.95 / MAX 5.22)
  mitigations=off: 4.84 (SE +/- 0.01, N = 3, min 4.82 / max 4.86; overall MIN 4.78 / MAX 16.71)
  retbleed=off:    4.98 (SE +/- 0.00, N = 3, min 4.97 / max 4.98; overall MIN 4.93 / MAX 5.12)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better):
  Default - IBRS:  6.54 (SE +/- 0.00, N = 3, min 6.53 / max 6.54; overall MIN 6.46 / MAX 6.94)
  retbleed=stuff:  6.29 (SE +/- 0.01, N = 3, min 6.28 / max 6.3; overall MIN 6.21 / MAX 6.49)
  mitigations=off: 6.11 (SE +/- 0.01, N = 3, min 6.09 / max 6.12; overall MIN 6.04 / MAX 6.27)
  retbleed=off:    6.26 (SE +/- 0.01, N = 3, min 6.25 / max 6.27; overall MIN 6.19 / MAX 6.66)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20220729 - Target: CPU - Model: mobilenet (ms, fewer is better):
  Default - IBRS:  22.12 (SE +/- 0.09, N = 3, min 21.99 / max 22.3; overall MIN 21.87 / MAX 33.68)
  retbleed=stuff:  21.75 (SE +/- 0.01, N = 3, min 21.74 / max 21.76; overall MIN 21.62 / MAX 23.09)
  mitigations=off: 21.56 (SE +/- 0.02, N = 3, min 21.53 / max 21.59; overall MIN 21.42 / MAX 22.85)
  retbleed=off:    21.89 (SE +/- 0.16, N = 3, min 21.73 / max 22.2; overall MIN 21.62 / MAX 22.57)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s, more is better):
  Default - IBRS:  5.17 (SE +/- 0.00, N = 3, min 5.16 / max 5.17)
  retbleed=stuff:  5.74 (SE +/- 0.00, N = 3, min 5.74 / max 5.75)
  mitigations=off: 6.20 (SE +/- 0.00, N = 3, min 6.2 / max 6.21)
  retbleed=off:    5.94 (SE +/- 0.00, N = 3, min 5.94 / max 5.95)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 25.0 - Time To Compile (Seconds, fewer is better):
  Default - IBRS:  220.87 (SE +/- 0.25, N = 3, min 220.49 / max 221.35)
  retbleed=stuff:  207.35 (SE +/- 0.20, N = 3, min 206.96 / max 207.57)
  mitigations=off: 198.22 (SE +/- 0.18, N = 3, min 197.92 / max 198.54)
  retbleed=off:    204.11 (SE +/- 0.31, N = 3, min 203.56 / max 204.64)

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s, more is better):
  Default - IBRS:  5.45 (SE +/- 0.00, N = 3, min 5.45 / max 5.45)
  retbleed=stuff:  6.03 (SE +/- 0.01, N = 3, min 6.02 / max 6.04)
  mitigations=off: 6.50 (SE +/- 0.00, N = 3, min 6.49 / max 6.5)
  retbleed=off:    6.24 (SE +/- 0.00, N = 3, min 6.24 / max 6.24)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 512 (Render Ratio, more is better):
  Default - IBRS:  0.859759 (SE +/- 0.000732, N = 3, min 0.86 / max 0.86)
  retbleed=stuff:  0.856760 (SE +/- 0.000670, N = 3, min 0.86 / max 0.86)
  mitigations=off: 0.857744 (SE +/- 0.000243, N = 3, min 0.86 / max 0.86)
  retbleed=off:    0.859412 (SE +/- 0.000460, N = 3, min 0.86 / max 0.86)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 10:1 (Ops/sec, more is better):
  Default - IBRS:  1109992.57 (SE +/- 14120.93, N = 3, min 1085947.18 / max 1134843.6)
  retbleed=stuff:  1180753.40 (SE +/- 11938.38, N = 3, min 1168197.88 / max 1204619.33)
  mitigations=off: 1288891.75 (SE +/- 27491.98, N = 15, min 1133167.88 / max 1510614.04)
  retbleed=off:    1220692.33 (SE +/- 20904.07, N = 12, min 1125383.27 / max 1354296.77)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: MoVR - Concurrency: 256 (ops/s, more is better):
  Default - IBRS:  108.5 (SE +/- 0.31, N = 3, min 108.1 / max 109.1)
  retbleed=stuff:  111.9 (SE +/- 0.35, N = 3, min 111.3 / max 112.5)
  mitigations=off: 111.2 (SE +/- 0.15, N = 3, min 111.0 / max 111.5)
  retbleed=off:    105.9 (SE +/- 4.63, N = 12, min 55.4 / max 111.9)

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 90 (MP/s, more is better):
  Default - IBRS:  5.05 (SE +/- 0.00, N = 3, min 5.05 / max 5.06)
  retbleed=stuff:  5.60 (SE +/- 0.00, N = 3, min 5.6 / max 5.6)
  mitigations=off: 6.05 (SE +/- 0.00, N = 3, min 6.04 / max 6.05)
  retbleed=off:    5.79 (SE +/- 0.00, N = 3, min 5.78 / max 5.79)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 1024 (Render Ratio, more is better):
  Default - IBRS:  0.928978 (SE +/- 0.000224, N = 3, min 0.93 / max 0.93)
  retbleed=stuff:  0.926504 (SE +/- 0.000807, N = 3, min 0.92 / max 0.93)
  mitigations=off: 0.927538 (SE +/- 0.000455, N = 3, min 0.93 / max 0.93)
  retbleed=off:    0.929955 (SE +/- 0.000035, N = 3, min 0.93 / max 0.93)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 2 (Seconds, fewer is better):
  Default - IBRS:  171.47 (SE +/- 0.26, N = 3, min 171.11 / max 171.97)
  retbleed=stuff:  171.57 (SE +/- 0.27, N = 3, min 171.1 / max 172.02)
  mitigations=off: 170.09 (SE +/- 0.56, N = 3, min 169.45 / max 171.21)
  retbleed=off:    169.84 (SE +/- 0.22, N = 3, min 169.47 / max 170.23)
  1. (CXX) g++ options: -O3 -fPIC -lm

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 90 (MP/s, more is better):
  Default - IBRS:  5.32 (SE +/- 0.00, N = 3, min 5.32 / max 5.32)
  retbleed=stuff:  5.88 (SE +/- 0.00, N = 3, min 5.87 / max 5.88)
  mitigations=off: 6.31 (SE +/- 0.00, N = 3, min 6.31 / max 6.32)
  retbleed=off:    6.06 (SE +/- 0.00, N = 3, min 6.06 / max 6.07)
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -pthread -latomic

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, more is better):
  Default - IBRS:  5.017 (SE +/- 0.000, N = 3, min 5.02 / max 5.02)
  retbleed=stuff:  5.017 (SE +/- 0.000, N = 3, min 5.02 / max 5.02)
  mitigations=off: 4.996 (SE +/- 0.002, N = 3, min 4.99 / max 5.0)
  retbleed=off:    5.018 (SE +/- 0.001, N = 3, min 5.02 / max 5.02)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (GFInst/s, more is better):
  Default - IBRS:  125.42 (SE +/- 0.01, N = 3, min 125.4 / max 125.43)
  retbleed=stuff:  125.43 (SE +/- 0.01, N = 3, min 125.42 / max 125.44)
  mitigations=off: 124.91 (SE +/- 0.05, N = 3, min 124.84 / max 125.01)
  retbleed=off:    125.45 (SE +/- 0.01, N = 3, min 125.43 / max 125.47)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.

Hackbench - Count: 16 - Type: Process (Seconds, fewer is better):
  Default - IBRS:  217.36 (SE +/- 0.48, N = 3, min 216.66 / max 218.27)
  retbleed=stuff:  166.81 (SE +/- 0.25, N = 3, min 166.38 / max 167.25)
  mitigations=off: 95.57 (SE +/- 0.33, N = 3, min 94.9 / max 95.94)
  retbleed=off:    143.01 (SE +/- 0.47, N = 3, min 142.35 / max 143.93)
  1. (CC) gcc options: -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Atomic (Bogo Ops/s, more is better):
  Default - IBRS:  180741.36 (SE +/- 2552.57, N = 15, min 170731.07 / max 197543.93)
  retbleed=stuff:  182114.03 (SE +/- 2632.45, N = 15, min 171093.95 / max 198032.41)
  mitigations=off: 180081.27 (SE +/- 2604.63, N = 15, min 171761.54 / max 197686.18)
  retbleed=off:    180772.46 (SE +/- 2454.73, N = 15, min 170538.3 / max 196999.32)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test profile encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better):
  Default - IBRS:  1.064 (SE +/- 0.000, N = 3, min 1.06 / max 1.06)
  retbleed=stuff:  1.068 (SE +/- 0.000, N = 3, min 1.07 / max 1.07)
  mitigations=off: 1.070 (SE +/- 0.001, N = 3, min 1.07 / max 1.07)
  retbleed=off:    1.069 (SE +/- 0.001, N = 3, min 1.07 / max 1.07)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
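
Because pgbench, when not rate-limited (its default), runs closed-loop -- each client issues its next transaction as soon as the previous one finishes -- throughput and average latency are roughly two views of the same measurement: average latency is approximately clients / TPS. The Python sketch below is only a back-of-the-envelope aid (not part of the test profile) that checks this relation against the Default - IBRS figures reported below.

def approx_latency_ms(clients, tps):
    # In a closed-loop run, each of the `clients` connections completes about
    # tps/clients transactions per second, so one transaction takes roughly
    # clients/tps seconds.
    return 1000.0 * clients / tps

# Reported Default - IBRS read-write results: 3916 TPS at 50 clients, 4633 TPS at 100 clients.
print(round(approx_latency_ms(50, 3916), 2))   # ~12.77 ms vs. the reported 12.77 ms average latency
print(round(approx_latency_ms(100, 4633), 2))  # ~21.58 ms vs. the reported 21.59 ms average latency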

PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency (ms, fewer is better):
  Default - IBRS:  12.77 (SE +/- 0.05, N = 3, min 12.68 / max 12.83)
  retbleed=stuff:  13.32 (SE +/- 0.05, N = 3, min 13.25 / max 13.43)
  mitigations=off: 13.72 (SE +/- 0.07, N = 3, min 13.58 / max 13.8)
  retbleed=off:    12.25 (SE +/- 0.13, N = 3, min 12.11 / max 12.51)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write (TPS, more is better):
  Default - IBRS:  3916 (SE +/- 14.43, N = 3, min 3897.78 / max 3944.66)
  retbleed=stuff:  3753 (SE +/- 15.39, N = 3, min 3722.9 / max 3773.3)
  mitigations=off: 3645 (SE +/- 18.41, N = 3, min 3622.22 / max 3681.69)
  retbleed=off:    4083 (SE +/- 42.75, N = 3, min 3997.55 / max 4129.11)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency (ms, fewer is better):
  Default - IBRS:  0.050 (SE +/- 0.001, N = 3, min 0.05 / max 0.05)
  retbleed=stuff:  0.050 (SE +/- 0.000, N = 3, min 0.05 / max 0.05)
  mitigations=off: 0.048 (SE +/- 0.000, N = 3, min 0.05 / max 0.05)
  retbleed=off:    0.050 (SE +/- 0.000, N = 3, min 0.05 / max 0.05)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 1 - Mode: Read OnlyDefault - IBRSretbleed=stuffmitigations=offretbleed=off4K8K12K16K20KSE +/- 204.73, N = 3SE +/- 19.47, N = 3SE +/- 106.24, N = 3SE +/- 14.57, N = 3199392014020598201151. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 1 - Mode: Read OnlyDefault - IBRSretbleed=stuffmitigations=offretbleed=off4K8K12K16K20KMin: 19539.98 / Avg: 19938.77 / Max: 20218.61Min: 20106.41 / Avg: 20139.87 / Max: 20173.85Min: 20387.31 / Avg: 20597.82 / Max: 20728.07Min: 20085.89 / Avg: 20114.82 / Max: 20132.411. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average LatencyDefault - IBRSretbleed=stuffmitigations=offretbleed=off510152025SE +/- 0.21, N = 3SE +/- 0.27, N = 3SE +/- 0.36, N = 3SE +/- 0.18, N = 321.5922.3822.5919.851. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average LatencyDefault - IBRSretbleed=stuffmitigations=offretbleed=off510152025Min: 21.23 / Avg: 21.59 / Max: 21.97Min: 21.87 / Avg: 22.38 / Max: 22.77Min: 21.88 / Avg: 22.59 / Max: 23.08Min: 19.49 / Avg: 19.85 / Max: 20.091. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 100 - Mode: Read WriteDefault - IBRSretbleed=stuffmitigations=offretbleed=off11002200330044005500SE +/- 45.81, N = 3SE +/- 53.82, N = 3SE +/- 72.12, N = 3SE +/- 46.69, N = 346334470442950391. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 100 - Mode: Read WriteDefault - IBRSretbleed=stuffmitigations=offretbleed=off9001800270036004500Min: 4551 / Avg: 4632.75 / Max: 4709.46Min: 4392.71 / Avg: 4470.06 / Max: 4573.56Min: 4332.92 / Avg: 4428.61 / Max: 4569.91Min: 4976.76 / Avg: 5039.23 / Max: 5130.581. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average LatencyDefault - IBRSretbleed=stuffmitigations=offretbleed=off0.22550.4510.67650.9021.1275SE +/- 0.004, N = 3SE +/- 0.011, N = 3SE +/- 0.003, N = 3SE +/- 0.003, N = 31.0020.9300.8230.8961. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average LatencyDefault - IBRSretbleed=stuffmitigations=offretbleed=off246810Min: 0.99 / Avg: 1 / Max: 1.01Min: 0.91 / Avg: 0.93 / Max: 0.94Min: 0.82 / Avg: 0.82 / Max: 0.83Min: 0.89 / Avg: 0.9 / Max: 0.91. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 100 - Mode: Read OnlyDefault - IBRSretbleed=stuffmitigations=offretbleed=off30K60K90K120K150KSE +/- 387.80, N = 3SE +/- 1297.33, N = 3SE +/- 464.85, N = 3SE +/- 363.63, N = 3998371075651215441116021. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 100 - Mode: Read OnlyDefault - IBRSretbleed=stuffmitigations=offretbleed=off20K40K60K80K100KMin: 99365.52 / Avg: 99837.2 / Max: 100606.24Min: 106024.73 / Avg: 107565.21 / Max: 110143.6Min: 120997.24 / Avg: 121544.06 / Max: 122468.62Min: 110928.62 / Avg: 111602.35 / Max: 112176.391. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average LatencyDefault - IBRSretbleed=stuffmitigations=offretbleed=off0.11270.22540.33810.45080.5635SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.003, N = 3SE +/- 0.002, N = 30.5010.4610.4140.4501. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average LatencyDefault - IBRSretbleed=stuffmitigations=offretbleed=off246810Min: 0.5 / Avg: 0.5 / Max: 0.5Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.41 / Avg: 0.41 / Max: 0.42Min: 0.45 / Avg: 0.45 / Max: 0.451. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 50 - Mode: Read OnlyDefault - IBRSretbleed=stuffmitigations=offretbleed=off30K60K90K120K150KSE +/- 252.50, N = 3SE +/- 180.13, N = 3SE +/- 778.58, N = 3SE +/- 623.45, N = 3998951083081208181112471. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 50 - Mode: Read OnlyDefault - IBRSretbleed=stuffmitigations=offretbleed=off20K40K60K80K100KMin: 99422.74 / Avg: 99894.85 / Max: 100286.16Min: 108123.07 / Avg: 108307.91 / Max: 108668.14Min: 119469.21 / Avg: 120818.43 / Max: 122166.3Min: 110355.8 / Avg: 111247.1 / Max: 112447.911. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average LatencyDefault - IBRSretbleed=stuffmitigations=offretbleed=off1.2062.4123.6184.8246.03SE +/- 0.009, N = 3SE +/- 0.035, N = 3SE +/- 0.006, N = 3SE +/- 0.030, N = 35.3225.3605.3565.2611. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average LatencyDefault - IBRSretbleed=stuffmitigations=offretbleed=off246810Min: 5.3 / Avg: 5.32 / Max: 5.33Min: 5.31 / Avg: 5.36 / Max: 5.43Min: 5.34 / Avg: 5.36 / Max: 5.37Min: 5.21 / Avg: 5.26 / Max: 5.311. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 1 - Mode: Read WriteDefault - IBRSretbleed=stuffmitigations=offretbleed=off4080120160200SE +/- 0.32, N = 3SE +/- 1.22, N = 3SE +/- 0.22, N = 3SE +/- 1.10, N = 31881871871901. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 15Scaling Factor: 100 - Clients: 1 - Mode: Read WriteDefault - IBRSretbleed=stuffmitigations=offretbleed=off306090120150Min: 187.52 / Avg: 187.92 / Max: 188.55Min: 184.2 / Avg: 186.57 / Max: 188.24Min: 186.4 / Avg: 186.72 / Max: 187.13Min: 188.25 / Avg: 190.08 / Max: 192.041. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, Fewer Is Better)
  Default - IBRS: 442.22 | retbleed=stuff: 438.60 | mitigations=off: 435.08 | retbleed=off: 436.20

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
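For context on the "Bogo Ops/s" unit reported below: each stressor counts loop iterations ("bogus operations") and divides by wall-clock time, so the figure is only meaningful when comparing runs of the same stressor and version. A minimal sketch of that arithmetic (the 30-second run length and operation count are hypothetical, not taken from this result file):

    def bogo_ops_per_sec(bogo_ops: int, seconds: float) -> float:
        # bogus operations counted by the stressor divided by elapsed wall time
        return bogo_ops / seconds

    print(bogo_ops_per_sec(21_600_000, 30.0))  # hypothetical 30 s run -> 720000.0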

Stress-NG 0.14.06 - Test: Semaphores (Bogo Ops/s, More Is Better)
  Default - IBRS: 699438.78 | retbleed=stuff: 721201.26 | mitigations=off: 755001.97 | retbleed=off: 728080.34
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark PageRank (ms, Fewer Is Better)
  Default - IBRS: 4244.4 | retbleed=stuff: 4158.8 | mitigations=off: 4071.4 | retbleed=off: 4017.6

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, Fewer Is Better)
  Default - IBRS: 142.63 | retbleed=stuff: 138.07 | mitigations=off: 134.19 | retbleed=off: 136.24

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
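For reading the memtier_benchmark results, the "Set To Get Ratio" describes the request mix folded into the single aggregate Ops/sec figure. A minimal sketch of how a 1:10 mix splits a reported total (the example value is the mitigations=off result below; the split itself is an assumption about the ratio semantics, not something reported in this result file):

    def split_ops(total_ops_per_sec: float, set_part: int = 1, get_part: int = 10):
        total = set_part + get_part
        return {"set": total_ops_per_sec * set_part / total,
                "get": total_ops_per_sec * get_part / total}

    # ~128K sets/sec and ~1.28M gets/sec for the mitigations=off run below
    print(split_ops(1_403_515.54))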

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
  Default - IBRS: 1231773.00 | retbleed=stuff: 1286001.55 | mitigations=off: 1403515.54 | retbleed=off: 1317556.82
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.
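The "Render Ratio" metric reported below expresses how much faster than real time the project renders. A minimal sketch of that ratio, assuming it is rendered audio duration over wall-clock render time (the 120-second project and 101.5-second render are hypothetical numbers chosen only to land near the ~1.18 results below):

    def render_ratio(rendered_audio_seconds: float, wall_clock_seconds: float) -> float:
        # > 1.0 means the project renders faster than real time
        return rendered_audio_seconds / wall_clock_seconds

    print(round(render_ratio(120.0, 101.5), 3))  # hypothetical -> 1.182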

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 512 (Render Ratio, More Is Better)
  Default - IBRS: 1.181060 | retbleed=stuff: 1.181143 | mitigations=off: 1.183243 | retbleed=off: 1.181881
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Default - IBRS: 402.58 | retbleed=stuff: 383.84 | mitigations=off: 380.45 | retbleed=off: 372.96

PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS, More Is Better)
  Default - IBRS: 248 | retbleed=stuff: 261 | mitigations=off: 263 | retbleed=off: 268

PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Default - IBRS: 188.24 | retbleed=stuff: 179.10 | mitigations=off: 179.92 | retbleed=off: 177.36

PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, More Is Better)
  Default - IBRS: 266 | retbleed=stuff: 279 | mitigations=off: 278 | retbleed=off: 282

PostgreSQL 15 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  Default - IBRS: 0.050 | retbleed=stuff: 0.050 | mitigations=off: 0.046 | retbleed=off: 0.049

PostgreSQL 15 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only (TPS, More Is Better)
  Default - IBRS: 20037 | retbleed=stuff: 20179 | mitigations=off: 21888 | retbleed=off: 20390

PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  Default - IBRS: 0.979 | retbleed=stuff: 0.903 | mitigations=off: 0.817 | retbleed=off: 0.875

PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, More Is Better)
  Default - IBRS: 102099 | retbleed=stuff: 110671 | mitigations=off: 122433 | retbleed=off: 114303

PostgreSQL 15 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Default - IBRS: 5.085 | retbleed=stuff: 4.999 | mitigations=off: 4.961 | retbleed=off: 4.927

PostgreSQL 15 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write (TPS, More Is Better)
  Default - IBRS: 197 | retbleed=stuff: 200 | mitigations=off: 202 | retbleed=off: 203

PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  Default - IBRS: 0.488 | retbleed=stuff: 0.446 | mitigations=off: 0.395 | retbleed=off: 0.433

PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS, More Is Better)
  Default - IBRS: 102394 | retbleed=stuff: 111982 | mitigations=off: 126512 | retbleed=off: 115588

1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 44100 - Buffer Size: 512 (Render Ratio, More Is Better)
  Default - IBRS: 1.215741 | retbleed=stuff: 1.217403 | mitigations=off: 1.207418 | retbleed=off: 1.217322
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, Fewer Is Better)
  Default - IBRS: 10635.9 | retbleed=stuff: 9115.4 | mitigations=off: 8754.1 | retbleed=off: 9170.9

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 1024 (Render Ratio, More Is Better)
  Default - IBRS: 1.241926 | retbleed=stuff: 1.251187 | mitigations=off: 1.251796 | retbleed=off: 1.251774

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 44100 - Buffer Size: 1024 (Render Ratio, More Is Better)
  Default - IBRS: 1.285265 | retbleed=stuff: 1.284939 | mitigations=off: 1.261311 | retbleed=off: 1.286789

1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better)
  Default - IBRS: 2797.2 | retbleed=stuff: 2086.6 | mitigations=off: 2040.7 | retbleed=off: 2069.5

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.
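Since the point of this result file is the relative cost of each Retbleed mitigation, a small helper like the following can summarize any of the "more is better" results below as percent overhead versus the unmitigated kernel. It is a post-processing sketch, not part of the test profile; the example numbers are the KV, 50% Reads - Concurrency: 512 ops/s figures reported below:

    def overhead_pct(result: float, baseline: float) -> float:
        # percent slowdown versus the baseline for a "more is better" metric
        return (baseline / result - 1.0) * 100.0

    baseline = 16312.1  # mitigations=off
    for name, ops in {"Default - IBRS": 15417.9, "retbleed=stuff": 16043.6, "retbleed=off": 16356.3}.items():
        print(name, round(overhead_pct(ops, baseline), 2))  # ~5.8, ~1.67, ~-0.27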

CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 512 (ops/s, More Is Better)
  Default - IBRS: 15417.9 | retbleed=stuff: 16043.6 | mitigations=off: 16312.1 | retbleed=off: 16356.3

CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 512 (ops/s, More Is Better)
  Default - IBRS: 15948.6 | retbleed=stuff: 16550.4 | mitigations=off: 16635.8 | retbleed=off: 16712.8

CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 512 (ops/s, More Is Better)
  Default - IBRS: 13556.1 | retbleed=stuff: 14251.3 | mitigations=off: 14119.5 | retbleed=off: 14248.2

CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 512 (ops/s, More Is Better)
  Default - IBRS: 20258.0 | retbleed=stuff: 20766.5 | mitigations=off: 21346.1 | retbleed=off: 21086.8

CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 256 (ops/s, More Is Better)
  Default - IBRS: 12182.1 | retbleed=stuff: 12888.4 | mitigations=off: 12714.4 | retbleed=off: 12830.4

CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 256 (ops/s, More Is Better)
  Default - IBRS: 16214.1 | retbleed=stuff: 17025.6 | mitigations=off: 17086.4 | retbleed=off: 17196.3

CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 256 (ops/s, More Is Better)
  Default - IBRS: 15334.7 | retbleed=stuff: 16052.6 | mitigations=off: 16279.6 | retbleed=off: 16169.5

CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 256 (ops/s, More Is Better)
  Default - IBRS: 21820.8 | retbleed=stuff: 22382.3 | mitigations=off: 23274.7 | retbleed=off: 22888.9

CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 128 (ops/s, More Is Better)
  Default - IBRS: 12263.7 | retbleed=stuff: 13101.0 | mitigations=off: 12637.0 | retbleed=off: 12798.2

CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 128 (ops/s, More Is Better)
  Default - IBRS: 6837.8 | retbleed=stuff: 7257.6 | mitigations=off: 7068.9 | retbleed=off: 7123.9

CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 128 (ops/s, More Is Better)
  Default - IBRS: 14206.9 | retbleed=stuff: 15153.9 | mitigations=off: 14750.1 | retbleed=off: 15102.3

CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 128 (ops/s, More Is Better)
  Default - IBRS: 21848.0 | retbleed=stuff: 22736.1 | mitigations=off: 23352.3 | retbleed=off: 23189.1

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Default - IBRS: 7409.45 | retbleed=stuff: 7409.07 | mitigations=off: 7405.60 | retbleed=off: 7405.04
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.
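Hackbench spawns groups of sender/receiver tasks exchanging small messages, so its runtime is dominated by scheduler wakeups and context switches, exactly the kernel-entry-heavy path where return-thunk/IBRS style mitigations are most visible. As a rough sketch of the scale involved (assuming hackbench's usual default of 20 senders plus 20 receivers per group, and taking the "Count" values below to be group counts; both are assumptions, not recorded in this result file):

    def approx_task_count(groups: int, tasks_per_group: int = 40) -> int:
        # assumed default: 20 sender + 20 receiver tasks per group
        return groups * tasks_per_group

    print(approx_task_count(4))  # the Count: 4 runs below would involve ~160 tasks under that assumption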

Hackbench - Count: 4 - Type: Thread (Seconds, Fewer Is Better)
  Default - IBRS: 59.17 | retbleed=stuff: 41.52 | mitigations=off: 23.65 | retbleed=off: 33.50

Hackbench - Count: 2 - Type: Thread (Seconds, Fewer Is Better)
  Default - IBRS: 27.98 | retbleed=stuff: 21.56 | mitigations=off: 15.10 | retbleed=off: 18.29

1. (CC) gcc options: -lpthread

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: MoVR - Concurrency: 512 (ops/s, More Is Better)
  Default - IBRS: 108.1 | retbleed=stuff: 112.0 | mitigations=off: 111.6 | retbleed=off: 111.6

CockroachDB 22.2 - Workload: MoVR - Concurrency: 128 (ops/s, More Is Better)
  Default - IBRS: 106.9 | retbleed=stuff: 111.7 | mitigations=off: 111.4 | retbleed=off: 105.8

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.2.0 (Ops/sec, More Is Better)
  Default - IBRS: 111189.18 | retbleed=stuff: 128725.84 | mitigations=off: 195756.15 | retbleed=off: 154975.58
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Contextual Anomaly Detector OSE (Seconds, Fewer Is Better)
  Default - IBRS: 92.52 | retbleed=stuff: 92.43 | mitigations=off: 93.69 | retbleed=off: 92.95

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
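One way to condense the six connection counts below into a single figure, mirroring the geometric-mean option in the result viewer, is sketched here; it is a post-processing sketch rather than part of the test profile, and the two lists are the requests-per-second values reported below for Default - IBRS and mitigations=off:

    from math import prod

    def geomean(values):
        return prod(values) ** (1.0 / len(values))

    ibrs = [21430.71, 23838.53, 21713.79, 22618.36, 23460.51, 23974.25]
    off = [26090.31, 29875.46, 26356.08, 27769.28, 28879.67, 30466.92]
    print(geomean(off) / geomean(ibrs))  # roughly 1.24x higher throughput with mitigations off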

nginx 1.23.2 - Connections: 4000 (Requests Per Second, More Is Better)
  Default - IBRS: 21430.71 | retbleed=stuff: 22968.36 | mitigations=off: 26090.31 | retbleed=off: 24122.05

nginx 1.23.2 - Connections: 100 (Requests Per Second, More Is Better)
  Default - IBRS: 23838.53 | retbleed=stuff: 25553.05 | mitigations=off: 29875.46 | retbleed=off: 27412.83

nginx 1.23.2 - Connections: 1000 (Requests Per Second, More Is Better)
  Default - IBRS: 21713.79 | retbleed=stuff: 22945.29 | mitigations=off: 26356.08 | retbleed=off: 24256.31

nginx 1.23.2 - Connections: 500 (Requests Per Second, More Is Better)
  Default - IBRS: 22618.36 | retbleed=stuff: 24113.33 | mitigations=off: 27769.28 | retbleed=off: 25604.08

nginx 1.23.2 - Connections: 200 (Requests Per Second, More Is Better)
  Default - IBRS: 23460.51 | retbleed=stuff: 24909.11 | mitigations=off: 28879.67 | retbleed=off: 26593.58

nginx 1.23.2 - Connections: 20 (Requests Per Second, More Is Better)
  Default - IBRS: 23974.25 | retbleed=stuff: 25852.17 | mitigations=off: 30466.92 | retbleed=off: 27933.18

1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Default - IBRS: 3967.41 | retbleed=stuff: 3968.47 | mitigations=off: 3963.59 | retbleed=off: 3961.26

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Default - IBRS: 14.14 | retbleed=stuff: 14.06 | mitigations=off: 14.91 | retbleed=off: 13.54

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: In-Memory Database Shootout (ms, Fewer Is Better)
  Default - IBRS: 4432.3 | retbleed=stuff: 4200.6 | mitigations=off: 3839.9 | retbleed=off: 4037.2

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.

Hackbench - Count: 8 - Type: Thread (Seconds, Fewer Is Better)
  Default - IBRS: 108.75 | retbleed=stuff: 81.93 | mitigations=off: 44.48 | retbleed=off: 69.70
  1. (CC) gcc options: -lpthread

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.
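For reference on the decode unit used below, MP/s is megapixels of decoded output per second of wall-clock time. A minimal sketch of that arithmetic (the 3840x2160 image and 0.26 s decode time are hypothetical, chosen only to land near the single-threaded results below):

    def megapixels_per_second(width: int, height: int, decode_seconds: float) -> float:
        return (width * height) / 1e6 / decode_seconds

    print(round(megapixels_per_second(3840, 2160, 0.26), 1))  # hypothetical -> 31.9 MP/s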

JPEG XL Decoding libjxl 0.7 - CPU Threads: 1 (MP/s, More Is Better)
  Default - IBRS: 31.47 | retbleed=stuff: 34.88 | mitigations=off: 37.36 | retbleed=off: 35.84

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.

Hackbench - Count: 8 - Type: Process (Seconds, Fewer Is Better)
  Default - IBRS: 107.09 | retbleed=stuff: 80.43 | mitigations=off: 43.55 | retbleed=off: 68.06
  1. (CC) gcc options: -lpthread

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Fill Sync (Op/s, More Is Better)
  Default - IBRS: 674 | retbleed=stuff: 664 | mitigations=off: 663 | retbleed=off: 701
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 10:1 (Ops/sec, More Is Better)
  Default - IBRS: 1291653.16 | retbleed=stuff: 1319663.84 | mitigations=off: 1341587.32 | retbleed=off: 1329633.77

memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better)
  Default - IBRS: 1360130.93 | retbleed=stuff: 1447187.46 | mitigations=off: 1422407.47 | retbleed=off: 1410074.30

1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
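The latency and FPS figures below are consistent with several inference requests being processed in parallel: throughput is roughly the number of in-flight requests divided by the per-request latency. The sketch assumes four in-flight requests, which is an assumption about the benchmark_app configuration rather than something recorded in this result file:

    def approx_fps(latency_ms: float, inflight_requests: int) -> float:
        return inflight_requests / (latency_ms / 1000.0)

    # Person Detection FP16, mitigations=off: ~5314 ms latency with 4 assumed in-flight requests
    print(round(approx_fps(5313.90, 4), 2))  # ~0.75, near the reported 0.74 FPS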

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Default - IBRS: 5354.18 | retbleed=stuff: 5337.23 | mitigations=off: 5313.90 | retbleed=off: 5325.14

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
  Default - IBRS: 0.73 | retbleed=stuff: 0.73 | mitigations=off: 0.74 | retbleed=off: 0.74

1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

DaCapo Benchmark

This test runs the DaCapo Benchmarks, a suite of workloads written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.
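
As a minimal sketch, a single DaCapo workload such as Tradebeans can be launched from Python as shown below; the jar file name is assumed from the 9.12-MR1 ("bach") release and the iteration count is arbitrary rather than this test profile's setting.

    import subprocess

    # Minimal sketch: run the Tradebeans workload from the DaCapo suite. The harness
    # reports the wall-clock time per iteration in msec, which is what is graphed here.
    # The jar name and "-n" iteration count are assumptions; adjust to your local copy.
    subprocess.run(
        ["java", "-jar", "dacapo-9.12-MR1-bach.jar",
         "tradebeans",      # Java Test: Tradebeans
         "-n", "20"],       # number of iterations (assumed, not the profile's exact value)
        check=True)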

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, Fewer Is Better)
  Default - IBRS: 4195 | retbleed=stuff: 3878 | mitigations=off: 3927 | retbleed=off: 3923

OpenVINO

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better)
  Default - IBRS: 5351.08 | retbleed=stuff: 5339.56 | mitigations=off: 5323.79 | retbleed=off: 5332.52

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better)
  Default - IBRS: 0.73 | retbleed=stuff: 0.73 | mitigations=off: 0.73 | retbleed=off: 0.73

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Default - IBRS: 3306.20 | retbleed=stuff: 3287.93 | mitigations=off: 3278.21 | retbleed=off: 3285.23

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
  Default - IBRS: 1.20 | retbleed=stuff: 1.21 | mitigations=off: 1.21 | retbleed=off: 1.21

Dragonflydb

Dragonfly is an open-source in-memory database server positioned as a "modern Redis replacement," aiming to be the fastest memory store while remaining compatible with the Redis and Memcached protocols. Dragonfly is benchmarked here with memtier_benchmark, a NoSQL Redis/Memcache traffic-generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
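
For reference, the sketch below shows roughly how memtier_benchmark can be pointed at a Redis-protocol server such as Dragonfly (the same tool drives the Redis and Memcached results elsewhere in this file); the host, port, and the 200-client / 5:1 values mirror one configuration below but are otherwise assumptions about the exact invocation.

    import subprocess

    # Minimal sketch: drive a Redis-protocol server (e.g. a local Dragonfly instance
    # on the default port 6379) with memtier_benchmark. The client count and SET:GET
    # ratio mirror the "Clients: 200 - Set To Get Ratio: 5:1" case; note that
    # memtier's --clients is per thread, so the mapping to the profile's "Clients"
    # figure is an assumption.
    subprocess.run(
        ["memtier_benchmark",
         "--server", "127.0.0.1", "--port", "6379",
         "--protocol", "redis",
         "--clients", "200",
         "--ratio", "5:1"],
        check=True)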

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better)
  Default - IBRS: 865694.31 | retbleed=stuff: 898246.15 | mitigations=off: 939884.95 | retbleed=off: 928724.15

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
  Default - IBRS: 937232.81 | retbleed=stuff: 998481.69 | mitigations=off: 1037459.90 | retbleed=off: 1033004.97

Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better)
  Default - IBRS: 893182.92 | retbleed=stuff: 942396.75 | mitigations=off: 1004168.04 | retbleed=off: 987881.72

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic-generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
  Default - IBRS: 1372487.30 | retbleed=stuff: 1440930.09 | mitigations=off: 1485496.09 | retbleed=off: 1499557.64

Dragonflydb

Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
  Default - IBRS: 902480.41 | retbleed=stuff: 1068334.75 | mitigations=off: 1108896.91 | retbleed=off: 1085041.59

Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better)
  Default - IBRS: 844039.71 | retbleed=stuff: 927313.86 | mitigations=off: 979021.51 | retbleed=off: 966283.04

OpenVINO

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Default - IBRS: 1797.13 | retbleed=stuff: 1797.93 | mitigations=off: 1798.30 | retbleed=off: 1798.51

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  Default - IBRS: 2.22 | retbleed=stuff: 2.22 | mitigations=off: 2.22 | retbleed=off: 2.22

EnCodec

EnCodec is a neural audio codec developed by Facebook/Meta for compressing audio files using high-fidelity neural audio compression. EnCodec is designed to provide codec-level compression at 6 kbps using its AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input and measures the time taken to encode it from WAV to the EnCodec format. Learn more via the OpenBenchmarking.org test page.
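
For reference, a minimal sketch of what an EnCodec encode pass looks like with the encodec Python package is shown below; the WAV file name is a placeholder for the JFK speech input, and the exact preprocessing done by the test profile may differ.

    import torch
    import torchaudio
    from encodec import EncodecModel
    from encodec.utils import convert_audio

    # Minimal sketch: encode a WAV file with the 24 kHz EnCodec model at a 6 kbps
    # target bandwidth. "speech.wav" is a placeholder for the benchmark's JFK input.
    model = EncodecModel.encodec_model_24khz()
    model.set_target_bandwidth(6.0)

    wav, sr = torchaudio.load("speech.wav")
    wav = convert_audio(wav, sr, model.sample_rate, model.channels)

    with torch.no_grad():
        # Timing this encode call is essentially what the benchmark measures.
        encoded_frames = model.encode(wav.unsqueeze(0))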

EnCodec 0.1.1 - Target Bandwidth: 24 kbps (Seconds, Fewer Is Better)
  Default - IBRS: 78.02 | retbleed=stuff: 66.19 | mitigations=off: 58.60 | retbleed=off: 62.00

Memcached

Memcached 1.6.16 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
  Default - IBRS: 823970.81 | retbleed=stuff: 885942.92 | mitigations=off: 1198373.90 | retbleed=off: 1057571.99

Memcached 1.6.16 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better)
  Default - IBRS: 833888.75 | retbleed=stuff: 891561.78 | mitigations=off: 1157720.20 | retbleed=off: 1035956.62

Memcached 1.6.16 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
  Default - IBRS: 827558.93 | retbleed=stuff: 883794.56 | mitigations=off: 1159234.39 | retbleed=off: 1044494.53

Memcached 1.6.16 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better)
  Default - IBRS: 939991.36 | retbleed=stuff: 1053867.93 | mitigations=off: 1290274.48 | retbleed=off: 1147643.61

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Read (Op/s, More Is Better)
  Default - IBRS: 17559091 | retbleed=stuff: 17331836 | mitigations=off: 17471957 | retbleed=off: 17542385

OpenVINO

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better)
  Default - IBRS: 302.54 | retbleed=stuff: 303.87 | mitigations=off: 303.09 | retbleed=off: 301.63

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better)
  Default - IBRS: 13.20 | retbleed=stuff: 13.14 | mitigations=off: 13.18 | retbleed=off: 13.24

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation, performing various imaging operations on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
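
As a rough sketch, a comparable per-operation figure can be obtained by timing repeated gm convert invocations from Python, as below; the sharpen radius/sigma argument is an illustrative choice, not this test profile's exact parameter.

    import subprocess
    import time

    # Minimal sketch: estimate "iterations per minute" for one GraphicsMagick
    # operation by timing repeated gm convert runs on a large JPEG. The operator
    # argument ("-sharpen", "0x1.0") and file names are illustrative assumptions.
    def iterations_per_minute(operator_args, runs=10):
        start = time.time()
        for _ in range(runs):
            subprocess.run(["gm", "convert", "input.jpg", *operator_args, "output.jpg"],
                           check=True)
        return runs * 60.0 / (time.time() - start)

    print("Sharpen:", iterations_per_minute(["-sharpen", "0x1.0"]))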

GraphicsMagick 1.3.38 - Operation: Sharpen (Iterations Per Minute, More Is Better)
  Default - IBRS: 39 | retbleed=stuff: 39 | mitigations=off: 39 | retbleed=off: 39

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  Default - IBRS: 173.82 | retbleed=stuff: 173.52 | mitigations=off: 177.26 | retbleed=off: 175.97

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  Default - IBRS: 11.50 | retbleed=stuff: 11.52 | mitigations=off: 11.27 | retbleed=off: 11.36

OpenVINO

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Default - IBRS: 25.32 | retbleed=stuff: 26.06 | mitigations=off: 26.57 | retbleed=off: 25.22

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
  Default - IBRS: 157.85 | retbleed=stuff: 153.37 | mitigations=off: 150.46 | retbleed=off: 158.52

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Default - IBRS: 29.27 | retbleed=stuff: 29.14 | mitigations=off: 29.34 | retbleed=off: 29.14

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  Default - IBRS: 136.56 | retbleed=stuff: 137.20 | mitigations=off: 136.24 | retbleed=off: 137.21

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Default - IBRS: 46.50 | retbleed=stuff: 46.53 | mitigations=off: 46.78 | retbleed=off: 47.63

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better)
  Default - IBRS: 86.02 | retbleed=stuff: 85.95 | mitigations=off: 85.47 | retbleed=off: 83.97

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Default - IBRS: 35.40 | retbleed=stuff: 35.38 | mitigations=off: 35.38 | retbleed=off: 35.41

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better)
  Default - IBRS: 113.01 | retbleed=stuff: 112.97 | mitigations=off: 113.00 | retbleed=off: 112.88

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Default - IBRS: 18.28 | retbleed=stuff: 18.27 | mitigations=off: 18.26 | retbleed=off: 18.29

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  Default - IBRS: 218.59 | retbleed=stuff: 218.69 | mitigations=off: 218.83 | retbleed=off: 218.56

GraphicsMagick

GraphicsMagick 1.3.38 - Operation: Swirl (Iterations Per Minute, More Is Better)
  Default - IBRS: 138 | retbleed=stuff: 139 | mitigations=off: 140 | retbleed=off: 140

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Read While Writing (Op/s, More Is Better)
  Default - IBRS: 721604 | retbleed=stuff: 759004 | mitigations=off: 735461 | retbleed=off: 737668

GraphicsMagick

GraphicsMagick 1.3.38 - Operation: Enhanced (Iterations Per Minute, More Is Better)
  Default - IBRS: 81 | retbleed=stuff: 82 | mitigations=off: 82 | retbleed=off: 82

GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better)
  Default - IBRS: 99 | retbleed=stuff: 102 | mitigations=off: 104 | retbleed=off: 103

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Fill (Op/s, More Is Better)
  Default - IBRS: 376107 | retbleed=stuff: 389753 | mitigations=off: 492900 | retbleed=off: 456879

OpenVINO

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Default - IBRS: 1.32 | retbleed=stuff: 1.30 | mitigations=off: 1.29 | retbleed=off: 1.29

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
  Default - IBRS: 2968.92 | retbleed=stuff: 3031.90 | mitigations=off: 3056.03 | retbleed=off: 3051.91

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better)
  Default - IBRS: 1.48 | retbleed=stuff: 1.46 | mitigations=off: 1.47 | retbleed=off: 1.45

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
  Default - IBRS: 2662.48 | retbleed=stuff: 2703.22 | mitigations=off: 2689.53 | retbleed=off: 2722.58

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Update Random (Op/s, More Is Better)
  Default - IBRS: 198089 | retbleed=stuff: 218106 | mitigations=off: 252250 | retbleed=off: 228194

Facebook RocksDB 7.5.3 - Test: Read Random Write Random (Op/s, More Is Better)
  Default - IBRS: 640561 | retbleed=stuff: 693910 | mitigations=off: 750817 | retbleed=off: 712151

GraphicsMagick

GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute, More Is Better)
  Default - IBRS: 398 | retbleed=stuff: 403 | mitigations=off: 411 | retbleed=off: 410

GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, More Is Better)
  Default - IBRS: 520 | retbleed=stuff: 615 | mitigations=off: 717 | retbleed=off: 654

GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, More Is Better)
  Default - IBRS: 447 | retbleed=stuff: 518 | mitigations=off: 585 | retbleed=off: 551

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  Default - IBRS: 96.51 | retbleed=stuff: 95.84 | mitigations=off: 96.38 | retbleed=off: 95.68

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  Default - IBRS: 10.36 | retbleed=stuff: 10.43 | mitigations=off: 10.37 | retbleed=off: 10.45

PostMark

This is a test of NetApp's PostMark benchmark, designed to simulate the small-file workloads handled by web and mail servers. This test profile sets PostMark to perform 25,000 transactions with 500 files simultaneously, with file sizes ranging between 5 and 512 kilobytes. Learn more via the OpenBenchmarking.org test page.
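
PostMark is driven by a short text configuration; the sketch below pipes one matching the parameters described above into the postmark binary from Python. The command names follow PostMark's interactive interface, but treat the exact spelling and byte values as assumptions.

    import subprocess

    # Minimal sketch: feed PostMark a configuration matching the description above:
    # 500 simultaneous files, 25,000 transactions, file sizes from 5 KB to 512 KB
    # (expressed in bytes). PostMark reads these commands from standard input.
    config = "\n".join([
        "set number 500",
        "set transactions 25000",
        "set size 5120 524288",
        "run",
        "quit",
        "",
    ])
    subprocess.run(["postmark"], input=config, text=True, check=True)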

PostMark 1.51 - Disk Transaction Performance (TPS, More Is Better)
  Default - IBRS: 3275 | retbleed=stuff: 4076 | mitigations=off: 5859 | retbleed=off: 4807

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
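
A sketch of a comparable cwebp invocation for the "Quality 100, Lossless, Highest Compression" setting appears below; mapping "Highest Compression" to -m 6 is an assumption, and the file names are placeholders.

    import subprocess

    # Minimal sketch: lossless WebP encode at quality 100 with the slowest, strongest
    # compression method (-m 6). The -m value is an assumed mapping for the
    # "Highest Compression" setting; input/output names are placeholders.
    subprocess.run(
        ["cwebp", "-q", "100", "-lossless", "-m", "6",
         "sample.jpg", "-o", "sample.webp"],
        check=True)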

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless, Highest Compression (MP/s, More Is Better)
  Default - IBRS: 0.42 | retbleed=stuff: 0.42 | mitigations=off: 0.42 | retbleed=off: 0.42

EnCodec

EnCodec 0.1.1 - Target Bandwidth: 6 kbps (Seconds, Fewer Is Better)
  Default - IBRS: 66.95 | retbleed=stuff: 56.78 | mitigations=off: 50.37 | retbleed=off: 53.18

EnCodec 0.1.1 - Target Bandwidth: 3 kbps (Seconds, Fewer Is Better)
  Default - IBRS: 65.37 | retbleed=stuff: 55.98 | mitigations=off: 49.65 | retbleed=off: 52.56

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  Default - IBRS: 626.17 | retbleed=stuff: 632.38 | mitigations=off: 631.59 | retbleed=off: 631.70

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  Default - IBRS: 3.1939 | retbleed=stuff: 3.1621 | mitigations=off: 3.1666 | retbleed=off: 3.1660

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  Default - IBRS: 625.93 | retbleed=stuff: 624.35 | mitigations=off: 626.49 | retbleed=off: 626.06

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  Default - IBRS: 3.1951 | retbleed=stuff: 3.2032 | mitigations=off: 3.1923 | retbleed=off: 3.1945

EnCodec

EnCodec 0.1.1 - Target Bandwidth: 1.5 kbps (Seconds, Fewer Is Better)
  Default - IBRS: 64.16 | retbleed=stuff: 54.66 | mitigations=off: 48.26 | retbleed=off: 51.01

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  Default - IBRS: 143.09 | retbleed=stuff: 143.74 | mitigations=off: 143.46 | retbleed=off: 143.41

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  Default - IBRS: 13.98 | retbleed=stuff: 13.91 | mitigations=off: 13.94 | retbleed=off: 13.94

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
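
A minimal sketch of invoking the Finagle HTTP Requests workload from the Renaissance jar is shown below; the jar file name, the "finagle-http" benchmark identifier, and the -r repetitions flag are assumptions based on the 0.14 release rather than this test profile's exact command line.

    import subprocess

    # Minimal sketch: run the Finagle HTTP Requests workload from the Renaissance
    # suite. Jar name, benchmark identifier, and repetition count are assumptions;
    # the harness prints a per-repetition time in ms, which is what is graphed here.
    subprocess.run(
        ["java", "-jar", "renaissance-gpl-0.14.0.jar",
         "finagle-http",
         "-r", "10"],
        check=True)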

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, Fewer Is Better)
  Default - IBRS: 4310.6 | retbleed=stuff: 4122.2 | mitigations=off: 3809.8 | retbleed=off: 3966.8

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
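
A sketch of a comparable stress-ng MMAP run appears below; the worker count and duration are placeholder assumptions, not this test profile's exact settings.

    import subprocess

    # Minimal sketch: run the mmap stressor for 60 seconds with 4 workers and print
    # the bogo-ops/s summary at the end. Worker count and duration are assumptions.
    subprocess.run(
        ["stress-ng",
         "--mmap", "4",
         "--timeout", "60s",
         "--metrics-brief"],
        check=True)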

Stress-NG 0.14.06 - Test: MMAP (Bogo Ops/s, More Is Better)
  Default - IBRS: 20.39 | retbleed=stuff: 22.05 | mitigations=off: 26.88 | retbleed=off: 24.51

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  Default - IBRS: 322.63 | retbleed=stuff: 323.13 | mitigations=off: 325.92 | retbleed=off: 326.19

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  Default - IBRS: 3.0994 | retbleed=stuff: 3.0946 | mitigations=off: 3.0682 | retbleed=off: 3.0656

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  Default - IBRS: 324.41 | retbleed=stuff: 328.24 | mitigations=off: 326.93 | retbleed=off: 326.54

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  Default - IBRS: 3.0824 | retbleed=stuff: 3.0464 | mitigations=off: 3.0587 | retbleed=off: 3.0623

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, More Is Better)
  Default - IBRS: 115.9 | retbleed=stuff: 115.8 | mitigations=off: 115.9 | retbleed=off: 115.2

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, More Is Better)
  Default - IBRS: 354.4 | retbleed=stuff: 353.7 | mitigations=off: 353.6 | retbleed=off: 354.7

Selenium

Selenium - Benchmark: PSPDFKit WASM - Browser: Firefox (Score, fewer is better)
  Default - IBRS: 3438; retbleed=stuff: 3319; mitigations=off: 3237; retbleed=off: 3289

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  Default - IBRS: 81.76; retbleed=stuff: 82.02; mitigations=off: 82.00; retbleed=off: 82.04

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better)
  Default - IBRS: 12.23; retbleed=stuff: 12.19; mitigations=off: 12.19; retbleed=off: 12.19

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile measures JPEG XL decode performance to a PNG output file; the pts/jpexl test profile covers encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 - CPU Threads: All (MP/s, more is better)
  Default - IBRS: 103.43; retbleed=stuff: 114.19; mitigations=off: 124.81; retbleed=off: 120.77

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT) effort. Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better)
  Default - IBRS: 12.80; retbleed=stuff: 12.92; mitigations=off: 12.98; retbleed=off: 12.97

srsRAN

srsRAN 22.04.1 - Test: OFDM_Test (Samples / Second, more is better)
  Default - IBRS: 102366667; retbleed=stuff: 102400000; mitigations=off: 102366667; retbleed=off: 102133333

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, covering workloads from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest (ms, fewer is better)
  Default - IBRS: 849.9; retbleed=stuff: 834.5; mitigations=off: 819.7; retbleed=off: 825.6

SVT-AV1

SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Default - IBRS: 3.764; retbleed=stuff: 3.776; mitigations=off: 3.788; retbleed=off: 3.781

srsRAN

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, more is better)
  Default - IBRS: 106.2; retbleed=stuff: 105.4; mitigations=off: 105.7; retbleed=off: 105.4

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, more is better)
  Default - IBRS: 323.0; retbleed=stuff: 321.9; mitigations=off: 322.7; retbleed=off: 320.8

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed for CPU mining of the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
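
For a rough sense of what a kH/s hash-rate figure counts, the hypothetical sketch below times single-threaded SHA-256 hashing with OpenSSL's one-shot SHA256() helper. It is not cpuminer-opt; the dummy 80-byte header, iteration count, and single thread are assumptions for illustration only, so its numbers will not line up with the multi-threaded, algorithm-specific results below.

/* A rough, hypothetical sketch (not cpuminer-opt): estimate a single-threaded
 * SHA-256 hash rate in kH/s using OpenSSL's one-shot SHA256() helper.
 * Real miners hash 80-byte block headers with double SHA-256 across many
 * threads, so absolute numbers will differ.
 * Build (assumed filename): gcc -O2 sha_rate.c -lcrypto */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void) {
    unsigned char header[80] = {0};                /* dummy block header */
    unsigned char digest[SHA256_DIGEST_LENGTH];
    const long iterations = 5000000;               /* arbitrary */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iterations; i++) {
        memcpy(header, &i, sizeof i);              /* vary the input per round */
        SHA256(header, sizeof header, digest);     /* one hash = one "op" */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f kH/s on one thread\n", iterations / secs / 1000.0);
    return 0;
}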

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, more is better)
  Default - IBRS: 19267; retbleed=stuff: 19097; mitigations=off: 19299; retbleed=off: 19100

DaCapo Benchmark

This test runs the DaCapo Benchmarks, a suite written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, fewer is better)
  Default - IBRS: 3763; retbleed=stuff: 3538; mitigations=off: 3630; retbleed=off: 3686

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  Default - IBRS: 72.59; retbleed=stuff: 72.97; mitigations=off: 72.93; retbleed=off: 73.31

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  Default - IBRS: 27.54; retbleed=stuff: 27.40; mitigations=off: 27.41; retbleed=off: 27.27

Sockperf

This is a network socket API performance benchmark developed by Mellanox. This test profile runs both the client and server on the local host for evaluating individual system performance. Learn more via the OpenBenchmarking.org test page.
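
To make the Throughput number easier to interpret, here is a minimal, hypothetical sketch (not sockperf itself) that bounces small UDP datagrams between two sockets over the loopback interface and reports messages per second; the port number, payload size, and measurement window are arbitrary assumptions.

/* A minimal, hypothetical sketch (not sockperf): bounce small UDP datagrams
 * between two sockets over loopback and report messages per second.
 * Port 51717, the 64-byte payload, and the ~5 s window are arbitrary. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(51717);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    if (rx < 0 || tx < 0 || bind(rx, (struct sockaddr *)&addr, sizeof addr) != 0) {
        perror("socket/bind");
        return 1;
    }

    char payload[64] = {0}, buf[64];
    long messages = 0;
    struct timespec t0, now;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    double elapsed = 0.0;
    do {
        /* each send/recv pair is two kernel entries, which is where
         * return-path mitigations such as IBRS add their overhead */
        sendto(tx, payload, sizeof payload, 0,
               (struct sockaddr *)&addr, sizeof addr);
        recv(rx, buf, sizeof buf, 0);
        messages++;
        clock_gettime(CLOCK_MONOTONIC, &now);
        elapsed = (now.tv_sec - t0.tv_sec) + (now.tv_nsec - t0.tv_nsec) / 1e9;
    } while (elapsed < 5.0);

    printf("%.0f messages/sec\n", messages / elapsed);
    close(rx);
    close(tx);
    return 0;
}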

Sockperf 3.7 - Test: Throughput (Messages Per Second, more is better)
  Default - IBRS: 338290; retbleed=stuff: 402526; mitigations=off: 690976; retbleed=off: 533313

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
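
As a loose illustration of how a Bogo Ops/s figure is produced, the hypothetical sketch below runs a malloc/touch/free loop for a fixed wall-clock window and reports operations per second; it only approximates the spirit of the Malloc stressor, and the sizes, counting, and window are assumptions rather than stress-ng's actual implementation.

/* A loose, hypothetical illustration of a "Bogo Ops/s"-style loop, in the
 * spirit of the Malloc stressor: allocate, touch, and free buffers for a
 * fixed window and report operations per second. Sizes, counting, and the
 * 5 s window are arbitrary choices, not stress-ng's implementation. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    long ops = 0;
    struct timespec t0, now;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    double elapsed = 0.0;

    do {
        void *ptrs[64];
        for (int i = 0; i < 64; i++) {
            size_t size = 64 + (size_t)(i * 97 % 4096);   /* mixed sizes */
            ptrs[i] = malloc(size);
            if (ptrs[i])
                ((char *)ptrs[i])[0] = (char)i;           /* touch the allocation */
            ops++;                                        /* count each malloc as one op */
        }
        for (int i = 0; i < 64; i++)
            free(ptrs[i]);
        clock_gettime(CLOCK_MONOTONIC, &now);
        elapsed = (now.tv_sec - t0.tv_sec) + (now.tv_nsec - t0.tv_nsec) / 1e9;
    } while (elapsed < 5.0);

    printf("%.2f ops/s over %.1f s\n", ops / elapsed, elapsed);
    return 0;
}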

Stress-NG 0.14.06 - Test: Memory Copying (Bogo Ops/s, more is better)
  Default - IBRS: 850.74; retbleed=stuff: 834.84; mitigations=off: 834.50; retbleed=off: 843.09

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s, more is better)
  Default - IBRS: 830.67; retbleed=stuff: 822.67; mitigations=off: 827.69; retbleed=off: 820.72

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  Default - IBRS: 41.35; retbleed=stuff: 41.67; mitigations=off: 41.63; retbleed=off: 41.66

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, more is better)
  Default - IBRS: 24.18; retbleed=stuff: 23.99; mitigations=off: 24.02; retbleed=off: 24.00

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  Default - IBRS: 114.35; retbleed=stuff: 114.88; mitigations=off: 114.96; retbleed=off: 115.13

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  Default - IBRS: 17.49; retbleed=stuff: 17.41; mitigations=off: 17.39; retbleed=off: 17.37

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  Default - IBRS: 57.38; retbleed=stuff: 59.62; mitigations=off: 59.66; retbleed=off: 59.81

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better)
  Default - IBRS: 17.42; retbleed=stuff: 16.77; mitigations=off: 16.76; retbleed=off: 16.72

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  Default - IBRS: 54.56; retbleed=stuff: 54.42; mitigations=off: 54.41; retbleed=off: 54.39

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  Default - IBRS: 36.64; retbleed=stuff: 36.74; mitigations=off: 36.74; retbleed=off: 36.76

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  Default - IBRS: 28.72; retbleed=stuff: 28.62; mitigations=off: 28.61; retbleed=off: 28.61

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better)
  Default - IBRS: 34.81; retbleed=stuff: 34.92; mitigations=off: 34.93; retbleed=off: 34.93

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.
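
To help interpret the large spreads below, here is a minimal, hypothetical sketch of the pattern Hackbench stresses: a sender and a receiver process exchanging many small messages, so the elapsed time is dominated by scheduler wakeups and kernel entry/exit, which is where return-path mitigations like IBRS add overhead. The real tool spawns groups of senders and receivers over stream sockets or pipes; the message count and size here are arbitrary.

/* A minimal, hypothetical sketch of the pattern Hackbench stresses (not
 * Hackbench itself): one sender and one receiver process exchange many
 * small messages over a datagram socketpair, so the run time is dominated
 * by scheduler wakeups and kernel entry/exit. Message count/size arbitrary. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define MESSAGES 200000
#define MSG_SIZE 100

int main(void) {
    int sv[2];
    char buf[MSG_SIZE] = {0};
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) != 0) {
        perror("socketpair");
        return 1;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    pid_t pid = fork();
    if (pid == 0) {                          /* receiver: one read per message */
        for (int i = 0; i < MESSAGES; i++)
            read(sv[1], buf, sizeof buf);
        _exit(0);
    }
    for (int i = 0; i < MESSAGES; i++)       /* sender: one write per message */
        write(sv[0], buf, sizeof buf);
    wait(NULL);

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d messages in %.3f seconds\n", MESSAGES, secs);
    return 0;
}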

Hackbench - Count: 4 - Type: Process (Seconds, fewer is better)
  Default - IBRS: 52.23; retbleed=stuff: 39.09; mitigations=off: 21.65; retbleed=off: 33.16

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better)
  Default - IBRS: 22169; retbleed=stuff: 22160; mitigations=off: 22027; retbleed=off: 22114

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, more is better)
  Default - IBRS: 28096; retbleed=stuff: 28507; mitigations=off: 29055; retbleed=off: 28953

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC audio format ten times using the --best preset settings. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, fewer is better)
  Default - IBRS: 21.66; retbleed=stuff: 21.44; mitigations=off: 21.39; retbleed=off: 21.46

Selenium

Selenium - Benchmark: Kraken - Browser: Firefox (ms, fewer is better)
  Default - IBRS: 1250.0; retbleed=stuff: 1209.8; mitigations=off: 1185.1; retbleed=off: 1199.5

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, more is better)
  Default - IBRS: 97.92; retbleed=stuff: 99.40; mitigations=off: 98.24; retbleed=off: 98.31

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1 - Speed: 5 (Frames Per Second, more is better)
  Default - IBRS: 1.802; retbleed=stuff: 1.799; mitigations=off: 1.802; retbleed=off: 1.818

Stress-NG

Stress-NG 0.14.06 - Test: Matrix Math (Bogo Ops/s, more is better)
  Default - IBRS: 18152.96; retbleed=stuff: 17799.14; mitigations=off: 18017.30; retbleed=off: 18087.72

Stress-NG 0.14.06 - Test: x86_64 RdRand (Bogo Ops/s, more is better)
  Default - IBRS: 4462.86; retbleed=stuff: 4492.76; mitigations=off: 194722.99; retbleed=off: 4482.55

Stress-NG 0.14.06 - Test: Context Switching (Bogo Ops/s, more is better)
  Default - IBRS: 924510.28; retbleed=stuff: 1035968.53; mitigations=off: 1563150.01; retbleed=off: 1249349.81

Stress-NG 0.14.06 - Test: NUMA (Bogo Ops/s, more is better)
  Default - IBRS: 63.58; retbleed=stuff: 60.34; mitigations=off: 69.78; retbleed=off: 68.26

Stress-NG 0.14.06 - Test: Malloc (Bogo Ops/s, more is better)
  Default - IBRS: 388441.98; retbleed=stuff: 603448.11; mitigations=off: 1165429.65; retbleed=off: 682715.71

Stress-NG 0.14.06 - Test: Socket Activity (Bogo Ops/s, more is better)
  Default - IBRS: 2586.50; retbleed=stuff: 3422.60; mitigations=off: 4834.07; retbleed=off: 3720.58

Stress-NG 0.14.06 - Test: MEMFD (Bogo Ops/s, more is better)
  Default - IBRS: 112.42; retbleed=stuff: 106.29; mitigations=off: 114.07; retbleed=off: 151.51

Stress-NG 0.14.06 - Test: Glibc C String Functions (Bogo Ops/s, more is better)
  Default - IBRS: 243451.44; retbleed=stuff: 243636.88; mitigations=off: 240956.56; retbleed=off: 243127.33

Stress-NG 0.14.06 - Test: System V Message Passing (Bogo Ops/s, more is better)
  Default - IBRS: 2107608.22; retbleed=stuff: 3337126.74; mitigations=off: 6558503.53; retbleed=off: 3767497.65

Stress-NG 0.14.06 - Test: CPU Stress (Bogo Ops/s, more is better)
  Default - IBRS: 7987.29; retbleed=stuff: 8039.91; mitigations=off: 8179.85; retbleed=off: 7966.42

Stress-NG 0.14.06 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, more is better)
  Default - IBRS: 55.55; retbleed=stuff: 55.86; mitigations=off: 55.82; retbleed=off: 55.73

Stress-NG 0.14.06 - Test: Futex (Bogo Ops/s, more is better)
  Default - IBRS: 830087.70; retbleed=stuff: 971383.12; mitigations=off: 1440190.04; retbleed=off: 1156275.48

Stress-NG 0.14.06 - Test: CPU Cache (Bogo Ops/s, more is better)
  Default - IBRS: 158.46; retbleed=stuff: 152.96; mitigations=off: 157.33; retbleed=off: 153.84

Stress-NG 0.14.06 - Test: SENDFILE (Bogo Ops/s, more is better)
  Default - IBRS: 54816.64; retbleed=stuff: 52701.66; mitigations=off: 76342.18; retbleed=off: 58646.53

Stress-NG 0.14.06 - Test: Crypto (Bogo Ops/s, more is better)
  Default - IBRS: 4815.30; retbleed=stuff: 4817.25; mitigations=off: 4816.67; retbleed=off: 4812.47

Stress-NG 0.14.06 - Test: Vector Math (Bogo Ops/s, more is better)
  Default - IBRS: 14401.85; retbleed=stuff: 14404.73; mitigations=off: 14411.60; retbleed=off: 14384.51

Stress-NG 0.14.06 - Test: Mutex (Bogo Ops/s, more is better)
  Default - IBRS: 745688.97; retbleed=stuff: 1111966.01; mitigations=off: 2131852.03; retbleed=off: 1285481.53

Stress-NG 0.14.06 - Test: Forking (Bogo Ops/s, more is better)
  Default - IBRS: 13111.86; retbleed=stuff: 13778.46; mitigations=off: 17166.22; retbleed=off: 15910.08

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, more is better)
  Default - IBRS: 10550; retbleed=stuff: 10590; mitigations=off: 10577; retbleed=off: 10580

Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, more is better)
  Default - IBRS: 26697; retbleed=stuff: 27213; mitigations=off: 26723; retbleed=off: 26697

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, more is better)
  Default - IBRS: 72.55; retbleed=stuff: 72.71; mitigations=off: 72.64; retbleed=off: 72.61

Cpuminer-Opt 3.20.3 - Algorithm: Garlicoin (kH/s, more is better)
  Default - IBRS: 1140.68; retbleed=stuff: 1153.43; mitigations=off: 1140.86; retbleed=off: 1144.59

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, more is better)
  Default - IBRS: 156.44; retbleed=stuff: 157.79; mitigations=off: 156.26; retbleed=off: 157.18

Cpuminer-Opt 3.20.3 - Algorithm: Myriad-Groestl (kH/s, more is better)
  Default - IBRS: 5371.74; retbleed=stuff: 5400.62; mitigations=off: 5411.58; retbleed=off: 5374.93

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, more is better)
  Default - IBRS: 3840.30; retbleed=stuff: 3840.87; mitigations=off: 3858.75; retbleed=off: 3845.26

Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s, more is better)
  Default - IBRS: 16360; retbleed=stuff: 16347; mitigations=off: 16353; retbleed=off: 16353

Cpuminer-Opt 3.20.3 - Algorithm: Blake-2 S (kH/s, more is better)
  Default - IBRS: 155060; retbleed=stuff: 155900; mitigations=off: 155063; retbleed=off: 153793

Hackbench

Hackbench - Count: 1 - Type: Thread (Seconds, fewer is better)
  Default - IBRS: 13.045; retbleed=stuff: 10.577; mitigations=off: 7.955; retbleed=off: 8.970

Hackbench - Count: 2 - Type: Process (Seconds, fewer is better)
  Default - IBRS: 25.05; retbleed=stuff: 19.16; mitigations=off: 11.36; retbleed=off: 16.13

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.
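
As an illustration of the cycles/hash metric reported below, this hypothetical sketch times a deliberately simple hash (FNV-1a) over 32-byte keys using the x86 time-stamp counter; it is not SMHasher, which measures many real hash functions over a range of key sizes with more careful methodology.

/* A toy, hypothetical illustration of the cycles/hash metric (not SMHasher):
 * time FNV-1a over 32-byte keys using the x86 time-stamp counter. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <x86intrin.h>

static uint64_t fnv1a(const void *data, size_t len) {
    const unsigned char *p = data;
    uint64_t h = 0xcbf29ce484222325ULL;      /* FNV-1a 64-bit offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;               /* FNV-1a 64-bit prime */
    }
    return h;
}

int main(void) {
    char key[32];
    memset(key, 'x', sizeof key);
    const int rounds = 1000000;              /* arbitrary */
    volatile uint64_t sink = 0;              /* keep the hash from being optimized away */

    uint64_t start = __rdtsc();
    for (int i = 0; i < rounds; i++) {
        key[0] = (char)i;                    /* vary the key slightly */
        sink ^= fnv1a(key, sizeof key);
    }
    uint64_t cycles = __rdtsc() - start;

    printf("%.1f cycles/hash over %d 32-byte keys\n",
           (double)cycles / rounds, rounds);
    return (int)(sink & 1);
}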

SMHasher 2022-08-22 - Hash: t1ha2_atonce (cycles/hash, fewer is better)
  Default - IBRS: 34.01; retbleed=stuff: 34.30; mitigations=off: 34.03; retbleed=off: 34.01

SMHasher 2022-08-22 - Hash: t1ha2_atonce (MiB/sec, more is better)
  Default - IBRS: 12505.38; retbleed=stuff: 12566.66; mitigations=off: 12500.30; retbleed=off: 12753.07

libavif avifenc

This is a test of the AOMedia libavif library, timing the encoding of a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6, Lossless (Seconds, fewer is better)
  Default - IBRS: 28.53; retbleed=stuff: 28.09; mitigations=off: 27.57; retbleed=off: 27.85

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
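
The SET result below counts request/response round trips against a local server with 50 parallel connections. Purely to illustrate the round trip being counted, here is a hypothetical single-connection sketch using the hiredis client library; the request count and key pattern are arbitrary, and one synchronous connection will report far lower throughput than the parallel test.

/* An illustrative, hypothetical single-connection SET loop using the hiredis
 * client library. Assumes a local redis-server on the default port 6379.
 * Build (assumed filename): gcc -O2 redis_set.c -lhiredis */
#include <hiredis/hiredis.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) {
        fprintf(stderr, "connection to redis failed\n");
        return 1;
    }

    const int requests = 100000;             /* arbitrary */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < requests; i++) {
        /* one SET = one request/response round trip over loopback */
        redisReply *r = redisCommand(c, "SET key:%d %d", i, i);
        if (r)
            freeReplyObject(r);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.0f SET requests/sec on one connection\n", requests / secs);
    redisFree(c);
    return 0;
}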

Redis 7.0.4 - Test: SET - Parallel Connections: 50 (Requests Per Second, more is better)
  Default - IBRS: 2315119.83; retbleed=stuff: 2289592.50; mitigations=off: 2321265.75; retbleed=off: 2323685.50

SMHasher

SMHasher 2022-08-22 - Hash: FarmHash128 (cycles/hash, fewer is better)
  Default - IBRS: 51.41; retbleed=stuff: 51.41; mitigations=off: 51.41; retbleed=off: 51.41

SMHasher 2022-08-22 - Hash: FarmHash128 (MiB/sec, more is better)
  Default - IBRS: 13059.24; retbleed=stuff: 13103.28; mitigations=off: 13107.91; retbleed=off: 12611.70

Selenium

Selenium, Benchmark: WASM collisionDetection - Browser: Firefox (ms; fewer is better)
  Default - IBRS:    600.1  (SE +/- 0.46, N = 3)
  retbleed=stuff:    596.8  (SE +/- 1.11, N = 3)
  mitigations=off:   597.0  (SE +/- 0.38, N = 3)
  retbleed=off:      604.2  (SE +/- 5.12, N = 3)

firefox 108.0

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1, Speed: 6 (Frames Per Second; more is better)
  Default - IBRS:    2.426  (SE +/- 0.006, N = 3)
  retbleed=stuff:    2.448  (SE +/- 0.006, N = 3)
  mitigations=off:   2.448  (SE +/- 0.006, N = 3)
  retbleed=off:      2.435  (SE +/- 0.005, N = 3)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1, Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s; more is better)
  Default - IBRS:    53.0   (SE +/- 0.06, N = 3)
  retbleed=stuff:    53.1   (SE +/- 0.03, N = 3)
  mitigations=off:   53.0   (SE +/- 0.06, N = 3)
  retbleed=off:      52.9   (SE +/- 0.06, N = 3)

srsRAN 22.04.1, Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s; more is better)
  Default - IBRS:    110.0  (SE +/- 0.12, N = 3)
  retbleed=stuff:    110.4  (SE +/- 0.07, N = 3)
  mitigations=off:   110.4  (SE +/- 0.20, N = 3)
  retbleed=off:      109.5  (SE +/- 0.65, N = 3)

(CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22, Hash: FarmHash32 x86_64 AVX (cycles/hash; fewer is better)
  Default - IBRS:    44.58     (SE +/- 2.16, N = 3)
  retbleed=stuff:    42.44     (SE +/- 0.01, N = 3)
  mitigations=off:   42.43     (SE +/- 0.00, N = 3)
  retbleed=off:      43.30     (SE +/- 0.56, N = 15)

SMHasher 2022-08-22, Hash: FarmHash32 x86_64 AVX (MiB/sec; more is better)
  Default - IBRS:    20898.25  (SE +/- 91.06, N = 3)
  retbleed=stuff:    20869.88  (SE +/- 49.18, N = 3)
  mitigations=off:   20829.82  (SE +/- 68.95, N = 3)
  retbleed=off:      20433.25  (SE +/- 245.46, N = 15)

(CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects -lpthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Tradesoap (msec; fewer is better)
  Default - IBRS:    7114  (SE +/- 50.45, N = 3)
  retbleed=stuff:    7028  (SE +/- 38.70, N = 4)
  mitigations=off:   6728  (SE +/- 49.58, N = 4)
  retbleed=off:      6962  (SE +/- 90.71, N = 4)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s; more is better)
  Default - IBRS:    137.2  (SE +/- 0.47, N = 3)
  retbleed=stuff:    137.1  (SE +/- 0.18, N = 3)
  mitigations=off:   136.7  (SE +/- 0.20, N = 3)
  retbleed=off:      137.2  (SE +/- 0.77, N = 3)

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s; more is better)
  Default - IBRS:    359.6  (SE +/- 0.38, N = 3)
  retbleed=stuff:    359.0  (SE +/- 0.64, N = 3)
  mitigations=off:   359.7  (SE +/- 0.43, N = 3)
  retbleed=off:      359.9  (SE +/- 0.64, N = 3)

(CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4, Test: GET - Parallel Connections: 50 (Requests Per Second; more is better)
  Default - IBRS:    3367879.17  (SE +/- 1752.83, N = 3)
  retbleed=stuff:    3089573.83  (SE +/- 623.36, N = 3)
  mitigations=off:   3171723.17  (SE +/- 5141.59, N = 3)
  retbleed=off:      3096924.80  (SE +/- 1509.63, N = 3)

(CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
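
For a sense of what the Windowed Gaussian detector below is doing, here is a rough C sketch of the idea: keep a sliding window of recent samples, model them as a Gaussian, and score each new value by how far it falls from the window mean in standard deviations. The window length and the score squashing are illustrative choices, not NAB's actual implementation.

    /* Rough windowed-Gaussian anomaly score: z-score of each new sample
     * against a fixed-length sliding window. Illustration only; NAB's real
     * detector differs in detail. Build with -lm. */
    #include <math.h>
    #include <stdio.h>

    #define WINDOW 64                     /* illustrative window length */

    static double buf[WINDOW];
    static int fill = 0, pos = 0;

    static double anomaly_score(double x) {
        double mean = 0.0, var = 0.0;
        if (fill > 1) {
            for (int i = 0; i < fill; i++) mean += buf[i];
            mean /= fill;
            for (int i = 0; i < fill; i++) {
                double d = buf[i] - mean;
                var += d * d;
            }
            var /= (fill - 1);
        }
        double sigma = sqrt(var);
        double z = (sigma > 0.0) ? fabs(x - mean) / sigma : 0.0;

        buf[pos] = x;                     /* push the sample into the window */
        pos = (pos + 1) % WINDOW;
        if (fill < WINDOW) fill++;

        return 1.0 - exp(-z);             /* squash |z| into a 0..1 score */
    }

    int main(void) {
        /* Tiny synthetic stream: flat signal with one spike near the end. */
        double stream[] = { 10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 9.8, 10.2, 25.0, 10.1 };
        for (unsigned i = 0; i < sizeof stream / sizeof stream[0]; i++)
            printf("t=%u value=%.1f score=%.3f\n", i, stream[i], anomaly_score(stream[i]));
        return 0;
    }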

Numenta Anomaly Benchmark 1.1, Detector: Windowed Gaussian (Seconds; fewer is better)
  Default - IBRS:    20.21  (SE +/- 0.03, N = 3)
  retbleed=stuff:    20.23  (SE +/- 0.10, N = 3)
  mitigations=off:   19.97  (SE +/- 0.04, N = 3)
  retbleed=off:      20.16  (SE +/- 0.05, N = 3)

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 6 (Seconds; fewer is better)
  Default - IBRS:    19.94  (SE +/- 0.06, N = 3)
  retbleed=stuff:    19.76  (SE +/- 0.08, N = 3)
  mitigations=off:   19.36  (SE +/- 0.07, N = 3)
  retbleed=off:      19.49  (SE +/- 0.08, N = 3)

(CXX) g++ options: -O3 -fPIC -lm

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s; more is better)
  Default - IBRS:    130.9  (SE +/- 0.17, N = 3)
  retbleed=stuff:    130.9  (SE +/- 0.24, N = 3)
  mitigations=off:   130.8  (SE +/- 0.25, N = 3)
  retbleed=off:      129.4  (SE +/- 0.57, N = 3)

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s; more is better)
  Default - IBRS:    328.6  (SE +/- 0.85, N = 3)
  retbleed=stuff:    329.2  (SE +/- 0.31, N = 3)
  mitigations=off:   328.5  (SE +/- 0.26, N = 3)
  retbleed=off:      327.1  (SE +/- 0.39, N = 3)

(CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -lpthread -ldl -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
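
Under the hood, cwebp drives the libwebp encoding API. A minimal C sketch of a lossy encode through that API follows; the input buffer here is a small synthetic gradient standing in for the decoded 6000x4000 JPEG the test profile actually feeds in, and the 75 quality setting is arbitrary (the lossless result below uses the library's lossless path instead).

    /* Minimal libwebp encode sketch using the simple WebPEncodeRGB() entry
     * point on an in-memory RGB buffer. Build with -lwebp. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <webp/encode.h>

    int main(void) {
        const int width = 512, height = 512, stride = width * 3;
        uint8_t *rgb = malloc((size_t)stride * height);
        if (!rgb) return 1;

        for (int y = 0; y < height; y++)          /* simple gradient test image */
            for (int x = 0; x < width; x++) {
                uint8_t *px = rgb + y * stride + x * 3;
                px[0] = (uint8_t)x;
                px[1] = (uint8_t)y;
                px[2] = (uint8_t)(x ^ y);
            }

        uint8_t *out = NULL;
        size_t out_size = WebPEncodeRGB(rgb, width, height, stride,
                                        75.0f /* quality */, &out);
        printf("encoded %dx%d RGB into %zu bytes of WebP\n", width, height, out_size);

        WebPFree(out);                            /* release the encoder's output */
        free(rgb);
        return 0;
    }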

WebP Image Encode 1.2.4, Encode Settings: Quality 100, Lossless (MP/s; more is better)
  Default - IBRS:    1.30  (SE +/- 0.00, N = 3)
  retbleed=stuff:    1.31  (SE +/- 0.00, N = 3)
  mitigations=off:   1.31  (SE +/- 0.00, N = 3)
  retbleed=off:      1.31  (SE +/- 0.00, N = 3)

(CC) gcc options: -fvisibility=hidden -O2 -lm -pthread

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Sequential Fill (Op/s; more is better)
  Default - IBRS:    509051  (SE +/- 5351.57, N = 3)
  retbleed=stuff:    520166  (SE +/- 6126.72, N = 6)
  mitigations=off:   687932  (SE +/- 8478.51, N = 3)
  retbleed=off:      613155  (SE +/- 1867.35, N = 3)

(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22, Hash: wyhash (cycles/hash; fewer is better)
  Default - IBRS:    23.88     (SE +/- 0.00, N = 3)
  retbleed=stuff:    24.10     (SE +/- 0.22, N = 15)
  mitigations=off:   23.89     (SE +/- 0.01, N = 3)
  retbleed=off:      24.15     (SE +/- 0.26, N = 3)

SMHasher 2022-08-22, Hash: wyhash (MiB/sec; more is better)
  Default - IBRS:    20542.66  (SE +/- 227.26, N = 3)
  retbleed=stuff:    20125.38  (SE +/- 210.32, N = 15)
  mitigations=off:   20425.04  (SE +/- 200.08, N = 3)
  retbleed=off:      20525.16  (SE +/- 53.25, N = 3)

(CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects -lpthread

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4, Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second; more is better)
  Default - IBRS:    36.02  (SE +/- 0.11, N = 3)
  retbleed=stuff:    36.26  (SE +/- 0.10, N = 3)
  mitigations=off:   36.36  (SE +/- 0.09, N = 3)
  retbleed=off:      36.34  (SE +/- 0.12, N = 3)

(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1, Speed: 10 (Frames Per Second; more is better)
  Default - IBRS:    6.073  (SE +/- 0.020, N = 3)
  retbleed=stuff:    6.144  (SE +/- 0.006, N = 3)
  mitigations=off:   6.182  (SE +/- 0.017, N = 3)
  retbleed=off:      6.182  (SE +/- 0.026, N = 3)

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.
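
Hackbench stresses the scheduler by spawning groups of sender and receiver tasks that shuttle small messages over socketpairs; a stripped-down C sketch of that pattern, with just one sender/receiver pair in process mode and an arbitrary message count, looks roughly like the following. The real benchmark runs many such groups concurrently, which is what makes the scheduler (and the return-path mitigations on every kernel entry) dominate the runtime.

    /* Stripped-down hackbench-style scheduler stress: one sender and one
     * receiver connected by a socketpair, passing small messages. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    #define MESSAGES 100000
    #define MSG_SIZE 100

    int main(void) {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) { perror("socketpair"); return 1; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        pid_t pid = fork();
        if (pid == 0) {                           /* child: sender */
            char buf[MSG_SIZE];
            memset(buf, 'x', sizeof buf);
            for (int i = 0; i < MESSAGES; i++)
                if (write(sv[0], buf, sizeof buf) != sizeof buf) _exit(1);
            _exit(0);
        }

        char buf[MSG_SIZE];                       /* parent: receiver */
        for (int i = 0; i < MESSAGES; i++) {
            ssize_t got = 0;
            while (got < MSG_SIZE) {
                ssize_t n = read(sv[1], buf + got, MSG_SIZE - got);
                if (n <= 0) { perror("read"); return 1; }
                got += n;
            }
        }
        waitpid(pid, NULL, 0);

        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("%.3f seconds for %d messages\n",
               (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9, MESSAGES);
        return 0;
    }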

Hackbench, Count: 1 - Type: Process (Seconds; fewer is better)
  Default - IBRS:    12.806  (SE +/- 0.011, N = 3)
  retbleed=stuff:    9.876   (SE +/- 0.012, N = 3)
  mitigations=off:   6.497   (SE +/- 0.260, N = 15)
  retbleed=off:      8.303   (SE +/- 0.004, N = 3)

(CC) gcc options: -lpthread

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22, Hash: MeowHash x86_64 AES-NI (cycles/hash; fewer is better)
  Default - IBRS:    64.51     (SE +/- 0.03, N = 3)
  retbleed=stuff:    64.48     (SE +/- 0.01, N = 3)
  mitigations=off:   64.49     (SE +/- 0.00, N = 3)
  retbleed=off:      64.49     (SE +/- 0.02, N = 3)

SMHasher 2022-08-22, Hash: MeowHash x86_64 AES-NI (MiB/sec; more is better)
  Default - IBRS:    32896.51  (SE +/- 91.91, N = 3)
  retbleed=stuff:    32807.20  (SE +/- 20.33, N = 3)
  mitigations=off:   32954.04  (SE +/- 137.62, N = 3)
  retbleed=off:      32887.58  (SE +/- 31.93, N = 3)

(CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms; fewer is better)
  Default - IBRS:    7.68644  (SE +/- 0.00789, N = 3; MIN: 7.53)
  retbleed=stuff:    7.69614  (SE +/- 0.00849, N = 3; MIN: 7.55)
  mitigations=off:   7.69350  (SE +/- 0.01398, N = 3; MIN: 7.53)
  retbleed=off:      7.68653  (SE +/- 0.01002, N = 3; MIN: 7.53)

(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Sockperf

This is a network socket API performance benchmark developed by Mellanox. This test profile runs both the client and server on the local host for evaluating individual system performance. Learn more via the OpenBenchmarking.org test page.
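
The ping-pong latency test bounces a small payload between a client and a server over loopback. A rough C sketch of that measurement pattern with plain UDP sockets follows; it reports average round-trip time rather than the one-way latency sockperf derives, and the port number, payload size, and round count are arbitrary.

    /* Rough loopback ping-pong latency sketch: fork a trivial UDP echo
     * server, bounce a small payload off it, report the mean round trip. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <time.h>
    #include <unistd.h>

    #define PORT 15001                            /* arbitrary local port */
    #define ROUNDS 100000

    int main(void) {
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(PORT);
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        pid_t server = fork();
        if (server == 0) {                        /* child: UDP echo server */
            int s = socket(AF_INET, SOCK_DGRAM, 0);
            bind(s, (struct sockaddr *)&addr, sizeof addr);
            char buf[64];
            for (;;) {
                struct sockaddr_in peer;
                socklen_t plen = sizeof peer;
                ssize_t n = recvfrom(s, buf, sizeof buf, 0,
                                     (struct sockaddr *)&peer, &plen);
                if (n > 0) sendto(s, buf, (size_t)n, 0, (struct sockaddr *)&peer, plen);
            }
        }

        sleep(1);                                 /* crude wait for the server to bind */
        int c = socket(AF_INET, SOCK_DGRAM, 0);
        char msg[16] = "ping", reply[16];

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ROUNDS; i++) {        /* one send + one receive per round */
            sendto(c, msg, sizeof msg, 0, (struct sockaddr *)&addr, sizeof addr);
            recv(c, reply, sizeof reply, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("avg round trip: %.3f usec over %d rounds\n", usec / ROUNDS, ROUNDS);
        kill(server, SIGTERM);
        return 0;
    }

Each round trip crosses the kernel's socket path four times, which is why this kind of latency test is so sensitive to the return-path mitigations being compared here.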

Sockperf 3.7, Test: Latency Ping Pong (usec; fewer is better)
  Default - IBRS:    4.718  (SE +/- 0.045, N = 5)
  retbleed=stuff:    4.270  (SE +/- 0.051, N = 5)
  mitigations=off:   2.920  (SE +/- 0.004, N = 5)
  retbleed=off:      3.611  (SE +/- 0.009, N = 5)

(CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4, Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second; more is better)
  Default - IBRS:    48.27  (SE +/- 0.10, N = 3)
  retbleed=stuff:    48.76  (SE +/- 0.09, N = 3)
  mitigations=off:   49.63  (SE +/- 0.10, N = 3)
  retbleed=off:      49.05  (SE +/- 0.13, N = 3)

(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22, Hash: Spooky32 (cycles/hash; fewer is better)
  Default - IBRS:    50.03     (SE +/- 0.00, N = 3)
  retbleed=stuff:    50.27     (SE +/- 0.25, N = 3)
  mitigations=off:   50.27     (SE +/- 0.24, N = 3)
  retbleed=off:      50.03     (SE +/- 0.01, N = 3)

SMHasher 2022-08-22, Hash: Spooky32 (MiB/sec; more is better)
  Default - IBRS:    12205.36  (SE +/- 10.25, N = 3)
  retbleed=stuff:    12211.48  (SE +/- 9.32, N = 3)
  mitigations=off:   12101.05  (SE +/- 38.37, N = 3)
  retbleed=off:      12124.07  (SE +/- 105.88, N = 3)

(CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects -lpthread

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel source tree package. Learn more via the OpenBenchmarking.org test page.
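
Conceptually the test just wraps a single tar extraction in a monotonic-clock timer, along the lines of this C sketch; the tarball path is an assumption and should point at wherever linux-5.19.tar.xz actually lives on the system.

    /* Time a .tar.xz extraction: take a monotonic timestamp, run tar,
     * take another. The tarball path and target directory are assumptions. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int rc = system("tar -xf linux-5.19.tar.xz -C /tmp");
        clock_gettime(CLOCK_MONOTONIC, &t1);
        if (rc != 0) { fprintf(stderr, "tar failed\n"); return 1; }
        printf("%.3f seconds to extract\n",
               (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
        return 0;
    }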

Unpacking The Linux Kernel 5.19, linux-5.19.tar.xz (Seconds; fewer is better)
  Default - IBRS:    9.654  (SE +/- 0.017, N = 4)
  retbleed=stuff:    9.641  (SE +/- 0.067, N = 4)
  mitigations=off:   9.331  (SE +/- 0.005, N = 4)
  retbleed=off:      9.472  (SE +/- 0.004, N = 4)

Selenium

Selenium, Benchmark: WASM imageConvolute - Browser: Firefox (ms; fewer is better)
  Default - IBRS:    33.2  (SE +/- 0.10, N = 3)
  retbleed=stuff:    33.1  (SE +/- 0.17, N = 3)
  mitigations=off:   33.2  (SE +/- 0.09, N = 3)
  retbleed=off:      33.4  (SE +/- 0.09, N = 3)

firefox 108.0

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4, Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second; more is better)
  Default - IBRS:    51.12  (SE +/- 0.06, N = 3)
  retbleed=stuff:    51.93  (SE +/- 0.12, N = 3)
  mitigations=off:   52.06  (SE +/- 0.13, N = 3)
  retbleed=off:      52.17  (SE +/- 0.08, N = 3)

(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms; fewer is better)
  Default - IBRS:    5.51332  (SE +/- 0.01808, N = 3; MIN: 5.41)
  retbleed=stuff:    5.48338  (SE +/- 0.00519, N = 3; MIN: 5.41)
  mitigations=off:   5.49579  (SE +/- 0.01647, N = 3; MIN: 5.41)
  retbleed=off:      5.46550  (SE +/- 0.00478, N = 3; MIN: 5.4)

(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6, Build Configuration: Default (Seconds; fewer is better)
  Default - IBRS:    37.54
  retbleed=stuff:    36.71
  mitigations=off:   35.94
  retbleed=off:      36.56

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 10, Lossless (Seconds; fewer is better)
  Default - IBRS:    12.43  (SE +/- 0.00, N = 3)
  retbleed=stuff:    11.85  (SE +/- 0.02, N = 3)
  mitigations=off:   11.62  (SE +/- 0.10, N = 3)
  retbleed=off:      11.74  (SE +/- 0.02, N = 3)

(CXX) g++ options: -O3 -fPIC -lm

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22, Hash: fasthash32 (cycles/hash; fewer is better)
  Default - IBRS:    36.39     (SE +/- 0.00, N = 3)
  retbleed=stuff:    36.39     (SE +/- 0.00, N = 3)
  mitigations=off:   36.80     (SE +/- 0.39, N = 3)
  retbleed=off:      36.39     (SE +/- 0.00, N = 3)

SMHasher 2022-08-22, Hash: fasthash32 (MiB/sec; more is better)
  Default - IBRS:    6164.11   (SE +/- 1.21, N = 3)
  retbleed=stuff:    6164.44   (SE +/- 2.06, N = 3)
  mitigations=off:   6165.80   (SE +/- 1.39, N = 3)
  retbleed=off:      6164.71   (SE +/- 1.00, N = 3)

SMHasher 2022-08-22, Hash: t1ha0_aes_avx2 x86_64 (cycles/hash; fewer is better)
  Default - IBRS:    33.71     (SE +/- 0.00, N = 3)
  retbleed=stuff:    33.70     (SE +/- 0.00, N = 3)
  mitigations=off:   33.69     (SE +/- 0.01, N = 3)
  retbleed=off:      34.52     (SE +/- 0.82, N = 3)

SMHasher 2022-08-22, Hash: t1ha0_aes_avx2 x86_64 (MiB/sec; more is better)
  Default - IBRS:    39760.79  (SE +/- 146.26, N = 3)
  retbleed=stuff:    38438.77  (SE +/- 329.62, N = 3)
  mitigations=off:   39559.81  (SE +/- 285.30, N = 3)
  retbleed=off:      39619.03  (SE +/- 312.62, N = 3)

(CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects -lpthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Jython (msec; fewer is better)
  Default - IBRS:    4452  (SE +/- 50.46, N = 7)
  retbleed=stuff:    4441  (SE +/- 22.52, N = 4)
  mitigations=off:   4365  (SE +/- 51.90, N = 5)
  retbleed=off:      4234  (SE +/- 23.53, N = 4)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms; fewer is better)
  Default - IBRS:    12.37  (SE +/- 0.01, N = 3; MIN: 12.21)
  retbleed=stuff:    12.25  (SE +/- 0.01, N = 3; MIN: 12.02)
  mitigations=off:   12.34  (SE +/- 0.01, N = 3; MIN: 12.15)
  retbleed=off:      12.33  (SE +/- 0.00, N = 3; MIN: 12.17)

(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823, Encode Settings: Default (MP/s; more is better)
  Default - IBRS:    2.68  (SE +/- 0.01, N = 3)
  retbleed=stuff:    2.71  (SE +/- 0.00, N = 3)
  mitigations=off:   2.72  (SE +/- 0.01, N = 3)
  retbleed=off:      2.71  (SE +/- 0.00, N = 3)

(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4, Encode Settings: Quality 100, Highest Compression (MP/s; more is better)
  Default - IBRS:    3.07  (SE +/- 0.01, N = 3)
  retbleed=stuff:    3.09  (SE +/- 0.00, N = 3)
  mitigations=off:   3.09  (SE +/- 0.01, N = 3)
  retbleed=off:      3.08  (SE +/- 0.01, N = 3)

(CC) gcc options: -fvisibility=hidden -O2 -lm -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms; fewer is better)
  Default - IBRS:    20.92  (SE +/- 0.00, N = 3; MIN: 20.86)
  retbleed=stuff:    20.92  (SE +/- 0.00, N = 3; MIN: 20.86)
  mitigations=off:   20.93  (SE +/- 0.02, N = 3; MIN: 20.84)
  retbleed=off:      20.91  (SE +/- 0.00, N = 3; MIN: 20.85)

(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives such as the time to create threads and processes, launch programs, create files, and allocate memory. Learn more via the OpenBenchmarking.org test page.
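
Each OSBench case times one primitive in isolation. For example, the "Create Threads" figure is essentially the average cost of a pthread_create()/pthread_join() cycle, along the lines of this C sketch (the iteration count is arbitrary, and this is an illustration of the measurement style rather than OSBench's own code):

    /* Sketch of an OSBench-style "us Per Event" measurement for thread
     * creation: spawn and join a trivial thread many times and report the
     * average microseconds per create/join cycle. Build with -lpthread. */
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    static void *noop(void *arg) { return arg; }

    int main(void) {
        const int events = 20000;                 /* arbitrary iteration count */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < events; i++) {
            pthread_t t;
            if (pthread_create(&t, NULL, noop, NULL) != 0) {
                perror("pthread_create");
                return 1;
            }
            pthread_join(t, NULL);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("%.2f us per create/join event\n", usec / events);
        return 0;
    }

The other cases follow the same pattern around fork()/exec(), open()/write()/unlink(), and malloc()/free(), which is why they track the cost of kernel entry and exit so closely.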

OSBench, Test: Create Files (us Per Event; fewer is better)
  Default - IBRS:    29.67   (SE +/- 0.02, N = 3)
  retbleed=stuff:    26.50   (SE +/- 0.01, N = 3)
  mitigations=off:   18.15   (SE +/- 0.09, N = 3)
  retbleed=off:      20.92   (SE +/- 0.04, N = 3)

OSBench, Test: Memory Allocations (Ns Per Event; fewer is better)
  Default - IBRS:    107.72  (SE +/- 0.20, N = 3)
  retbleed=stuff:    93.20   (SE +/- 0.01, N = 3)
  mitigations=off:   83.19   (SE +/- 0.16, N = 3)
  retbleed=off:      89.00   (SE +/- 0.02, N = 3)

OSBench, Test: Launch Programs (us Per Event; fewer is better)
  Default - IBRS:    93.32   (SE +/- 0.17, N = 3)
  retbleed=stuff:    85.82   (SE +/- 0.21, N = 3)
  mitigations=off:   67.25   (SE +/- 0.12, N = 3)
  retbleed=off:      74.87   (SE +/- 0.13, N = 3)

OSBench, Test: Create Processes (us Per Event; fewer is better)
  Default - IBRS:    47.05   (SE +/- 0.15, N = 3)
  retbleed=stuff:    43.42   (SE +/- 0.11, N = 3)
  mitigations=off:   34.51   (SE +/- 0.38, N = 3)
  retbleed=off:      38.24   (SE +/- 0.51, N = 3)

OSBench, Test: Create Threads (us Per Event; fewer is better)
  Default - IBRS:    25.28   (SE +/- 0.23, N = 3)
  retbleed=stuff:    23.78   (SE +/- 0.28, N = 3)
  mitigations=off:   18.17   (SE +/- 0.11, N = 3)
  retbleed=off:      19.78   (SE +/- 0.26, N = 3)

(CC) gcc options: -lm

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
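
The measurement idea can be sketched with two processes forced to alternate over a pair of pipes while the parent reads the timestamp counter; ctx_clock itself is more careful about CPU pinning and calibration, so treat this C sketch only as an outline of the approach.

    /* Sketch of a context-switch cost measurement in clock cycles: two
     * processes ping-pong one byte over two pipes, so every round trip
     * forces (at least) two context switches; divide the elapsed TSC
     * cycles accordingly. Upper bound only: pipe I/O cost is included. */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <x86intrin.h>   /* __rdtsc() */

    #define ROUNDS 100000

    int main(void) {
        int ab[2], ba[2];
        char b = 0;
        if (pipe(ab) || pipe(ba)) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                           /* child: echo the byte back */
            for (int i = 0; i < ROUNDS; i++) {
                if (read(ab[0], &b, 1) != 1) _exit(1);
                if (write(ba[1], &b, 1) != 1) _exit(1);
            }
            _exit(0);
        }

        uint64_t start = __rdtsc();
        for (int i = 0; i < ROUNDS; i++) {        /* parent: drive the ping-pong */
            write(ab[1], &b, 1);
            read(ba[0], &b, 1);
        }
        uint64_t cycles = __rdtsc() - start;
        waitpid(pid, NULL, 0);

        printf("~%llu clocks per switch (upper bound)\n",
               (unsigned long long)(cycles / (2ULL * ROUNDS)));
        return 0;
    }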

ctx_clock, Context Switch Time (Clocks; fewer is better)
  Default - IBRS:    2823  (SE +/- 3.33, N = 3)
  retbleed=stuff:    1194
  mitigations=off:   230
  retbleed=off:      1181  (SE +/- 0.67, N = 3)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4, Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second; more is better)
  Default - IBRS:    185.80  (SE +/- 0.31, N = 3)
  retbleed=stuff:    189.07  (SE +/- 0.43, N = 3)
  mitigations=off:   191.99  (SE +/- 0.45, N = 3)
  retbleed=off:      190.25  (SE +/- 0.54, N = 3)

SVT-AV1 1.4, Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second; more is better)
  Default - IBRS:    207.79  (SE +/- 0.29, N = 3)
  retbleed=stuff:    212.12  (SE +/- 0.24, N = 3)
  mitigations=off:   216.17  (SE +/- 0.43, N = 3)
  retbleed=off:      212.54  (SE +/- 0.92, N = 3)

(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms; fewer is better)
  Default - IBRS:    14.44  (SE +/- 0.03, N = 3; MIN: 14.25)
  retbleed=stuff:    14.48  (SE +/- 0.03, N = 3; MIN: 14.33)
  mitigations=off:   14.50  (SE +/- 0.01, N = 3; MIN: 14.35)
  retbleed=off:      14.50  (SE +/- 0.01, N = 3; MIN: 14.36)

(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4, Encode Settings: Quality 100 (MP/s; more is better)
  Default - IBRS:    9.03   (SE +/- 0.01, N = 3)
  retbleed=stuff:    9.09   (SE +/- 0.01, N = 3)
  mitigations=off:   9.10   (SE +/- 0.02, N = 3)
  retbleed=off:      9.12   (SE +/- 0.01, N = 3)

WebP Image Encode 1.2.4, Encode Settings: Default (MP/s; more is better)
  Default - IBRS:    14.16  (SE +/- 0.02, N = 3)
  retbleed=stuff:    14.21  (SE +/- 0.01, N = 3)
  mitigations=off:   14.20  (SE +/- 0.02, N = 3)
  retbleed=off:      14.25  (SE +/- 0.03, N = 3)

(CC) gcc options: -fvisibility=hidden -O2 -lm -pthread

360 Results Shown

Timed Node.js Compilation
OpenRadioss
nekRS
WebP2 Image Encode
MariaDB:
  16
  8
Renaissance
OpenRadioss
Xmrig
SMHasher:
  SHA3-256:
    cycles/hash
    MiB/sec
OpenRadioss
Apache Spark:
  1000000 - 500 - Broadcast Inner Join Test Time
  1000000 - 500 - Inner Join Test Time
  1000000 - 500 - Repartition Test Time
  1000000 - 500 - Group By Test Time
  1000000 - 500 - Calculate Pi Benchmark Using Dataframe
  1000000 - 500 - Calculate Pi Benchmark
  1000000 - 500 - SHA-512 Benchmark Time
WebP2 Image Encode
Xmrig
Apache Spark:
  1000000 - 100 - Broadcast Inner Join Test Time
  1000000 - 100 - Inner Join Test Time
  1000000 - 100 - Repartition Test Time
  1000000 - 100 - Group By Test Time
  1000000 - 100 - Calculate Pi Benchmark Using Dataframe
  1000000 - 100 - Calculate Pi Benchmark
  1000000 - 100 - SHA-512 Benchmark Time
JPEG XL libjxl
Renaissance
JPEG XL libjxl
Numenta Anomaly Benchmark
ClickHouse:
  100M Rows Web Analytics Dataset, Third Run
  100M Rows Web Analytics Dataset, Second Run
  100M Rows Web Analytics Dataset, First Run / Cold Cache
Renaissance
Natron
OpenRadioss
OpenVKL:
  vklBenchmark Scalar
  vklBenchmark ISPC
libavif avifenc
Blender
OpenFOAM:
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Small Mesh Size - Mesh Time
Hackbench
OpenRadioss
Timed Linux Kernel Compilation
Numenta Anomaly Benchmark
Renaissance
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  SqueezeNetV1.0
  resnet-v2-50
  squeezenetv1.1
  mobilenetV3
  nasnet
Renaissance
memtier_benchmark
BRL-CAD
LuaRadio:
  Complex Phase
  Hilbert Transform
  FM Deemphasis Filter
  Five Back to Back FIR Filters
Selenium
NCNN:
  CPU - FastestDet
  CPU - vision_transformer
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
JPEG XL libjxl
Timed Erlang/OTP Compilation
JPEG XL libjxl
Stargate Digital Audio Workstation
memtier_benchmark
CockroachDB
JPEG XL libjxl
Stargate Digital Audio Workstation
libavif avifenc
JPEG XL libjxl
miniBUDE:
  OpenMP - BM1:
    Billion Interactions/s
    GFInst/s
Hackbench
Stress-NG
SVT-AV1
PostgreSQL:
  100 - 50 - Read Write - Average Latency
  100 - 50 - Read Write
  100 - 1 - Read Only - Average Latency
  100 - 1 - Read Only
  100 - 100 - Read Write - Average Latency
  100 - 100 - Read Write
  100 - 100 - Read Only - Average Latency
  100 - 100 - Read Only
  100 - 50 - Read Only - Average Latency
  100 - 50 - Read Only
  100 - 1 - Read Write - Average Latency
  100 - 1 - Read Write
Timed CPython Compilation
Stress-NG
Renaissance
Timed PHP Compilation
memtier_benchmark
Stargate Digital Audio Workstation
PostgreSQL:
  1 - 100 - Read Write - Average Latency
  1 - 100 - Read Write
  1 - 50 - Read Write - Average Latency
  1 - 50 - Read Write
  1 - 1 - Read Only - Average Latency
  1 - 1 - Read Only
  1 - 100 - Read Only - Average Latency
  1 - 100 - Read Only
  1 - 1 - Read Write - Average Latency
  1 - 1 - Read Write
  1 - 50 - Read Only - Average Latency
  1 - 50 - Read Only
Stargate Digital Audio Workstation
Renaissance
Stargate Digital Audio Workstation:
  480000 - 1024
  44100 - 1024
Renaissance
CockroachDB:
  KV, 50% Reads - 512
  KV, 60% Reads - 512
  KV, 10% Reads - 512
  KV, 95% Reads - 512
  KV, 10% Reads - 256
  KV, 60% Reads - 256
  KV, 50% Reads - 256
  KV, 95% Reads - 256
  KV, 50% Reads - 128
  KV, 10% Reads - 128
  KV, 60% Reads - 128
  KV, 95% Reads - 128
oneDNN
Hackbench:
  4 - Thread
  2 - Thread
CockroachDB:
  MoVR - 512
  MoVR - 128
KeyDB
Numenta Anomaly Benchmark
nginx:
  4000
  100
  1000
  500
  200
  20
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
Renaissance
Hackbench
JPEG XL Decoding libjxl
Hackbench
Facebook RocksDB
memtier_benchmark:
  Redis - 100 - 10:1
  Redis - 100 - 1:1
OpenVINO:
  Person Detection FP16 - CPU:
    ms
    FPS
DaCapo Benchmark
OpenVINO:
  Person Detection FP32 - CPU:
    ms
    FPS
  Face Detection FP16 - CPU:
    ms
    FPS
Dragonflydb:
  200 - 5:1
  200 - 1:5
  200 - 1:1
memtier_benchmark
Dragonflydb:
  50 - 1:5
  50 - 5:1
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
EnCodec
Memcached:
  1:10
  1:1
  1:5
  5:1
Facebook RocksDB
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
GraphicsMagick
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
GraphicsMagick
Facebook RocksDB
GraphicsMagick:
  Enhanced
  Noise-Gaussian
Facebook RocksDB
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
Facebook RocksDB:
  Update Rand
  Read Rand Write Rand
GraphicsMagick:
  Resizing
  Rotate
  HWB Color Space
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    ms/batch
    items/sec
PostMark
WebP Image Encode
EnCodec:
  6 kbps
  3 kbps
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
EnCodec
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Renaissance
Stress-NG
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 256-QAM:
    UE Mb/s
    eNb Mb/s
Selenium
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
JPEG XL Decoding libjxl
SVT-AV1
srsRAN
Renaissance
SVT-AV1
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 64-QAM:
    UE Mb/s
    eNb Mb/s
Cpuminer-Opt
DaCapo Benchmark
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Sockperf
Stress-NG
Cpuminer-Opt
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
Hackbench
7-Zip Compression:
  Decompression Rating
  Compression Rating
FLAC Audio Encoding
Selenium
Cpuminer-Opt
rav1e
Stress-NG:
  Matrix Math
  x86_64 RdRand
  Context Switching
  NUMA
  Malloc
  Socket Activity
  MEMFD
  Glibc C String Functions
  System V Message Passing
  CPU Stress
  Glibc Qsort Data Sorting
  Futex
  CPU Cache
  SENDFILE
  Crypto
  Vector Math
  Mutex
  Forking
Cpuminer-Opt:
  LBC, LBRY Credits
  Triple SHA-256, Onecoin
  scrypt
  Garlicoin
  x25x
  Myriad-Groestl
  Deepcoin
  Skeincoin
  Blake-2 S
Hackbench:
  1 - Thread
  2 - Process
SMHasher:
  t1ha2_atonce:
    cycles/hash
    MiB/sec
libavif avifenc
Redis
SMHasher:
  FarmHash128:
    cycles/hash
    MiB/sec
Selenium
rav1e
srsRAN:
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
SMHasher:
  FarmHash32 x86_64 AVX:
    cycles/hash
    MiB/sec
DaCapo Benchmark
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 256-QAM:
    UE Mb/s
    eNb Mb/s
Redis
Numenta Anomaly Benchmark
libavif avifenc
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
WebP Image Encode
Facebook RocksDB
SMHasher:
  wyhash:
    cycles/hash
    MiB/sec
SVT-AV1
rav1e
Hackbench
SMHasher:
  MeowHash x86_64 AES-NI:
    cycles/hash
    MiB/sec
oneDNN
Sockperf
SVT-AV1
SMHasher:
  Spooky32:
    cycles/hash
    MiB/sec
Unpacking The Linux Kernel
Selenium
SVT-AV1
oneDNN
Timed CPython Compilation
libavif avifenc
SMHasher:
  fasthash32:
    cycles/hash
    MiB/sec
  t1ha0_aes_avx2 x86_64:
    cycles/hash
    MiB/sec
DaCapo Benchmark
oneDNN
WebP2 Image Encode
WebP Image Encode
oneDNN
OSBench:
  Create Files
  Memory Allocations
  Launch Programs
  Create Processes
  Create Threads
ctx_clock
SVT-AV1:
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p
oneDNN
WebP Image Encode:
  Quality 100
  Default