3900xt-november

AMD Ryzen 9 3900XT 12-Core testing with a MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2211180-SYST-3900XTN38

Test categories represented in this result file:

- AV1: 2 tests
- C/C++ Compiler Tests: 4 tests
- CPU Massive: 6 tests
- Creator Workloads: 8 tests
- Cryptocurrency Benchmarks, CPU Mining Tests: 2 tests
- Cryptography: 3 tests
- Encoding: 4 tests
- HPC - High Performance Computing: 8 tests
- Imaging: 3 tests
- Machine Learning: 4 tests
- Multi-Core: 6 tests
- OpenMPI Tests: 3 tests
- Python Tests: 6 tests
- Server CPU Tests: 3 tests
- Single-Threaded: 2 tests
- Video Encoding: 3 tests


Run Details

  Identifier   Date               Test Duration
  a            November 17 2022   5 Hours, 24 Minutes
  aa           November 17 2022   5 Hours, 21 Minutes
  b            November 17 2022   13 Hours, 42 Minutes


3900xt-november System Details (Phoronix Test Suite / OpenBenchmarking.org)

  Processor:          AMD Ryzen 9 3900XT 12-Core @ 3.80GHz (12 Cores / 24 Threads)
  Motherboard:        MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS)
  Chipset:            AMD Starship/Matisse
  Memory:             16GB
  Disk:               500GB Seagate FireCuda 520 SSD ZP500GM30002
  Graphics:           AMD Radeon RX 56/64 8GB (1630/945MHz)
  Audio:              AMD Vega 10 HDMI Audio
  Monitor:            ASUS MG28U
  Network:            Realtek Device 2600 + Realtek Killer E3000 2.5GbE + Intel Wi-Fi 6 AX200
  OS:                 Ubuntu 22.04
  Kernel:             5.15.0-47-generic (x86_64)
  Desktop:            GNOME Shell 42.2
  Display Server:     X Server + Wayland
  OpenGL:             4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.42)
  Vulkan:             1.3.204
  Compiler:           GCC 11.2.0
  File-System:        ext4
  Screen Resolution:  3840x2160

System Notes

  - Transparent Huge Pages: madvise
  - GCC configure flags: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
  - CPU Microcode: 0x8701021
  - BAR1 / Visible vRAM Size: 256 MB
  - vBIOS Version: 113-D0500100-102
  - Python 3.10.4
  - Security mitigations: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Mitigation of untrained return thunk, SMT enabled with STIBP protection; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: always-on, RSB filling; srbds: Not affected; tsx_async_abort: Not affected

[Result overview chart: relative performance of runs a, aa, and b on a 100% to 118% scale across spaCy, oneDNN, Stress-NG, nekRS, EnCodec, Xmrig, JPEG XL Decoding libjxl, nginx, QuadRay, JPEG XL libjxl, SMHasher, AOM AV1, FLAC Audio Encoding, Cpuminer-Opt, libavif avifenc, Y-Cruncher, OpenRadioss, miniBUDE, Neural Magic DeepSparse, OpenFOAM, FFmpeg, Libplacebo, and TensorFlow.]
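The percentages on the overview axis can be reproduced from the per-test values. A minimal sketch, assuming each higher-is-better result is normalized against the slowest of the three runs (the wyhash throughputs used here are from this result file):

```python
def normalized_percent(values):
    """Scale each run's score against the slowest run (= 100%),
    matching the 100%-118% axis of the overview chart (assumed
    normalization; higher-is-better metrics only)."""
    base = min(values)
    return [100.0 * v / base for v in values]

# SMHasher wyhash throughput (MiB/sec) for runs aa, a, b
print([round(p, 1) for p in normalized_percent([24070.66, 23915.75, 23571.26])])
```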

[Condensed results table: raw per-test values for runs a, aa, and b across every benchmark in this file; the same data is presented test by test in the sections below.]

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.
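The MiB/sec figures below are bulk-hashing throughput. A rough Python illustration of how such a number is produced; hashlib's SHA3-256 stands in for SMHasher's C implementations here, so absolute numbers will differ widely from the graphs:

```python
import hashlib
import time

def mib_per_sec(nbytes=1 << 24):
    """Hash `nbytes` of data once and report throughput in MiB/sec,
    the unit used by the SMHasher results below."""
    data = bytes(nbytes)
    start = time.perf_counter()
    hashlib.sha3_256(data).digest()
    elapsed = time.perf_counter() - start
    return nbytes / (1 << 20) / elapsed

print(f"{mib_per_sec():.0f} MiB/sec")
```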

SMHasher 2022-08-22, Hash: wyhash (MiB/sec, more is better)
  aa: 24070.66
  a:  23915.75
  b:  23571.26  (SE +/- 338.95, N = 3; Min: 23093.48 / Avg: 23571.26 / Max: 24226.63)
  Built with: g++ -march=native -O3 -flto -fno-fat-lto-objects (all SMHasher tests)

SMHasher 2022-08-22, Hash: SHA3-256 (MiB/sec, more is better)
  b:  151.76  (SE +/- 1.41, N = 7; Min: 147.24 / Avg: 151.76 / Max: 157.61)
  a:  149.08
  aa: 148.14

SMHasher 2022-08-22, Hash: Spooky32 (MiB/sec, more is better)
  aa: 15026.60
  a:  14909.02
  b:  14856.10  (SE +/- 98.93, N = 15; Min: 14365.69 / Avg: 14856.10 / Max: 15403.12)

SMHasher 2022-08-22, Hash: fasthash32 (MiB/sec, more is better)
  aa: 6917.32
  b:  6686.25  (SE +/- 77.96, N = 4; Min: 6583.69 / Avg: 6686.25 / Max: 6918.60)
  a:  6658.92

SMHasher 2022-08-22, Hash: FarmHash128 (MiB/sec, more is better)
  aa: 16614.28
  b:  16361.64  (SE +/- 128.88, N = 15; Min: 15711.76 / Avg: 16361.64 / Max: 17016.29)
  a:  15884.24

SMHasher 2022-08-22, Hash: t1ha2_atonce (MiB/sec, more is better)
  a:  16686.57
  b:  16085.48  (SE +/- 116.42, N = 15; Min: 15518.50 / Avg: 16085.48 / Max: 16734.81)
  aa: 15706.92

SMHasher 2022-08-22, Hash: FarmHash32 x86_64 AVX (MiB/sec, more is better)
  a:  29093.03
  aa: 28582.22
  b:  27788.51  (SE +/- 369.00, N = 3; Min: 27350.75 / Avg: 27788.51 / Max: 28521.94)

SMHasher 2022-08-22, Hash: t1ha0_aes_avx2 x86_64 (MiB/sec, more is better)
  aa: 69896.41
  b:  68310.66  (SE +/- 549.14, N = 15; Min: 64675.04 / Avg: 68310.66 / Max: 71530.28)
  a:  66796.46

SMHasher 2022-08-22, Hash: MeowHash x86_64 AES-NI (MiB/sec, more is better)
  b:  38610.17  (SE +/- 285.07, N = 15; Min: 37361.23 / Avg: 38610.17 / Max: 40418.62)
  aa: 37904.32
  a:  37183.28
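The SE and Min/Avg/Max figures attached to run b are ordinary sample statistics. For wyhash they can be cross-checked: the raw per-trial samples are not published, but taking min and max from the graph and inferring the middle sample from the reported average (the middle value is a reconstruction, not published data), the reported SE of 338.95 falls out:

```python
from math import sqrt
from statistics import mean, stdev

samples = [23093.48, 23393.67, 24226.63]  # min, inferred middle, max
avg = mean(samples)
se = stdev(samples) / sqrt(len(samples))  # SE = sample stdev / sqrt(N)
print(round(avg, 2), round(se, 2))        # -> 23571.26 338.95
```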

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
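Stress-NG's unit is "bogo operations" per second: each stressor counts iterations of its synthetic workload over wall-clock time. A toy sketch of the idea (the real stressors are C kernels doing far more per iteration):

```python
import time

def bogo_ops_per_sec(work, duration=0.2):
    """Run `work` repeatedly for roughly `duration` seconds and
    report iterations ("bogo ops") per second."""
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        work()
        count += 1
    return count / duration

# toy stressor: a small arithmetic kernel
print(f"{bogo_ops_per_sec(lambda: sum(i * i for i in range(1000))):.0f} bogo ops/s")
```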

Stress-NG 0.14.06, Test: MMAP (Bogo Ops/s, more is better)
  a:  294.05
  b:  293.29
  aa: 292.10
  Built with: gcc -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread (all Stress-NG tests)

Stress-NG 0.14.06, Test: NUMA (Bogo Ops/s, more is better)
  aa: 261.26
  a:  260.68
  b:  260.02

Stress-NG 0.14.06, Test: Futex (Bogo Ops/s, more is better)
  aa: 2717409.38
  a:  2600010.05
  b:  2563831.67

Stress-NG 0.14.06, Test: MEMFD (Bogo Ops/s, more is better)
  aa: 773.07
  b:  770.89
  a:  747.31

Stress-NG 0.14.06, Test: Mutex (Bogo Ops/s, more is better)
  aa: 6371966.37
  b:  6344621.56
  a:  6319425.86

Stress-NG 0.14.06, Test: Atomic (Bogo Ops/s, more is better)
  aa: 575708.53
  a:  575564.96
  b:  571674.68

Stress-NG 0.14.06, Test: Crypto (Bogo Ops/s, more is better)
  aa: 22525.14
  a:  22523.79
  b:  22434.69

Stress-NG 0.14.06, Test: Malloc (Bogo Ops/s, more is better)
  aa: 13924963.43
  a:  13779284.62
  b:  13767394.33

Stress-NG 0.14.06, Test: Forking (Bogo Ops/s, more is better)
  a:  40818.92
  aa: 40572.79
  b:  40197.60

Stress-NG 0.14.06, Test: IO_uring (Bogo Ops/s, more is better)
  aa: 8765.88
  a:  5126.99
  b:  4168.54

Stress-NG 0.14.06, Test: SENDFILE (Bogo Ops/s, more is better)
  a:  215308.14
  aa: 215163.99
  b:  205666.12

Stress-NG 0.14.06, Test: CPU Cache (Bogo Ops/s, more is better)
  a:  158.89
  aa: 152.46
  b:  149.79

Stress-NG 0.14.06, Test: CPU Stress (Bogo Ops/s, more is better)
  a:  32900.39
  aa: 31954.78
  b:  30793.53

Stress-NG 0.14.06, Test: Semaphores (Bogo Ops/s, more is better)
  b:  2466594.73
  aa: 2465656.19
  a:  2463565.68

Stress-NG 0.14.06, Test: Matrix Math (Bogo Ops/s, more is better)
  aa: 61203.21
  a:  61078.06
  b:  58814.38

Stress-NG 0.14.06, Test: Vector Math (Bogo Ops/s, more is better)
  a:  90999.65
  aa: 90998.92
  b:  87587.66

Stress-NG 0.14.06, Test: Memory Copying (Bogo Ops/s, more is better)
  a:  3599.42
  aa: 3592.78
  b:  3456.53

Stress-NG 0.14.06, Test: Socket Activity (Bogo Ops/s, more is better)
  b:  8961.20
  a:  8595.20
  aa: 8500.17

Stress-NG 0.14.06, Test: Context Switching (Bogo Ops/s, more is better)
  aa: 4103283.77
  a:  4076744.52
  b:  3796353.99

Stress-NG 0.14.06, Test: Glibc C String Functions (Bogo Ops/s, more is better)
  a:  2048965.25
  aa: 2035501.44
  b:  1531955.88

Stress-NG 0.14.06, Test: Glibc Qsort Data Sorting (Bogo Ops/s, more is better)
  aa: 192.23
  b:  184.77
  a:  180.77

Stress-NG 0.14.06, Test: System V Message Passing (Bogo Ops/s, more is better)
  aa: 7932169.44
  a:  7925418.15
  b:  7654922.05

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC audio format ten times using the --best preset settings. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.4, WAV To FLAC (Seconds, fewer is better)
  b:  17.21  (SE +/- 0.04, N = 5; Min: 17.06 / Avg: 17.21 / Max: 17.32)
  a:  17.26
  aa: 17.33
  Built with: g++ -O3 -fvisibility=hidden -logg -lm

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile measures JPEG XL decode performance to a PNG output file; the pts/jpegxl test covers encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7, CPU Threads: 1 (MP/s, more is better)
  a:  48.69
  b:  48.24  (SE +/- 0.56, N = 3; Min: 47.14 / Avg: 48.24 / Max: 48.91)
  aa: 47.21

JPEG XL Decoding libjxl 0.7, CPU Threads: All (MP/s, more is better)
  aa: 148.61
  b:  148.59  (SE +/- 0.27, N = 3; Min: 148.18 / Avg: 148.59 / Max: 149.10)
  a:  146.63

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
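The MP/s unit used below is megapixels of input processed per second. A sketch of the conversion; the 4000x3000 image and 1.32 s timing are made-up numbers, chosen only to land near the PNG quality-80 results below:

```python
def megapixels_per_sec(width, height, images, seconds):
    """MP/s as used in the libjxl results: total input pixels / time."""
    return width * height * images / seconds / 1e6

# hypothetical: one 4000x3000 (12 MP) image encoded in 1.32 s
print(round(megapixels_per_sec(4000, 3000, 1, 1.32), 2))  # -> 9.09
```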

JPEG XL libjxl 0.7, Input: PNG - Quality: 80 (MP/s, more is better)
  aa: 9.13
  b:  9.12  (SE +/- 0.02, N = 3; Min: 9.09 / Avg: 9.12 / Max: 9.17)
  a:  9.08
  Built with: g++ -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic (all JPEG XL libjxl tests)

JPEG XL libjxl 0.7, Input: PNG - Quality: 90 (MP/s, more is better)
  b:  9.07  (SE +/- 0.02, N = 3; Min: 9.03 / Avg: 9.07 / Max: 9.11)
  aa: 9.03
  a:  9.02

JPEG XL libjxl 0.7, Input: JPEG - Quality: 80 (MP/s, more is better)
  aa: 8.98
  b:  8.84  (SE +/- 0.03, N = 3; Min: 8.78 / Avg: 8.84 / Max: 8.90)
  a:  8.68

JPEG XL libjxl 0.7, Input: JPEG - Quality: 90 (MP/s, more is better)
  b:  8.80  (SE +/- 0.08, N = 3; Min: 8.71 / Avg: 8.80 / Max: 8.95)
  aa: 8.76
  a:  8.62

JPEG XL libjxl 0.7, Input: PNG - Quality: 100 (MP/s, more is better)
  b:  0.69  (SE +/- 0.01, N = 3; Min: 0.68 / Avg: 0.69 / Max: 0.70)
  aa: 0.69
  a:  0.69

JPEG XL libjxl 0.7, Input: JPEG - Quality: 100 (MP/s, more is better)
  b:  0.68  (SE +/- 0.00, N = 3; Min: 0.68 / Avg: 0.68 / Max: 0.69)
  aa: 0.68
  a:  0.68

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure Xmrig's CPU mining performance. Learn more via the OpenBenchmarking.org test page.
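H/s is simply hashes computed per second. A toy measurement loop, hedged heavily: Xmrig's Monero figures come from RandomX, a deliberately memory-hard proof-of-work, so the SHA-256 stand-in below illustrates only the metric, not the workload:

```python
import hashlib
import time

def hashes_per_sec(duration=0.2):
    """Count hashes over wall-clock time and report H/s."""
    count, data = 0, b"\x00" * 80  # block-header-sized input
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        hashlib.sha256(data).digest()
        count += 1
    return count / duration

print(f"{hashes_per_sec():.0f} H/s")
```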

Xmrig 6.18.1, Variant: Monero - Hash Count: 1M (H/s, more is better)
  b:  8175.8  (SE +/- 88.16, N = 3; Min: 8048.0 / Avg: 8175.8 / Max: 8344.9)
  a:  7982.6
  aa: 7945.8
  Built with: g++ -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc (both Xmrig tests)

Xmrig 6.18.1, Variant: Wownero - Hash Count: 1M (H/s, more is better)
  aa: 10640.0
  b:  10577.3  (SE +/- 4.48, N = 3; Min: 10571.2 / Avg: 10577.27 / Max: 10586.0)
  a:  10466.6

miniBUDE

miniBUDE is a mini-application covering the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
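The two metrics reported per input deck below track each other by a near-constant factor of about 25 floating-point instructions per interaction; that factor is an inference from the reported pairs in this file, not a documented constant:

```python
# (GFInst/s, Billion Interactions/s) pairs from the miniBUDE results below
pairs = [(406.71, 16.27), (405.82, 16.23), (404.25, 16.17),  # BM1
         (410.31, 16.41), (409.86, 16.39)]                   # BM2
ratios = [g / i for g, i in pairs]
print([round(r, 2) for r in ratios])  # all close to 25
```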

miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM1 (GFInst/s, more is better)
  aa: 406.71
  a:  405.82
  b:  404.25  (SE +/- 0.95, N = 3; Min: 402.90 / Avg: 404.25 / Max: 406.09)
  Built with: gcc -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm (all miniBUDE tests)

miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, more is better)
  aa: 16.27
  a:  16.23
  b:  16.17  (SE +/- 0.04, N = 3; Min: 16.12 / Avg: 16.17 / Max: 16.24)

miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM2 (GFInst/s, more is better)
  aa: 410.31
  b:  410.14  (SE +/- 0.03, N = 3; Min: 410.08 / Avg: 410.14 / Max: 410.18)
  a:  409.86

miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM2 (Billion Interactions/s, more is better)
  aa: 16.41
  b:  16.41  (SE +/- 0.00, N = 3; Min: 16.40 / Avg: 16.41 / Max: 16.41)
  a:  16.39

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000, developed by the Mathematics and Computer Science (MCS) Division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large-core-count HPC servers and may otherwise be very time consuming. Learn more via the OpenBenchmarking.org test page.

nekRS 22.0 - Input: TurboPipe Periodic (FLOP/s, more is better): 32575200000 / 32544333333 / 31559900000 [SE +/- 20784155.29, N = 3; Min: 32512500000 / Avg: 32544333333.33 / Max: 32583400000]
(CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam (Seconds, fewer is better): 126.51 / 127.37 / 127.42 [SE +/- 0.16, N = 3; Min: 127.18 / Avg: 127.37 / Max: 127.68]
OpenRadioss 2022.10.13 - Model: Cell Phone Drop Test (Seconds, fewer is better): 104.11 / 104.27 / 104.51 [SE +/- 0.05, N = 3; Min: 104.21 / Avg: 104.27 / Max: 104.36]
OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, fewer is better): 285.69 / 286.84 / 287.33 [SE +/- 0.10, N = 3; Min: 287.2 / Avg: 287.33 / Max: 287.52]
OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds, fewer is better): 130.06 / 130.82 / 131.00 [SE +/- 0.35, N = 3; Min: 129.66 / Avg: 130.06 / Max: 130.76]
OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, fewer is better): 631.33 / 633.97 / 650.25 [SE +/- 0.82, N = 3; Min: 629.8 / Avg: 631.33 / Max: 632.59]

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, more is better): 55.63 / 55.49 / 55.47 [SE +/- 0.04, N = 3; Min: 55.44 / Avg: 55.49 / Max: 55.57]
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, more is better): 76.37 / 76.25 / 76.22 [SE +/- 0.07, N = 3; Min: 76.26 / Avg: 76.37 / Max: 76.49]
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, more is better): 93.88 / 93.80 / 93.79 [SE +/- 0.06, N = 3; Min: 93.68 / Avg: 93.79 / Max: 93.86]

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.
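
y-cruncher itself relies on highly optimized large-number arithmetic and series such as the Chudnovsky formula; purely as an illustrative toy (not y-cruncher's actual method), Machin's formula with Python's arbitrary-precision integers can compute Pi to a requested number of digits:

```python
def arctan_inv(x, scale):
    """arctan(1/x) * scale via the Taylor series, using exact integer math."""
    power = scale // x      # (1/x)^1, scaled
    total = power
    n = 1
    while power:
        power //= x * x     # next odd power of 1/x
        n += 2
        term = power // n
        total += term if n % 4 == 1 else -term
    return total

def pi_digits(digits):
    """floor(pi * 10**digits) via Machin's formula:
    pi/4 = 4*arctan(1/5) - arctan(1/239)."""
    scale = 10 ** (digits + 10)   # ten guard digits against truncation error
    pi = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
    return pi // 10 ** 10         # drop the guard digits

print(pi_digits(30))  # 3141592653589793238462643383279
```

This converges in a few dozen iterations for modest digit counts; y-cruncher's timed 500M/1B-digit runs below are in an entirely different performance class.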

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 500M (Seconds, fewer is better): 19.53 / 19.53 / 19.55 [SE +/- 0.02, N = 3; Min: 19.5 / Avg: 19.53 / Max: 19.56]

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, more is better): 109.38 / 109.30 / 109.23 [SE +/- 0.06, N = 3; Min: 109.18 / Avg: 109.3 / Max: 109.36]
TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec, more is better): 110.49 / 110.36 / 110.02 [SE +/- 0.06, N = 3; Min: 109.95 / Avg: 110.02 / Max: 110.13]
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, more is better): 31.89 / 31.85 / 31.83 [SE +/- 0.03, N = 3; Min: 31.79 / Avg: 31.85 / Max: 31.91]
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, more is better): 10.44 / 10.44 / 10.39 [SE +/- 0.02, N = 3; Min: 10.42 / Avg: 10.44 / Max: 10.47]
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, more is better): 30.58 / 30.48 / 30.46 [SE +/- 0.01, N = 3; Min: 30.46 / Avg: 30.48 / Max: 30.49]
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, more is better): 10.33 / 10.29 / 10.29 [SE +/- 0.01, N = 3; Min: 10.32 / Avg: 10.33 / Max: 10.35]
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, more is better): 30.55 / 30.51 / 30.50 [SE +/- 0.01, N = 3; Min: 30.49 / Avg: 30.5 / Max: 30.51]
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, more is better): 10.12 / 10.09 / 10.06

Y-Cruncher

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 1B (Seconds, fewer is better): 41.43 / 41.52 / 41.84 [SE +/- 0.11, N = 3; Min: 41.3 / Avg: 41.43 / Max: 41.64]

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, more is better): 28.67 / 28.65 / 28.64
TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, more is better): 28.79 / 28.71 / 28.33

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 9.7748 / 9.7085 / 9.6761
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 611.84 / 617.74 / 618.14
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better): 7.9890 / 7.9824 / 7.9776
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 125.17 / 125.27 / 125.35
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 33.21 / 33.06 / 32.91
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 180.61 / 181.46 / 182.26
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, more is better): 20.62 / 20.31 / 20.20

EnCodec

EnCodec is a Facebook/Meta-developed AI method for compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using this novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and the performance measurement is the time to encode from WAV to the EnCodec format. Learn more via the OpenBenchmarking.org test page.

EnCodec 0.1.1 - Target Bandwidth: 3 kbps (Seconds, fewer is better): 47.23 / 47.66 / 48.45

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 48.49 / 49.23 / 49.49
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 52.85 / 52.78 / 52.42
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 113.46 / 113.64 / 114.40
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better): 42.43 / 42.41 / 42.36
Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 23.56 / 23.57 / 23.60
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 112.28 / 112.26 / 112.12
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 53.41 / 53.41 / 53.44

EnCodec

EnCodec 0.1.1 - Target Bandwidth: 24 kbps (Seconds, fewer is better): 54.42 / 54.43 / 54.86

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better): 78.41 / 78.39 / 78.31
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 12.75 / 12.75 / 12.76
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 81.37 / 81.31 / 81.04
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 73.71 / 73.77 / 74.01
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, more is better): 59.62 / 59.45 / 59.41
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 16.77 / 16.82 / 16.83
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 41.06 / 40.95 / 40.91
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 146.11 / 146.48 / 146.64
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better): 30.00 / 29.99 / 29.95
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 33.32 / 33.34 / 33.39
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 9.8238 / 9.7230 / 9.7133
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 609.83 / 617.03 / 617.65
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better): 8.0151 / 8.0129 / 7.9966
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 124.76 / 124.79 / 125.05

spaCy

The spaCy library is an open-source solution for advanced neural language processing (NLP). spaCy leverages Python and is a leading NLP solution. This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, more is better): 12217 / 12196 / 11978
spaCy 3.4.1 - Model: en_core_web_trf (tokens/sec, more is better): 833 / 832 / 550

oneDNN

oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): 4.69026 / 4.73086 / 4.74859 [SE +/- 0.01435, N = 3; per-run MIN: 4.49 / 4.49 / 4.55; Min: 4.72 / Avg: 4.73 / Max: 4.76]
oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): 12.03 / 12.03 / 12.06 [SE +/- 0.01, N = 3; per-run MIN: 11.93 / 11.91 / 11.96; Min: 12.02 / Avg: 12.03 / Max: 12.05]
oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): 22.52 / 22.54 / 22.55 [SE +/- 0.02, N = 3; per-run MIN: 21.81 / 22.24 / 22.11; Min: 22.48 / Avg: 22.52 / Max: 22.56]
oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better): 5.47125 / 6.22396 / 8.22740 [SE +/- 0.14562, N = 15; per-run MIN: 4.57 / 4.61 / 7; Min: 5.48 / Avg: 6.22 / Max: 7.35]
oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): 5.25072 / 5.27435 / 5.28794 [SE +/- 0.00411, N = 3; per-run MIN: 5.17 / 5.18 / 5.2; Min: 5.28 / Avg: 5.29 / Max: 5.3]
oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): 4075.57 / 4145.83 / 4147.65 [SE +/- 9.23, N = 3; per-run MIN: 4066.65 / 4135.73 / 4122.17; Min: 4131.73 / Avg: 4147.65 / Max: 4163.72]
oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): 2481.61 / 2484.90 / 2503.12 [SE +/- 1.05, N = 3; per-run MIN: 2470.87 / 2478.59 / 2492.43; Min: 2480.52 / Avg: 2481.61 / Max: 2483.71]
oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better): 1.27895 / 1.74121 / 1.88170 [SE +/- 0.12895, N = 15; per-run MIN: 1.15 / 1.02 / 1.76; Min: 1.08 / Avg: 1.74 / Max: 2.73]
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, fewer is better): 41.82 / 41.85 / 42.08
OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, fewer is better): 360.00 / 361.48 / 363.22
(CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the CPU's hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
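
The kH/s figures below are thousands of hashes computed per second. Purely as a toy analogue of this kind of measurement (not cpuminer-opt's hand-optimized mining kernels), a single-threaded double-SHA-256 hash-rate loop might look like:

```python
import hashlib
import time

def sha256_hashrate(seconds=0.5):
    """Rough single-threaded double-SHA-256 throughput in hashes/sec --
    a toy analogue of a hash-rate benchmark, not cpuminer-opt's kernels."""
    data = b"\x00" * 80              # Bitcoin-style 80-byte block header
    count = 0
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        hashlib.sha256(hashlib.sha256(data).digest()).digest()
        count += 1
    return count / seconds

print(f"{sha256_hashrate() / 1000:.1f} kH/s")
```

A real miner hashes candidate headers with varying nonces across all cores using vectorized kernels, so its numbers are far higher than this sketch's.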

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, more is better): 603.58 / 603.33 / 600.87 [SE +/- 0.50, N = 3; Min: 599.98 / Avg: 600.87 / Max: 601.72]
Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, more is better): 631.74 / 631.15 / 628.47 [SE +/- 0.96, N = 3; Min: 627.08 / Avg: 628.47 / Max: 630.32]
Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, more is better): 226.59 / 226.42 / 225.89 [SE +/- 0.18, N = 3; Min: 225.69 / Avg: 225.89 / Max: 226.25]
Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, more is better): 11720 / 11710 / 11660 [SE +/- 5.77, N = 3; Min: 11650 / Avg: 11660 / Max: 11670]
Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s, more is better): 2730.30 / 2709.03 / 2705.27 [SE +/- 7.56, N = 3; Min: 2719.2 / Avg: 2730.3 / Max: 2744.75]
Cpuminer-Opt 3.20.3 - Algorithm: Blake-2 S (kH/s, more is better): 528060 / 524680 / 519433 [SE +/- 4099.93, N = 10; Min: 485330 / Avg: 519433 / Max: 530580]
Cpuminer-Opt 3.20.3 - Algorithm: Garlicoin (kH/s, more is better): 2112.41 / 1960.71 / 1939.65 [SE +/- 24.17, N = 15; Min: 1974.54 / Avg: 2112.41 / Max: 2248.24]
Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s, more is better): 108620 / 106880 / 105667 [SE +/- 1171.67, N = 3; Min: 104490 / Avg: 105666.67 / Max: 108010]
Cpuminer-Opt 3.20.3 - Algorithm: Myriad-Groestl (kH/s, more is better): 21397 / 21260 / 21130 [SE +/- 153.44, N = 3; Min: 21090 / Avg: 21396.67 / Max: 21560]
Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, more is better): 31820 / 31260 / 31143 [SE +/- 18.56, N = 3; Min: 31120 / Avg: 31143.33 / Max: 31180]
Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, more is better): 102180 / 102170 / 102017 [SE +/- 58.12, N = 3; Min: 101910 / Avg: 102016.67 / Max: 102110]
Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, more is better): 205100 / 204970 / 204950 [SE +/- 26.46, N = 3; Min: 205060 / Avg: 205100 / Max: 205150]
(CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): 0.25 / 0.25 / 0.25 [SE +/- 0.00, N = 3]
AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): 7.45 / 7.41 / 7.41 [SE +/- 0.01, N = 3; Min: 7.4 / Avg: 7.41 / Max: 7.42]
AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): 23.20 / 23.13 / 23.10 [SE +/- 0.06, N = 3; Min: 23.02 / Avg: 23.13 / Max: 23.21]
AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): 12.71 / 12.68 / 12.67 [SE +/- 0.06, N = 3; Min: 12.6 / Avg: 12.71 / Max: 12.81]
AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): 34.91 / 34.60 / 34.46 [SE +/- 0.08, N = 3; Min: 34.48 / Avg: 34.6 / Max: 34.74]
AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): 47.62 / 47.60 / 47.42 [SE +/- 0.12, N = 3; Min: 47.36 / Avg: 47.6 / Max: 47.73]
AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): 48.23 / 47.95 / 47.55 [SE +/- 0.11, N = 3; Min: 48.01 / Avg: 48.23 / Max: 48.4]
AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): 0.73 / 0.73 / 0.73 [SE +/- 0.00, N = 3]
AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): 15.12 / 15.08 / 15.01 [SE +/- 0.08, N = 3; Min: 14.99 / Avg: 15.12 / Max: 15.25]
AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): 46.98 / 45.64 / 45.20 [SE +/- 0.20, N = 3; Min: 45.33 / Avg: 45.64 / Max: 46.02]
AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): 42.17 / 42.07 / 41.57 [SE +/- 0.26, N = 3; Min: 41.57 / Avg: 42.07 / Max: 42.45]
AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): 85.90 / 85.22 / 84.93 [SE +/- 0.06, N = 3; Min: 84.83 / Avg: 84.93 / Max: 85.05]
AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): 108.91 / 108.47 / 107.41 [SE +/- 0.48, N = 3; Min: 106.7 / Avg: 107.41 / Max: 108.33]
AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): 111.49 / 110.26 / 109.52 [SE +/- 0.74, N = 3; Min: 109.19 / Avg: 110.26 / Max: 111.67]
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile uses a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content, with a choice of the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
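To illustrate the kind of transcode these scenarios time, the sketch below assembles x264 and x265 command lines in the spirit of the vbench-style tests. The input file name, preset, and bitrate are illustrative placeholders, not the exact parameters of the test profile, and the commands are echoed rather than executed so the sketch works without the media assets installed.

```shell
#!/bin/sh
# Hypothetical transcode commands in the spirit of the vbench-style scenarios.
# INPUT, the preset, and the bitrate are placeholders, not the profile's settings.
INPUT="bosphorus_1080p.y4m"

# "-f null -" discards the encoded output, so only the encode itself is timed.
CMD_X264="ffmpeg -i $INPUT -c:v libx264 -preset medium -b:v 3000k -f null -"
CMD_X265="ffmpeg -i $INPUT -c:v libx265 -preset medium -b:v 3000k -f null -"

echo "$CMD_X264"
echo "$CMD_X265"
```

The same command shape covers both encoder columns of the results below; only the `-c:v` codec selection changes between the libx264 and libx265 rows.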

FFmpeg 5.1.2 (N = 3). All builds: (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

  Encoder - Scenario           Metric    Order   Results (best first)        SE      Detail Min / Avg / Max
  libx264 - Live               Seconds   aaab    24.63 / 24.66 / 24.67       0.05    24.58 / 24.67 / 24.75
  libx264 - Live               FPS       aaab    205.03 / 204.75 / 204.73    0.39    204.08 / 204.73 / 205.41
  libx265 - Live               Seconds   aaab    66.89 / 66.99 / 67.26       0.65    65.97 / 67.26 / 67.99
  libx265 - Live               FPS       aaab    75.50 / 75.38 / 75.10       0.73    74.28 / 75.10 / 76.55
  libx264 - Upload             Seconds   baaa    199.35 / 199.73 / 199.81    0.94    197.77 / 199.35 / 201.02
  libx264 - Upload             FPS       baaa    12.67 / 12.64 / 12.64       0.06    12.56 / 12.67 / 12.77
  libx265 - Upload             Seconds   baaa    164.26 / 164.72 / 164.80    0.31    163.66 / 164.26 / 164.68
  libx265 - Upload             FPS       baaa    15.37 / 15.33 / 15.32       0.03    15.33 / 15.37 / 15.43
  libx264 - Platform           Seconds   aaab    154.57 / 155.11 / 155.81    0.25    155.43 / 155.81 / 156.28
  libx265 - Platform           Seconds   baaa    240.09 / 240.52 / 241.93    0.20    239.74 / 240.09 / 240.43
  libx265 - Platform           FPS       baaa    31.55 / 31.49 / 31.31       0.03    31.51 / 31.55 / 31.60
  libx264 - Video On Demand    Seconds   aaab    155.06 / 155.14 / 155.31    0.30    154.78 / 155.31 / 155.82
  libx264 - Video On Demand    FPS       aaab    48.85 / 48.83 / 48.77       0.09    48.61 / 48.77 / 48.94
  libx265 - Video On Demand    Seconds   aaba    239.59 / 240.03 / 240.84    0.53    239.17 / 240.03 / 241.01
  libx265 - Video On Demand    FPS       aaba    31.62 / 31.56 / 31.45       0.07    31.43 / 31.56 / 31.67

  Seconds: fewer is better; FPS: more is better. Order = run identifiers (a, aa, b) concatenated best-first, as given in the source; the Detail column reproduces the per-run Min/Avg/Max shown on the source page.

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.
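For reference, typical avifenc invocations at the speed settings benchmarked below look like the sketch that follows. The file names are placeholders, and the commands are echoed rather than executed, since running them would require the avifenc binary and a source JPEG.

```shell
#!/bin/sh
# Hypothetical avifenc command lines matching the benchmarked settings;
# input.jpg and output.avif are placeholder file names.

# --speed ranges 0 (slowest, best compression) to 10 (fastest);
# --lossless requests lossless encoding, as in the "Lossless" test variants.
CMD_SLOW="avifenc --speed 0 input.jpg output.avif"
CMD_FAST="avifenc --speed 10 --lossless input.jpg output.avif"

echo "$CMD_SLOW"
echo "$CMD_FAST"
```

The speed setting trades encode time against compression efficiency, which is why the Speed 0 rows below take roughly twenty times longer than the Speed 10 rows.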

libavif avifenc 0.11 (Seconds, Fewer Is Better; N = 3)
(CXX) g++ options: -O3 -fPIC -lm

  Encoder Speed    Order   Results (best first)        SE       Detail Min / Avg / Max
  0                baaa    130.15 / 130.36 / 131.57    0.52     129.47 / 130.15 / 131.16
  2                aaab    63.12 / 64.02 / 64.37       0.20     63.98 / 64.37 / 64.67
  6                baaa    6.019 / 6.049 / 6.076       0.028    5.96 / 6.02 / 6.05
  6, Lossless      aaab    10.09 / 10.15 / 10.15       0.04     10.09 / 10.15 / 10.21
  10, Lossless     abaa    5.480 / 5.497 / 5.505       0.026    5.45 / 5.50 / 5.54

  Order = run identifiers (a, aa, b) concatenated best-first, as given in the source; the Detail column reproduces the per-run Min/Avg/Max shown on the source page.

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Resolution: 4K (FPS, More Is Better; N = 3)
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

  Scene    Order   Results (best first)    SE      Detail Min / Avg / Max
  1        abaa    10.21 / 10.14 / 9.63    0.04    10.06 / 10.14 / 10.21
  2        aaab    2.90 / 2.89 / 2.86      0.01    2.85 / 2.86 / 2.87
  3        baaa    2.49 / 2.46 / 2.43      0.01    2.47 / 2.49 / 2.51
  5        baaa    (results truncated in source)

  Order = run identifiers (a, aa, b) concatenated best-first, as given in the source; the Detail column reproduces the per-run Min/Avg/Max shown on the source page.