new-tests

Tests for a future article. AMD EPYC 8324P 32-Core testing with an AMD Cinnabar (RCB1009C BIOS) motherboard and ASPEED graphics on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2401110-NE-NEWTESTS900
Test suites/categories represented in this result file:

  C++ Boost Tests: 3 tests
  Timed Code Compilation: 3 tests
  C/C++ Compiler Tests: 3 tests
  CPU Massive: 7 tests
  Creator Workloads: 6 tests
  Encoding: 2 tests
  HPC - High Performance Computing: 6 tests
  Machine Learning: 5 tests
  Multi-Core: 10 tests
  Intel oneAPI: 3 tests
  Programmer / Developer System Benchmarks: 3 tests
  Python Tests: 6 tests
  Renderers: 2 tests
  Server: 2 tests
  Server CPU Tests: 5 tests
  Video Encoding: 2 tests


Run Management

  Result Identifier   | Date       | Test Duration
  Zen 1 - EPYC 7601   | January 07 | 46 Minutes
  b                   | January 10 | 12 Minutes
  c                   | January 10 | 12 Minutes
  32                  | January 11 | 2 Hours, 56 Minutes
  32 z                | January 11 | 2 Hours, 56 Minutes
  32 c                | January 11 | 3 Hours, 14 Minutes
  32 d                | January 11 | 2 Hours, 55 Minutes


new-tests - System Details

Zen 1 - EPYC 7601:
  Processor: AMD EPYC 7601 32-Core @ 2.20GHz (32 Cores / 64 Threads)
  Motherboard: TYAN B8026T70AE24HR (V1.02.B10 BIOS)
  Chipset: AMD 17h
  Memory: 128GB
  Disk: 280GB INTEL SSDPE21D280GA + 1000GB INTEL SSDPE2KX010T8
  Graphics: llvmpipe
  Monitor: VE228
  Network: 2 x Broadcom NetXtreme BCM5720 PCIe
  Screen Resolution: 1920x1080

b, c:
  Processor: AMD EPYC 8534PN 64-Core @ 2.00GHz (64 Cores / 128 Threads)
  Motherboard: AMD Cinnabar (RCB1009C BIOS)
  Chipset: AMD Device 14a4
  Memory: 6 x 32 GB DRAM-4800MT/s Samsung M321R4GA0BB0-CQKMG
  Disk: 1000GB INTEL SSDPE2KX010T8
  Screen Resolution: 1920x1200

32, 32 z (otherwise as b/c):
  Processor: AMD EPYC 8534PN 32-Core @ 2.05GHz (32 Cores / 64 Threads)
  Graphics: ASPEED

32 c, 32 d (otherwise as b/c):
  Processor: AMD EPYC 8324P 32-Core @ 2.65GHz (32 Cores / 64 Threads)
  Graphics: ASPEED

Common to all runs:
  OS: Ubuntu 23.10
  Kernel: 6.6.9-060609-generic (x86_64)
  Desktop: GNOME Shell 45.0
  Display Server: X Server 1.21.1.7
  OpenGL: 4.5 Mesa 23.2.1-1ubuntu3.1 (LLVM 15.0.7 256 bits)
  Compiler: GCC 13.2.0
  File-System: ext4

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) on all runs. CPU Microcode: 0x800126e on Zen 1 - EPYC 7601; 0xaa00212 on all other runs.

Security Details:
  Zen 1 - EPYC 7601: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
  b, c, 32, 32 z, 32 c, 32 d: identical to the above except retbleed: Not affected and spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected

Java Details (32, 32 z, 32 c, 32 d): OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)
Python Details (32, 32 z, 32 c, 32 d): Python 3.11.6

[Logarithmic result overview across all seven runs: Y-Cruncher (1B, 500M) and Quicksilver (CORAL2 P1, CTS2, CORAL2 P2).]

[Combined side-by-side results table omitted here; it covers Quicksilver, Y-Cruncher, QuantLib, OpenFOAM, FFmpeg, Xmrig, DaCapo, CacheBench, Embree, SVT-AV1, 7-Zip compression, timed FFmpeg/gem5/Linux kernel builds, OSPRay Studio, PyTorch, TensorFlow, Neural Magic DeepSparse, Blender, OpenVINO, RocksDB, Speedb, and Llama.cpp. Per-test results follow below.]

Quicksilver

Quicksilver is a proxy application that represents some elements of the Mercury workload by solving a simplified dynamic Monte Carlo particle transport problem. Quicksilver is developed by Lawrence Livermore National Laboratory (LLNL) and this test profile currently makes use of the OpenMP CPU threaded code path. Learn more via the OpenBenchmarking.org test page.

Quicksilver 20230818 (Figure Of Merit, More Is Better):

  Input     | 32 d     | 32 c     | 32 z     | 32       | c        | b        | Zen 1 - EPYC 7601
  CORAL2 P1 | 18840000 | 11040000 | 18760000 | 18790000 | 21250000 | 21180000 | 12996667
  CORAL2 P2 | 15100000 | 15180000 | 15230000 | 15350000 | 16150000 | 16140000 | 15013333
  CTS2      | 14280000 | 14430000 | 14290000 | 14320000 | 16260000 | 16270000 | 11426667

  Zen 1 - EPYC 7601 run spread (N = 3): CORAL2 P1 SE +/- 66916.20, Min 12890000 / Avg 12996666.67 / Max 13120000; CORAL2 P2 SE +/- 37118.43, Min 14940000 / Avg 15013333.33 / Max 15060000; CTS2 SE +/- 16666.67, Min 11410000 / Avg 11426666.67 / Max 11460000.
  1. (CXX) g++ options: -fopenmp -O3 -march=native

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.8.3 (Seconds, Fewer Is Better):

  Pi Digits To Calculate | 32 d  | 32 c  | 32 z  | 32    | c     | b     | Zen 1 - EPYC 7601
  500M                   | 5.751 | 5.783 | 5.685 | 5.656 | 5.213 | 5.202 | 15.693
  1B                     | 11.98 | 11.90 | 11.60 | 11.68 | 10.48 | 10.42 | 33.92

  Zen 1 - EPYC 7601 run spread (N = 3): 500M SE +/- 0.118, Min 15.47 / Avg 15.69 / Max 15.86; 1B SE +/- 0.09, Min 33.8 / Avg 33.92 / Max 34.1.

Meta Performance Per Watts

Meta Performance Per Watts (More Is Better): Zen 1 - EPYC 7601: 13064001.66

CPU Power Consumption Monitor

CPU Power Consumption Monitor (Watts), Zen 1 - EPYC 7601: Min 242.58 / Avg 585.92 / Max 718

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports a QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.32, Configuration: Multi-Threaded (MFLOPS, More Is Better):

  32 d: 98618.7 | 32 c: 98916.2 | 32 z: 107381.6 | 32: 107079.2
  1. (CXX) g++ options: -O3 -march=native -fPIE -pie

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10, Input: drivaerFastback, Small Mesh Size (Seconds, Fewer Is Better):

  Phase          | 32 d  | 32 c  | 32 z  | 32
  Mesh Time      | 30.72 | 30.54 | 30.75 | 28.37
  Execution Time | 72.31 | 72.38 | 71.20 | 72.81

  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile uses a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content, with the choice of the x264 or x265 video encoder for transcoding. Learn more via the OpenBenchmarking.org test page.

FFmpeg 6.1, Encoder: libx265 (FPS, More Is Better):

  Scenario        | 32 d   | 32 c   | 32 z   | 32
  Live            | 110.29 | 110.02 | 110.37 | 109.84
  Upload          | 22.22  | 22.21  | 22.20  | 22.28
  Platform        | 44.97  | 45.13  | 45.05  | 45.13
  Video On Demand | 45.10  | 44.95  | 45.08  | 45.18

  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is setup to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21, Hash Count: 1M (H/s, More Is Better):

  Variant                | 32 d    | 32 c    | 32 z    | 32
  KawPow                 | 18901.1 | 18947.3 | 18961.3 | 18777.2
  Monero                 | 18866.1 | 18897.5 | 18763.8 | 18845.5
  Wownero                | 25396.8 | 25385.9 | 25943.7 | 25814.4
  GhostRider             | 4095.7  | 4136.3  | 4038.6  | 4067.4
  CryptoNight-Heavy      | 18924.0 | 18783.9 | 18936.5 | 19004.5
  CryptoNight-Femto UPX2 | 18818.6 | 18887.5 | 18909.0 | 18860.1

  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance of various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 23.11 (msec, Fewer Is Better):

  Java Test                                   | 32 d  | 32 c  | 32 z  | 32
  Jython                                      | 6769  | 6865  | 6773  | 6703
  Eclipse                                     | 12768 | 12826 | 12735 | 12656
  GraphChi                                    | 3656  | 3538  | 3630  | 3536
  Tradesoap                                   | 5149  | 5366  | 5168  | 5403
  Tradebeans                                  | 8380  | 8520  | 8600  | 8561
  Spring Boot                                 | 2452  | 2533  | 2460  | 2444
  Apache Kafka                                | 5114  | 5111  | 5121  | 5110
  Apache Tomcat                               | 2112  | 2094  | 2082  | 2107
  jMonkeyEngine                               | 6916  | 6917  | 6917  | 6914
  Apache Cassandra                            | 5927  | 5955  | 5938  | 5946
  Apache Xalan XSLT                           | 861   | 852   | 859   | 871
  Batik SVG Toolkit                           | 1738  | 1718  | 1723  | 1733
  H2 Database Engine                          | 2634  | 2773  | 2655  | 2675
  FOP Print Formatter                         | 758   | 764   | 696   | 751
  PMD Source Code Analyzer                    | 1833  | 1966  | 1820  | 1784
  Apache Lucene Search Index                  | 4602  | 4580  | 4589  | 4613
  Apache Lucene Search Engine                 | 1433  | 1379  | 1425  | 1402
  Avrora AVR Simulation Framework             | 5572  | 5561  | 5441  | 5613
  BioJava Biological Data Framework           | 7907  | 7904  | 7858  | 7874
  Zxing 1D/2D Barcode Image Processing        | 599   | 569   | 599   | 609
  H2O In-Memory Platform For Machine Learning | 3755  | 3979  | 3868  | 3974

CacheBench

This is a performance test of CacheBench, which is part of LLCbench. CacheBench is designed to test memory and cache bandwidth performance. Learn more via the OpenBenchmarking.org test page.

CacheBench (MB/s, More Is Better; per-run min–max in parentheses):

  Test                  | 32 d                         | 32 c                         | 32 z                         | 32
  Read                  | 7615.83 (7615.4–7616.44)     | 7615.95 (7615.46–7616.35)    | 7616.33 (7615.95–7616.74)    | 7616.09 (7615.65–7616.54)
  Write                 | 45643.04 (45482.26–45696.12) | 45645.09 (45483.02–45696.19) | 45646.82 (45482.27–45698.03) | 45646.09 (45484.29–45698.11)
  Read / Modify / Write | 87854.12 (72077.93–90708.03) | 87238.01 (65732.92–90706.91) | 87218.21 (65721.62–90703.93) | 87227.59 (65739.52–90694.35)

  1. (CC) gcc options: -O3 -lrt

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.3 (Frames Per Second, more is better; result [min .. max]; values in order 32 d / 32 c / 32 z / 32)

  Binary: Pathtracer - Model: Crown
    36.28 [35.88 .. 37.13] / 35.91 [35.53 .. 37.08] / 37.25 [36.89 .. 37.75] / 36.96 [36.61 .. 37.43]
  Binary: Pathtracer ISPC - Model: Crown
    36.94 [36.46 .. 37.76] / 37.00 [36.53 .. 38.11] / 37.68 [37.25 .. 38.37] / 37.30 [36.86 .. 38.04]
  Binary: Pathtracer - Model: Asian Dragon
    41.56 [41.33 .. 41.84] / 41.57 [41.37 .. 41.90] / 41.82 [41.60 .. 42.16] / 41.60 [41.36 .. 41.86]
  Binary: Pathtracer - Model: Asian Dragon Obj
    37.41 [37.22 .. 37.69] / 37.44 [37.24 .. 37.71] / 36.86 [36.67 .. 37.11] / 37.28 [37.09 .. 37.70]
  Binary: Pathtracer ISPC - Model: Asian Dragon
    45.65 [45.37 .. 46.89] / 45.46 [45.22 .. 46.60] / 46.31 [46.05 .. 46.74] / 45.94 [45.66 .. 46.38]
  Binary: Pathtracer ISPC - Model: Asian Dragon Obj
    39.14 [38.92 .. 39.84] / 39.00 [38.78 .. 39.64] / 39.11 [38.88 .. 39.43] / 38.94 [38.69 .. 39.29]

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.8, Input: Bosphorus 4K (Frames Per Second, more is better)
Compiled with: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

  Encoder Mode     32 d     32 c     32 z       32
  Preset 4        5.977    5.829    5.899    5.801
  Preset 8        58.64    47.25    58.72    48.45
  Preset 12      186.37   180.96   185.56   186.63
  Preset 13      184.10   183.90   184.98   185.67

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
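7-Zip's integrated benchmark converts its compression and decompression timings into a MIPS rating. The underlying measurement idea — bytes processed per unit time — can be sketched with Python's stdlib `lzma` module (LZMA being 7-Zip's default algorithm). This is only an illustration; the numbers are not comparable to 7-Zip's MIPS figures, and the function name is made up for this example.

```python
import lzma
import os
import time

def lzma_throughput(size_mb=4):
    """Time LZMA compression and decompression of a semi-random buffer.

    Returns (compress MB/s, decompress MB/s) of input processed. 7-Zip's own
    benchmark normalizes its timings into a MIPS rating instead, but both
    boil down to bytes handled per second.
    """
    # Mix of random and repetitive bytes so the data is actually compressible.
    data = (os.urandom(512 * 1024) + b"A" * (512 * 1024)) * size_mb
    t0 = time.perf_counter()
    packed = lzma.compress(data, preset=1)
    t1 = time.perf_counter()
    unpacked = lzma.decompress(packed)
    t2 = time.perf_counter()
    assert unpacked == data  # round-trip sanity check
    mb = len(data) / (1024 * 1024)
    return mb / (t1 - t0), mb / (t2 - t1)
```

As in the results below, decompression is typically much faster than compression, which is why the two ratings are reported separately.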

7-Zip Compression 22.01 (MIPS, more is better)
Compiled with: (CXX) g++ options: -lpthread -ldl -O2 -fPIC

  Test                      32 d     32 c     32 z       32
  Compression Rating      241191   240287   242399   241545
  Decompression Rating    211383   211815   211584   212209

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 6.1 - Time To Compile (Seconds, fewer is better)
  32 d: 24.30 | 32 c: 24.45 | 32 z: 23.76 | 32: 23.56

Timed Gem5 Compilation

This test times how long it takes to compile Gem5, a simulator for computer system architecture research that is widely used across industry and academia. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 23.0.1 - Time To Compile (Seconds, fewer is better)
  32 d: 258.93 | 32 c: 258.31 | 32 z: 272.61 | 32: 254.01

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel, either in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig that builds all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 (Seconds, fewer is better)

  Build: defconfig       32 d: 53.63  | 32 c: 53.62  | 32 z: 52.01  | 32: 52.13
  Build: allmodconfig    32 d: 452.61 | 32 c: 453.69 | 32 z: 434.19 | 32: 433.79

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.13, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better)

  Camera / Resolution / Samples Per Pixel      32 d     32 c     32 z       32
  1 / 4K / 1                                   3499     3493     3406     3404
  2 / 4K / 1                                   3522     3515     3446     3451
  3 / 4K / 1                                   4132     4157     4048     4049
  1 / 4K / 16                                 63336    62802    61430    60673
  1 / 4K / 32                               118802   118221   115669   116377
  2 / 4K / 16                                 62787    63402    62113    61987
  2 / 4K / 32                               119783   118980   116972   116566
  3 / 4K / 16                                 73329    73024    71495    71361
  3 / 4K / 32                               139445   139685   136312   136464

PyTorch

This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Currently this test profile is geared toward CPU-based testing. Learn more via the OpenBenchmarking.org test page.
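The batches/sec metric reported below is sustained throughput after warm-up. A minimal, hedged sketch of that measurement pattern — using only the stdlib so it runs without PyTorch installed; `model_fn` and `batch` are placeholders, not pytorch-benchmark's actual API:

```python
import time

def batches_per_sec(model_fn, batch, warmup=3, iters=10):
    """Measure sustained batches/sec for any callable.

    With PyTorch installed you would pass e.g. ``lambda x: resnet50(x)`` and
    an input tensor batch; pytorch-benchmark applies the same warm-up/timed
    split per (device, batch size, model) combination.
    """
    for _ in range(warmup):          # untimed passes: let caches/allocators settle
        model_fn(batch)
    start = time.perf_counter()
    for _ in range(iters):
        model_fn(batch)
    return iters / (time.perf_counter() - start)

# Dummy stand-in "model": sums a list of floats.
rate = batches_per_sec(sum, [0.5] * 10_000)
```

The MIN values in the results below being far under the average (e.g. 15–17 batches/sec minimums against ~52 averages) show why warm-up and sustained measurement matter: the first iterations are dominated by one-time costs.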

PyTorch 2.1, Device: CPU (batches/sec, more is better; result [min .. max]; values in order 32 d / 32 c / 32 z / 32)

  Batch Size 1 - Model: ResNet-50
    53.30 [50.97 .. 53.84] / 53.00 [50.62 .. 53.51] / 52.78 [17.43 .. 53.32] / 52.44 [15.02 .. 53.14]
  Batch Size 1 - Model: ResNet-152
    18.86 [7.91 .. 19.03] / 18.86 [10.78 .. 19.02] / 18.92 [7.59 .. 19.04] / 19.04 [6.89 .. 19.18]
  Batch Size 16 - Model: ResNet-50
    40.31 [15.27 .. 40.73] / 40.32 [15.51 .. 40.87] / 39.96 [15.13 .. 40.53] / 40.19 [15.55 .. 40.67]
  Batch Size 16 - Model: ResNet-152
    15.35 [8.86 .. 15.52] / 15.32 [6.91 .. 15.45] / 15.51 [7.30 .. 15.63] / 15.61 [6.89 .. 15.74]
  Batch Size 1 - Model: Efficientnet_v2_l
    10.21 [5.69 .. 10.32] / 10.04 [5.86 .. 10.23] / 9.82 [5.63 .. 10.05] / 9.85 [5.10 .. 9.99]
  Batch Size 16 - Model: Efficientnet_v2_l
    7.15 [4.34 .. 7.30] / 7.18 [4.37 .. 7.37] / 7.11 [4.25 .. 7.26] / 7.17 [4.45 .. 7.33]

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12, Device: CPU (images/sec, more is better)

  Batch Size / Model      32 d     32 c     32 z       32
  1 / VGG-16              9.75     9.77     9.75     9.73
  1 / AlexNet            33.02    33.14    31.92    32.12
  16 / VGG-16            24.51    24.47    25.20    25.15
  16 / AlexNet          276.19   274.97   274.97   272.93
  1 / GoogLeNet          28.79    27.73    28.71    28.99
  1 / ResNet-50           8.59     8.61     8.77     8.74
  16 / GoogLeNet        158.08   157.60   155.77   158.47
  16 / ResNet-50         51.49    51.56    51.57    51.34

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.6, Scenario: Asynchronous Multi-Stream
(items/sec: more is better; ms/batch: fewer is better; values in order 32 d / 32 c / 32 z / 32)

  Model: NLP Document Classification, oBERT base uncased on IMDB
    items/sec: 21.09 / 20.87 / 21.27 / 21.29
    ms/batch:  751.21 / 753.12 / 745.18 / 747.07
  Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8
    items/sec: 815.98 / 816.28 / 835.26 / 836.42
    ms/batch:  19.59 / 19.58 / 19.13 / 19.11
  Model: ResNet-50, Baseline
    items/sec: 266.53 / 266.84 / 266.98 / 266.86
    ms/batch:  59.97 / 59.90 / 59.87 / 59.86
  Model: ResNet-50, Sparse INT8
    items/sec: 2189.07 / 2195.92 / 2199.49 / 2208.15
    ms/batch:  7.2896 / 7.2738 / 7.2610 / 7.2332
  Model: CV Detection, YOLOv5s COCO
    items/sec: 121.80 / 122.33 / 122.96 / 123.02
    ms/batch:  130.79 / 130.48 / 129.80 / 129.87
  Model: BERT-Large, NLP Question Answering
    items/sec: 25.79 / 25.82 / 26.08 / 26.06
    ms/batch:  611.44 / 611.60 / 608.13 / 607.94
  Model: CV Classification, ResNet-50 ImageNet
    items/sec: 266.28 / 266.03 / 267.84 / 266.88
    ms/batch:  60.03 / 60.06 / 59.67 / 59.88
  Model: CV Detection, YOLOv5s COCO, Sparse INT8
    items/sec: 122.93 / 123.15 / 123.79 / 123.88
    ms/batch:  129.81 / 129.54 / 128.85 / 128.82
  Model: NLP Text Classification, DistilBERT mnli
    items/sec: 181.12 / 181.10 / 182.76 / 182.51
    ms/batch:  88.23 / 88.20 / 87.33 / 87.49
  Model: CV Segmentation, 90% Pruned YOLACT Pruned
    items/sec: 38.83 / 38.77 / 39.95 / 40.17
    ms/batch:  410.33 / 411.34 / 397.96 / 396.29
  Model: BERT-Large, NLP Question Answering, Sparse INT8
    items/sec: 381.78 / 384.32 / 385.65 / 383.97
    ms/batch:  41.84 / 41.59 / 41.44 / 41.63
  Model: NLP Token Classification, BERT base uncased conll2003
    items/sec: 21.07 / 21.04 / 21.29 / 21.23
    ms/batch:  750.40 / 751.93 / 746.13 / 747.31

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 4.0, Compute: CPU-Only (Seconds, fewer is better)

  Blend File             32 d     32 c     32 z       32
  BMW27                 47.41    47.52    44.48    44.73
  Classroom            119.57   119.72   112.09   112.03
  Fishy Cat             59.79    59.58    55.54    55.65
  Barbershop           426.37   426.30   410.43   410.61
  Pabellon Barcelona   148.56   148.74   138.60   139.09

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
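The FPS and average-latency figures below are related through Little's law (concurrency = throughput × latency), which is a handy sanity check when reading paired throughput/latency graphs. A small sketch, using this section's own Face Detection FP16 numbers:

```python
def inflight_requests(fps, avg_latency_ms):
    """Little's law: average number of requests in flight
    equals throughput (req/s) times average latency (s)."""
    return fps * avg_latency_ms / 1000.0

# Face Detection FP16 on "32 d": 16.54 FPS at 965.35 ms average latency
# implies roughly 16 inference requests in flight, consistent with the
# benchmark running many parallel inference streams rather than one.
concurrency = inflight_requests(16.54, 965.35)
```

This is why a nearly one-second average latency can coexist with 16+ FPS: the benchmark measures aggregate throughput across concurrent streams, not single-request turnaround.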

OpenVINO 2023.2.dev, Device: CPU
(FPS: more is better; ms latency: fewer is better, shown as result [min .. max]; values in order 32 d / 32 c / 32 z / 32)
Compiled with: (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

  Model: Face Detection FP16
    FPS: 16.54 / 16.51 / 17.18 / 17.17
    ms:  965.35 [922.70 .. 1047.50] / 964.20 [905.78 .. 1053.38] / 927.57 [895.60 .. 1019.94] / 929.23 [907.01 .. 1013.02]
  Model: Person Detection FP16
    FPS: 151.25 / 150.07 / 150.06 / 151.45
    ms:  105.64 [54.20 .. 154.42] / 106.43 [80.87 .. 199.77] / 106.44 [81.71 .. 196.10] / 105.48 [82.05 .. 167.92]
  Model: Person Detection FP32
    FPS: 150.25 / 150.84 / 150.37 / 150.80
    ms:  106.32 [81.37 .. 177.41] / 105.91 [82.12 .. 188.16] / 106.24 [81.06 .. 185.99] / 105.97 [81.88 .. 218.45]
  Model: Vehicle Detection FP16
    FPS: 1166.83 / 1166.56 / 1197.46 / 1190.42
    ms:  13.65 [6.73 .. 75.18] / 13.65 [9.08 .. 67.03] / 13.29 [8.30 .. 73.59] / 13.36 [7.26 .. 78.85]
  Model: Face Detection FP16-INT8
    FPS: 31.20 / 31.22 / 32.81 / 32.82
    ms:  510.90 [470.70 .. 595.97] / 510.79 [473.86 .. 584.54] / 486.03 [454.31 .. 580.90] / 486.65 [465.68 .. 570.73]
  Model: Face Detection Retail FP16
    FPS: 3869.70 / 3877.91 / 3924.86 / 3921.50
    ms:  4.03 [2.23 .. 62.26] / 4.03 [2.23 .. 54.09] / 3.90 [2.18 .. 64.81] / 3.91 [2.20 .. 72.73]

OpenVINO 2023.2.dev, Device: CPU (continued)
(FPS: more is better; ms latency: fewer is better, shown as result [min .. max]; values in order 32 d / 32 c / 32 z / 32)
Compiled with: (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

  Model: Road Segmentation ADAS FP16
    FPS: 553.65 / 554.68 / 579.41 / 576.18
    ms:  28.82 [19.39 .. 99.16] / 28.77 [17.12 .. 135.79] / 27.53 [18.86 .. 82.58] / 27.69 [18.56 .. 147.54]
  Model: Vehicle Detection FP16-INT8
    FPS: 1862.24 / 1860.99 / 1964.99 / 1960.18
    ms:  8.52 [4.80 .. 75.53] / 8.52 [4.97 .. 67.60] / 8.05 [4.56 .. 76.48] / 8.07 [4.55 .. 69.24]
  Model: Weld Porosity Detection FP16
    FPS: 1628.91 / 1627.93 / 1704.02 / 1704.26
    ms:  19.56 [13.73 .. 73.60] / 19.58 [10.24 .. 83.63] / 18.69 [9.78 .. 86.93] / 18.69 [9.97 .. 81.33]
  Model: Face Detection Retail FP16-INT8
    FPS: 5423.13 / 5416.31 / 5751.58 / 5747.65
    ms:  5.78 [3.37 .. 65.27] / 5.78 [3.21 .. 58.78] / 5.42 [3.15 .. 67.23] / 5.41 [3.17 .. 57.08]
  Model: Road Segmentation ADAS FP16-INT8
    FPS: 634.50 / 632.92 / 666.30 / 666.22
    ms:  25.16 [19.24 .. 86.70] / 25.22 [21.61 .. 89.16] / 23.95 [15.19 .. 90.71] / 23.95 [13.94 .. 114.01]
  Model: Machine Translation EN To DE FP16
    FPS: 195.05 / 194.21 / 201.15 / 199.90
    ms:  81.87 [52.13 .. 175.84] / 82.18 [58.39 .. 175.70] / 79.39 [43.97 .. 186.13] / 79.82 [42.02 .. 179.47]

OpenVINO 2023.2.dev, Device: CPU (continued)
(FPS: more is better; ms latency: fewer is better, shown as result [min .. max]; values in order 32 d / 32 c / 32 z / 32)
Compiled with: (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

  Model: Weld Porosity Detection FP16-INT8
    FPS: 3100.95 / 3099.20 / 3299.93 / 3300.99
    ms:  10.21 [5.17 .. 61.15] / 10.22 [5.48 .. 68.07] / 9.56 [5.09 .. 75.37] / 9.56 [5.10 .. 77.12]
  Model: Person Vehicle Bike Detection FP16
    FPS: 1696.50 / 1694.01 / 1735.64 / 1741.57
    ms:  9.37 [6.07 .. 71.06] / 9.39 [5.95 .. 68.66] / 9.16 [5.99 .. 67.91] / 9.12 [6.22 .. 56.95]
  Model: Handwritten English Recognition FP16
    FPS: 848.62 / 853.38 / 896.69 / 898.60
    ms:  37.61 [24.11 .. 127.49] / 37.40 [27.33 .. 92.33] / 35.59 [24.72 .. 147.24] / 35.51 [22.80 .. 100.53]
  Model: Age Gender Recognition Retail 0013 FP16
    FPS: 39843.05 / 39562.87 / 40101.80 / 40123.62
    ms:  0.67 [0.36 .. 50.74] / 0.67 [0.36 .. 62.87] / 0.66 [0.36 .. 65.79] / 0.65 [0.36 .. 51.48]
  Model: Handwritten English Recognition FP16-INT8
    FPS: 690.24 / 692.02 / 730.82 / 745.00
    ms:  46.28 [30.15 .. 108.49] / 46.17 [39.81 .. 161.92] / 43.71 [35.06 .. 153.84] / 42.87 [35.14 .. 107.50]
  Model: Age Gender Recognition Retail 0013 FP16-INT8
    FPS: 52344.60 / 52382.31 / 52475.39 / 52441.94
    ms:  0.48 [0.27 .. 65.55] / 0.48 [0.27 .. 50.17] / 0.47 [0.27 .. 64.47] / 0.48 [0.27 .. 50.11]

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
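The Op/s numbers below come from random point lookups, updates, and mixed workloads against the store. As a hedged illustration of how a random-read Op/s figure is produced, here is a stdlib-only Python sketch; it uses sqlite3 merely because it ships with Python, so its absolute numbers say nothing about RocksDB's LSM-tree engine, and the function name is invented for this example.

```python
import random
import sqlite3
import time

def random_read_ops(n_keys=10_000, n_reads=20_000):
    """Measure random point-lookup Op/s against an embedded store.

    RocksDB's db_bench does the same against its own engine with far larger
    key spaces and many threads; this only demonstrates the metric.
    """
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v BLOB)")
    db.executemany("INSERT INTO kv VALUES (?, ?)",
                   ((i, b"x" * 100) for i in range(n_keys)))
    # Pre-draw the random keys so timing covers only the lookups.
    keys = [random.randrange(n_keys) for _ in range(n_reads)]
    cur = db.cursor()
    start = time.perf_counter()
    for k in keys:
        cur.execute("SELECT v FROM kv WHERE k = ?", (k,))
        cur.fetchone()
    return n_reads / (time.perf_counter() - start)
```

Drawing the random key sequence before starting the clock is the same discipline db_bench applies: the benchmark should time the store, not the random-number generator.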

RocksDB 8.0 (Op/s, more is better)
Compiled with: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

  Test                            32 d        32 c        32 z          32
  Random Read                160707812   160665305   177167636   176770468
  Update Random                 630478      633688      636242      630575
  Read While Writing           4244478     4419497     4364996     4284691
  Read Random Write Random     2351568     2327800     2361270     2373654

Speedb

Speedb is a next-generation key-value storage engine that is RocksDB-compatible and aims for stability, efficiency, and performance. Learn more via the OpenBenchmarking.org test page.

Speedb 2.7 (Op/s, more is better)
Compiled with: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

  Test                            32 d        32 c        32 z          32
  Random Read                163512432   163202721   179434924   179685954
  Update Random                 313683      317758      314114      314123
  Read While Writing           7105602     7746346     7210235     7457600
  Read Random Write Random     2215896     2229494     2259344     2231403

Llama.cpp

Llama.cpp is a C/C++ port of Facebook's LLaMA model developed by Georgi Gerganov that allows inference of LLaMA and other supported models. For CPU inference, Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs along with features like OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.

Llama.cpp b1808 (Tokens Per Second, more is better)
Compiled with: (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

  Model                           32 d    32 c    32 z      32
  llama-2-7b.Q4_0.gguf           29.85   29.74   29.90   29.75
  llama-2-13b.Q4_0.gguf          18.08   17.87   17.87   17.94
  llama-2-70b-chat.Q5_0.gguf      3.42    3.42    3.41    3.42

159 Results Shown

Quicksilver:
  CORAL2 P1
  CORAL2 P2
  CTS2
Y-Cruncher:
  500M
  1B
Meta Performance Per Watts:
  Performance Per Watts
  Phoronix Test Suite System Monitoring
QuantLib
OpenFOAM:
  drivaerFastback, Small Mesh Size - Mesh Time
  drivaerFastback, Small Mesh Size - Execution Time
FFmpeg:
  libx265 - Live
  libx265 - Upload
  libx265 - Platform
  libx265 - Video On Demand
Xmrig:
  KawPow - 1M
  Monero - 1M
  Wownero - 1M
  GhostRider - 1M
  CryptoNight-Heavy - 1M
  CryptoNight-Femto UPX2 - 1M
DaCapo Benchmark:
  Jython
  Eclipse
  GraphChi
  Tradesoap
  Tradebeans
  Spring Boot
  Apache Kafka
  Apache Tomcat
  jMonkeyEngine
  Apache Cassandra
  Apache Xalan XSLT
  Batik SVG Toolkit
  H2 Database Engine
  FOP Print Formatter
  PMD Source Code Analyzer
  Apache Lucene Search Index
  Apache Lucene Search Engine
  Avrora AVR Simulation Framework
  BioJava Biological Data Framework
  Zxing 1D/2D Barcode Image Processing
  H2O In-Memory Platform For Machine Learning
CacheBench:
  Read
  Write
  Read / Modify / Write
Embree:
  Pathtracer - Crown
  Pathtracer ISPC - Crown
  Pathtracer - Asian Dragon
  Pathtracer - Asian Dragon Obj
  Pathtracer ISPC - Asian Dragon
  Pathtracer ISPC - Asian Dragon Obj
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
7-Zip Compression:
  Compression Rating
  Decompression Rating
Timed FFmpeg Compilation
Timed Gem5 Compilation
Timed Linux Kernel Compilation:
  defconfig
  allmodconfig
OSPRay Studio:
  1 - 4K - 1 - Path Tracer - CPU
  2 - 4K - 1 - Path Tracer - CPU
  3 - 4K - 1 - Path Tracer - CPU
  1 - 4K - 16 - Path Tracer - CPU
  1 - 4K - 32 - Path Tracer - CPU
  2 - 4K - 16 - Path Tracer - CPU
  2 - 4K - 32 - Path Tracer - CPU
  3 - 4K - 16 - Path Tracer - CPU
  3 - 4K - 32 - Path Tracer - CPU
PyTorch:
  CPU - 1 - ResNet-50
  CPU - 1 - ResNet-152
  CPU - 16 - ResNet-50
  CPU - 16 - ResNet-152
  CPU - 1 - Efficientnet_v2_l
  CPU - 16 - Efficientnet_v2_l
TensorFlow:
  CPU - 1 - VGG-16
  CPU - 1 - AlexNet
  CPU - 16 - VGG-16
  CPU - 16 - AlexNet
  CPU - 1 - GoogLeNet
  CPU - 1 - ResNet-50
  CPU - 16 - GoogLeNet
  CPU - 16 - ResNet-50
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
  Fishy Cat - CPU-Only
  Barbershop - CPU-Only
  Pabellon Barcelona - CPU-Only
OpenVINO:
  Face Detection FP16 - CPU:
    FPS
    ms
  Person Detection FP16 - CPU:
    FPS
    ms
  Person Detection FP32 - CPU:
    FPS
    ms
  Vehicle Detection FP16 - CPU:
    FPS
    ms
  Face Detection FP16-INT8 - CPU:
    FPS
    ms
  Face Detection Retail FP16 - CPU:
    FPS
    ms
  Road Segmentation ADAS FP16 - CPU:
    FPS
    ms
  Vehicle Detection FP16-INT8 - CPU:
    FPS
    ms
  Weld Porosity Detection FP16 - CPU:
    FPS
    ms
  Face Detection Retail FP16-INT8 - CPU:
    FPS
    ms
  Road Segmentation ADAS FP16-INT8 - CPU:
    FPS
    ms
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
  Weld Porosity Detection FP16-INT8 - CPU:
    FPS
    ms
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
  Handwritten English Recognition FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16 - CPU:
    FPS
    ms
  Handwritten English Recognition FP16-INT8 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    FPS
    ms
RocksDB:
  Rand Read
  Update Rand
  Read While Writing
  Read Rand Write Rand
Speedb:
  Rand Read
  Update Rand
  Read While Writing
  Read Rand Write Rand
Llama.cpp:
  llama-2-7b.Q4_0.gguf
  llama-2-13b.Q4_0.gguf
  llama-2-70b-chat.Q5_0.gguf