xeon gold april

2 x Intel Xeon Gold 5220R testing with a TYAN S7106 (V2.01.B40 BIOS) and ASPEED on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2204205-NE-XEONGOLDA77
Run Management

Identifier | Date          | Test Duration
A          | April 20 2022 | 53 Minutes
B          | April 20 2022 | 53 Minutes
C          | April 20 2022 | 53 Minutes
D          | April 20 2022 | 24 Minutes


Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads)
Motherboard: TYAN S7106 (V2.01.B40 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 94GB
Disk: 500GB Samsung SSD 860
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc6-generic (x86_64) 20200920
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.13
Compiler: GCC 9.4.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-yTrUTS/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0x5003102
- OpenJDK Runtime Environment (build 11.0.14+9-Ubuntu-0ubuntu2.20.04)
- Security mitigations: itlb_multihit: KVM: Mitigation of VMX disabled; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling; srbds: Not affected; tsx_async_abort: Mitigation of TSX disabled

[Result overview chart: relative performance of runs A-D (scale 100% to 206%) across all Ethr and perf-bench tests. Per-test results follow below.]

[Side-by-side result table for runs A-D: the individual values are reported in the per-test results below. Three results appear only in this table: perf-bench Futex Hash (ops/sec, more is better) - A: 2825232 | B: 2824913 | C: 2824285 | D: 2824783; Ethr TCP Connections/s, Threads: 2 - A/B/C/D: 1010; Ethr TCP Connections/s, Threads: 1 - A/B/C/D: 1010.]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
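The per-run timings below can diverge sharply: run C's Matrix Multiply u8s8f32 time is roughly 31x that of runs A and B. A minimal Python sketch, using values taken from the results below, for turning oneDNN's "fewer is better" timings into relative slowdown factors:

```python
# Convert "ms, fewer is better" timings into slowdown factors relative
# to the fastest run. Values come from the "Matrix Multiply Batch
# Shapes Transformer - u8s8f32" result below.
timings = {"A": 0.239859, "B": 0.239911, "C": 7.47844}

fastest = min(timings.values())
slowdown = {run: t / fastest for run, t in timings.items()}

for run in sorted(slowdown):
    print(f"Run {run}: {slowdown[run]:.2f}x the fastest time")
```

The same normalization applies to any of the ms-based oneDNN results in this file.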

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 0.239859 | B: 0.239911 | C: 7.47844 (MIN: 0.25)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 1.77024 (MIN: 1.69) | B: 1.76689 (MIN: 1.68) | C: 4.15803 (MIN: 1.69)

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 16 (Connections/sec, more is better)
A: 1011 | B: 1013 | C: 1010 | D: 2080 (MIN: 1010 / MAX: 1020)

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 32 (Connections/sec, more is better)
A: 2083 (MIN: 1010) | B: 1012 (MIN: 1010 / MAX: 1020) | C: 1656 (MIN: 1010) | D: 1012 (MIN: 1010 / MAX: 1020)

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 4 (us, fewer is better)
A: 32.22 (MIN: 28.6 / MAX: 33.81) | B: 42.05 (MIN: 35.62 / MAX: 50.47) | C: 41.88 (MIN: 37.48 / MAX: 49.72) | D: 43.09 (MIN: 38.43 / MAX: 49.14)

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 64 (us, fewer is better)
A: 43.03 (MIN: 33.02 / MAX: 50.73) | B: 40.86 (MIN: 32.67 / MAX: 45.14) | C: 41.89 (MIN: 31.68 / MAX: 45.62) | D: 32.26 (MIN: 28.92 / MAX: 47.21)

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 1 (Gbits/sec, more is better)
A: 23.38 (MIN: 21.34 / MAX: 25.05) | B: 17.72 (MIN: 14.68 / MAX: 22.62) | C: 23.10 (MIN: 21.11 / MAX: 24.02) | D: 23.14 (MIN: 22.13 / MAX: 24.35)

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 2 (us, fewer is better)
A: 41.77 (MIN: 34.05 / MAX: 52.7) | B: 41.80 (MIN: 37.72 / MAX: 49.34) | C: 41.36 (MIN: 37.14 / MAX: 46.94) | D: 32.24 (MIN: 29.44 / MAX: 40.51)

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 2 (Gbits/sec, more is better)
A: 23.51 (MIN: 14.62 / MAX: 43.04) | B: 24.94 (MIN: 16.75 / MAX: 43.96) | C: 20.67 (MIN: 13.75 / MAX: 37.78) | D: 25.74 (MIN: 16.46 / MAX: 43.62)

Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 4 (Gbits/sec, more is better)
A: 1291.93 (MIN: 23.68) | B: 1601.86 (MIN: 24.41) | C: 1426.07 (MIN: 24.04) | D: 1377.40 (MIN: 24.11)

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.
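The "Epoll Wait" figures below count how many epoll wakeups a thread can service per second. As a rough illustration of the mechanism only (not perf's actual implementation, which exercises many descriptors), a minimal Linux-only Python sketch:

```python
import os
import select
import time

def epoll_wait_ops_per_sec(duration=0.2):
    """Count epoll round trips per second on an fd that is always
    ready, loosely mirroring the idea behind perf-bench's Epoll Wait.
    Linux-only (select.epoll)."""
    r, w = os.pipe()
    ep = select.epoll()
    ep.register(r, select.EPOLLIN)
    os.write(w, b"x")  # one pending byte keeps the fd readable forever
    ops = 0
    end = time.monotonic() + duration
    while time.monotonic() < end:
        ep.poll()  # returns immediately: data is already pending
        ops += 1
    ep.close()
    os.close(r)
    os.close(w)
    return ops / duration

print(f"~{epoll_wait_ops_per_sec():,.0f} epoll wait calls/sec")
```

A single-process Python loop like this measures syscall and interpreter overhead, so its absolute numbers are not comparable to perf-bench's multi-fd results.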

perf-bench - Benchmark: Epoll Wait (ops/sec, more is better)
A: 7708 | B: 6350 | C: 7556 | D: 6314
1. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -export-dynamic -O6 -ggdb3 -funwind-tables -std=gnu99 -fPIC -lnuma

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 16 (us, fewer is better)
A: 41.35 (MIN: 36.81 / MAX: 44.28) | B: 41.70 (MIN: 32.41 / MAX: 45.69) | C: 42.91 (MIN: 30.22 / MAX: 51.22) | D: 35.97 (MIN: 29.59 / MAX: 45.4)

oneDNN


oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
A: 4.05176 (MIN: 2.94) | B: 3.40714 (MIN: 2.86) | C: 3.81944 (MIN: 2.93)

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 32 (Gbits/sec, more is better)
A: 15.24 (MIN: 5.94 / MAX: 261.6) | B: 13.03 (MIN: 5.14 / MAX: 243.16) | C: 15.09 (MIN: 4.73 / MAX: 269.41) | D: 14.55 (MIN: 4.94 / MAX: 249.01)

perf-bench


perf-bench - Benchmark: Memset 1MB (GB/sec, more is better)
A: 56.89 | B: 59.80 | C: 52.07 | D: 53.02

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 1 (us, fewer is better)
A: 37.54 (MIN: 29.88 / MAX: 48.56) | B: 39.87 (MIN: 31.88 / MAX: 47.22) | C: 41.08 (MIN: 32.43 / MAX: 50.94) | D: 43.11 (MIN: 38.3 / MAX: 49.42)

oneDNN


oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 779.53 (MIN: 769.69) | B: 890.07 (MIN: 769.73) | C: 788.89 (MIN: 765.69)

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 16 (Packets/sec, more is better)
A: 2172000 (MIN: 1940000 / MAX: 2280000) | B: 2267200 (MIN: 2220000 / MAX: 2310000) | C: 2035600 (MIN: 1790000 / MAX: 2270000) | D: 2003200 (MIN: 1930000 / MAX: 2090000)

Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 16 (Gbits/sec, more is better)
A: 32.86 (MIN: 11.43 / MAX: 291.97) | B: 33.88 (MIN: 12.22 / MAX: 295.21) | C: 30.73 (MIN: 12.32 / MAX: 290.81) | D: 30.25 (MIN: 12.07 / MAX: 268.09)
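Ethr reports the same UDP run both as a packet rate and as bandwidth, so dividing one by the other gives the implied mean datagram size. A small sketch using run A's 16-thread figures from the two results above:

```python
# Implied mean datagram size from Ethr's paired UDP results
# (run A, 16 threads): bandwidth divided by packet rate.
bandwidth_bits_per_sec = 32.86e9   # 32.86 Gbits/sec
packets_per_sec = 2_172_000        # from the Packets/sec result

bits_per_packet = bandwidth_bits_per_sec / packets_per_sec
bytes_per_packet = bits_per_packet / 8
print(f"~{bytes_per_packet:.0f} bytes per datagram")
```

The same arithmetic can sanity-check any of the paired Gbits/sec and Packets/sec UDP results in this file.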

oneDNN


oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 865.52 (MIN: 766.63) | B: 781.33 (MIN: 775.04) | C: 779.14 (MIN: 771.23)

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 1.27139 (MIN: 1.18) | B: 1.36355 (MIN: 1.27) | C: 1.41028 (MIN: 1.32)

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 8 (Gbits/sec, more is better)
A: 40.16 (MIN: 18.92 / MAX: 191.69) | B: 40.97 (MIN: 18.81 / MAX: 192.13) | C: 37.47 (MIN: 18.98 / MAX: 175.77) | D: 40.60 (MIN: 18.23 / MAX: 199.35)

Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 8 (Packets/sec, more is better)
A: 1398400 (MIN: 1230000 / MAX: 1500000) | B: 1425600 (MIN: 1220000 / MAX: 1500000) | C: 1308400 (MIN: 1210000 / MAX: 1370000) | D: 1428000 (MIN: 1260000 / MAX: 1560000)

perf-bench


perf-bench - Benchmark: Memcpy 1MB (GB/sec, more is better)
A: 16.80 | B: 18.29 | C: 16.79 | D: 17.05

oneDNN


oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 1377.51 (MIN: 1371.47) | B: 1367.03 (MIN: 1359.37) | C: 1473.77 (MIN: 1371.19)

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 2 (Packets/sec, more is better)
A: 378141 (MIN: 349770 / MAX: 426690) | B: 368719 (MIN: 335020 / MAX: 393720) | C: 387067 (MIN: 338890 / MAX: 426850) | D: 366784 (MIN: 337700 / MAX: 385930)

Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 2 (Gbits/sec, more is better)
A: 32.75 (MIN: 22.31 / MAX: 54.62) | B: 31.86 (MIN: 21.41 / MAX: 52.55) | C: 33.27 (MIN: 21.65 / MAX: 54.64) | D: 31.66 (MIN: 21.42 / MAX: 51.04)

Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 1 (Gbits/sec, more is better)
A: 19.75 (MIN: 18.01 / MAX: 25.67) | B: 18.88 (MIN: 17.69 / MAX: 21.31) | C: 19.80 (MIN: 17.79 / MAX: 25.36) | D: 19.68 (MIN: 17.78 / MAX: 24.79)

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.
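The encoder speed presets trade compression effort for encode time: run A drops from 113.42 seconds at speed 0 to 6.489 seconds at speed 6 (values from the results below). A small sketch computing the speed-up between presets:

```python
# Relative encode-time speed-up between avifenc speed presets,
# using run A's times in seconds from the results below.
encode_seconds = {"speed 0": 113.42, "speed 2": 58.90, "speed 6": 6.489}

baseline = encode_seconds["speed 0"]
for preset, secs in encode_seconds.items():
    print(f"{preset}: {secs:.2f}s ({baseline / secs:.1f}x faster than speed 0)")
```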

libavif avifenc 0.10 - Encoder Speed: 6 (Seconds, fewer is better)
A: 6.489 | B: 6.766 | C: 6.524
1. (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds, fewer is better)
A: 10.90 | B: 10.91 | C: 11.35

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 16 (Gbits/sec, more is better)
A: 21.58 (MIN: 8.82 / MAX: 183.6) | B: 20.73 (MIN: 8.03 / MAX: 182.13) | C: 20.93 (MIN: 8.39 / MAX: 185.03) | D: 21.46 (MIN: 8.37 / MAX: 184.83)

oneDNN


oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
A: 1391.25 (MIN: 1375.6) | B: 1378.88 (MIN: 1370.45) | C: 1434.55 (MIN: 1353.51)

perf-bench


perf-bench - Benchmark: Futex Lock-Pi (ops/sec, more is better)
A: 111 | B: 107 | C: 107

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 32 (Gbits/sec, more is better)
A: 20.71 (MIN: 5.71 / MAX: 358.92) | B: 21.18 (MIN: 6.11 / MAX: 361.13) | C: 21.47 (MIN: 5.68 / MAX: 364.15) | D: 21.08 (MIN: 5.84 / MAX: 360.82)

Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 4 (Packets/sec, more is better)
A: 812667 (MIN: 778180 / MAX: 866540) | B: 841225 (MIN: 798610 / MAX: 902140) | C: 820398 (MIN: 778870 / MAX: 857010) | D: 820263 (MIN: 777610 / MAX: 866340)

oneDNN


oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
A: 2.60341 (MIN: 2.36) | B: 2.53489 (MIN: 2.3) | C: 2.52027 (MIN: 2.25)

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 64 (Packets/sec, more is better)
A: 2996400 (MIN: 2780000 / MAX: 3170000) | B: 2940000 (MIN: 2910000 / MAX: 2990000) | C: 2950800 (MIN: 2900000 / MAX: 2980000) | D: 2907600 (MIN: 2890000 / MAX: 2940000)

oneDNN


oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 1.22818 (MIN: 1.14) | B: 1.23269 (MIN: 1.15) | C: 1.19641 (MIN: 1.12)

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 64 (Gbits/sec, more is better)
A: 8.49 (MIN: 1.22 / MAX: 296.78) | B: 8.67 (MIN: 2.19 / MAX: 291.04) | C: 8.65 (MIN: 1.55 / MAX: 291.46) | D: 8.74 (MIN: 2.35 / MAX: 299.41)

Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 64 (Gbits/sec, more is better)
A: 11.91 (MIN: 4.84 / MAX: 405.65) | B: 11.71 (MIN: 4.19 / MAX: 382.52) | C: 11.73 (MIN: 4.43 / MAX: 381.48) | D: 11.57 (MIN: 4.84 / MAX: 376.44)

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 32 (us, fewer is better)
A: 41.50 (MIN: 37.34 / MAX: 50.59) | B: 42.39 (MIN: 35.95 / MAX: 46.82) | C: 41.20 (MIN: 38.11 / MAX: 45.61) | D: 42.02 (MIN: 30.7 / MAX: 52.35)

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 8 (Gbits/sec, more is better)
A: 25.34 (MIN: 12.61 / MAX: 127.4) | B: 24.93 (MIN: 12.3 / MAX: 117.07) | C: 24.73 (MIN: 11.99 / MAX: 115.22) | D: 25.16 (MIN: 12.8 / MAX: 126.04)

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 8 (us, fewer is better)
A: 40.25 (MIN: 31.18 / MAX: 45.88) | B: 41.13 (MIN: 31.32 / MAX: 48.35) | C: 41.09 (MIN: 32.56 / MAX: 47.82) | D: 41.20 (MIN: 31.05 / MAX: 46.46)

libavif avifenc


libavif avifenc 0.10 - Encoder Speed: 0 (Seconds, fewer is better)
A: 113.42 | B: 112.78 | C: 110.92

libavif avifenc 0.10 - Encoder Speed: 10, Lossless (Seconds, fewer is better)
A: 7.488 | B: 7.397 | C: 7.343

libavif avifenc 0.10 - Encoder Speed: 2 (Seconds, fewer is better)
A: 58.90 | B: 59.15 | C: 60.00

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
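The three InfluxDB results in this file vary only the number of concurrent inch write streams (4, 64, and 1024). A small sketch, using run A's val/sec figures from those results, showing the diminishing returns from added concurrency:

```python
# Write throughput vs. concurrent inch streams (run A, val/sec,
# from the InfluxDB results in this file).
throughput = {4: 783450.1, 64: 1040989.9, 1024: 1079356.2}

base_streams = 4
base_rate = throughput[base_streams]
for streams, rate in throughput.items():
    print(f"{streams:>4} streams: {rate:,.0f} val/sec "
          f"({rate / base_rate:.2f}x vs {base_streams} streams)")
```

Going from 4 to 64 streams buys roughly a third more throughput, while 64 to 1024 adds only a few percent.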

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better)
A: 1040989.9 | B: 1028876.4 | C: 1021955.0

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 4 (Gbits/sec, more is better)
A: 25.87 (MIN: 14.13 / MAX: 78.06) | B: 25.63 (MIN: 13.97 / MAX: 77.66) | C: 25.84 (MIN: 14.08 / MAX: 75.21) | D: 26.09 (MIN: 13.23 / MAX: 79.39)

Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 32 (Packets/sec, more is better)
A: 2762800 (MIN: 2690000 / MAX: 2800000) | B: 2755600 (MIN: 2630000 / MAX: 2820000) | C: 2802400 (MIN: 2740000 / MAX: 2840000) | D: 2773600 (MIN: 2670000 / MAX: 2820000)

oneDNN


oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 3.72950 (MIN: 3.68) | B: 3.76678 (MIN: 3.72) | C: 3.70428 (MIN: 3.65)

perf-bench


perf-bench - Benchmark: Sched Pipe (ops/sec, more is better)
A: 180154 | B: 181416 | C: 183185 | D: 182091

oneDNN


oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 1396.33 (MIN: 1377.62) | B: 1404.64 (MIN: 1353.93) | C: 1382.23 (MIN: 1361.51)

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 7.53974 (MIN: 7.45) | B: 7.46401 (MIN: 7.38) | C: 7.47071 (MIN: 7.38)

InfluxDB


InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better)
A: 1079356.2 | B: 1071590.1 | C: 1082050.0

oneDNN


oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
A: 5.71974 (MIN: 5.55) | B: 5.67367 (MIN: 5.54) | C: 5.68132 (MIN: 5.54)

InfluxDB


InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better)
A: 783450.1 | B: 786195.2 | C: 780902.8

perf-bench


perf-bench - Benchmark: Syscall Basic (ops/sec, more is better)
A: 16717638 | B: 16775264 | C: 16828687

oneDNN


oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 2.78136 (MIN: 2.75) | B: 2.79740 (MIN: 2.75) | C: 2.79349 (MIN: 2.74)

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 6.95526 (MIN: 6.88) | B: 6.95150 (MIN: 6.84) | C: 6.91711 (MIN: 6.85)

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 0.471479 (MIN: 0.45) | B: 0.472238 (MIN: 0.45) | C: 0.469779 (MIN: 0.45)

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
A: 8.74834 (MIN: 8.64) | B: 8.73941 (MIN: 8.63) | C: 8.78465 (MIN: 8.65)

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
A: 785.44 (MIN: 773.81) | B: 781.42 (MIN: 775.98) | C: 781.50 (MIN: 772.15)

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 0.555970 (MIN: 0.53) | B: 0.556711 (MIN: 0.54) | C: 0.558546 (MIN: 0.54)

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 0.691164 (MIN: 0.68) | B: 0.691175 (MIN: 0.68) | C: 0.693769 (MIN: 0.68)

Java JMH

This basic test profile runs the stock sample benchmark of Java JMH (the Java Microbenchmark Harness) via Maven. Learn more via the OpenBenchmarking.org test page.

Java JMH - Throughput (Ops/s, more is better)
A: 53387527398.06 | B: 53514068363.93 | C: 53315949803.28

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 8 (Connections/sec, More Is Better)
  A: 1012 (MIN: 1010 / MAX: 1020)
  B: 1011 (MIN: 1010 / MAX: 1020)
  C: 1010
  D: 1010

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 64 (Connections/sec, More Is Better)
  A: 1016 (MIN: 1010 / MAX: 1020)
  B: 1014 (MIN: 1010 / MAX: 1020)
  C: 1014 (MIN: 1010 / MAX: 1020)
  D: 1015 (MIN: 1010 / MAX: 1020)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result reported is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  A: 9.55623 (MIN: 9.46)
  B: 9.53841 (MIN: 9.46)
  C: 9.54348 (MIN: 9.45)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  A: 6.36114 (MIN: 6.32)
  B: 6.35880 (MIN: 6.32)
  C: 6.36553 (MIN: 6.32)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 4 (Connections/sec, More Is Better)
  A: 1010
  B: 1010
  C: 1010
  D: 1011 (MIN: 1010)

oneDNN


oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  A: 11.08 (MIN: 8.7)
  B: 11.08 (MIN: 7.46)
  C: 11.08 (MIN: 9.51)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Futex Hash (ops/sec, More Is Better)
  A: 2825232
  B: 2824913
  C: 2824285
  D: 2824783
  1. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -export-dynamic -O6 -ggdb3 -funwind-tables -std=gnu99 -fPIC -lnuma
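The view options for this result file include "Show Overall Geometric Mean". As a hedged sketch of how a geometric mean could be taken across per-run results, using the Futex Hash numbers above (this is not the suite's own aggregation code, just an illustration of the statistic):

```python
from statistics import geometric_mean

# perf-bench Futex Hash results for runs A-D (ops/sec), from the table above
runs = {"A": 2825232, "B": 2824913, "C": 2824285, "D": 2824783}

# The geometric mean is the conventional choice for overall benchmark scores
# because it weights relative (ratio) differences equally across tests.
gmean = geometric_mean(runs.values())
print(f"overall geometric mean: {gmean:.0f} ops/sec")
```

`statistics.geometric_mean` requires Python 3.8 or newer; for values this tightly clustered, the result sits between the slowest and fastest run, close to the arithmetic mean.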

Ethr

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 2 (Connections/sec, More Is Better)
  A: 1010
  B: 1010
  C: 1010
  D: 1010

Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 1 (Connections/sec, More Is Better)
  A: 1010
  B: 1010
  C: 1010
  D: 1010

74 Results Shown

oneDNN:
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  IP Shapes 1D - f32 - CPU
Ethr:
  TCP - Connections/s - 16
  TCP - Connections/s - 32
  TCP - Latency - 4
  TCP - Latency - 64
  TCP - Bandwidth - 1
  TCP - Latency - 2
  TCP - Bandwidth - 2
  UDP - Bandwidth - 4
perf-bench
Ethr
oneDNN
Ethr
perf-bench
Ethr
oneDNN
Ethr:
  UDP - Bandwidth - 16:
    Packets/sec
    Gbits/sec
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
Ethr:
  UDP - Bandwidth - 8:
    Gbits/sec
    Packets/sec
perf-bench
oneDNN
Ethr:
  UDP - Bandwidth - 2:
    Packets/sec
    Gbits/sec
  UDP - Bandwidth - 1:
    Gbits/sec
libavif avifenc:
  6
  6, Lossless
Ethr
oneDNN
perf-bench
Ethr:
  UDP - Bandwidth - 32
  UDP - Bandwidth - 4
oneDNN
Ethr
oneDNN
Ethr:
  TCP - Bandwidth - 64
  UDP - Bandwidth - 64
  TCP - Latency - 32
  TCP - Bandwidth - 8
  TCP - Latency - 8
libavif avifenc:
  0
  10, Lossless
  2
InfluxDB
Ethr:
  TCP - Bandwidth - 4
  UDP - Bandwidth - 32
oneDNN
perf-bench
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
InfluxDB
oneDNN
InfluxDB
perf-bench
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
Java JMH
Ethr:
  TCP - Connections/s - 8
  TCP - Connections/s - 64
oneDNN:
  Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU
  Convolution Batch Shapes Auto - bf16bf16bf16 - CPU
Ethr
oneDNN
perf-bench
Ethr:
  TCP - Connections/s - 2
  TCP - Connections/s - 1