3200u april

AMD Ryzen 3 3200U testing with a MOTILE PF4PU1F (N.1.03 BIOS) and AMD Radeon Vega 3 512MB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2204013-NE-3200UAPRI16
Test Runs

Run | Date          | Test Duration
A   | April 01 2022 | 40 Minutes
B   | April 01 2022 | 2 Hours, 38 Minutes
C   | April 01 2022 | 2 Hours, 48 Minutes


3200u april - OpenBenchmarking.org / Phoronix Test Suite

Processor: AMD Ryzen 3 3200U @ 2.60GHz (2 Cores / 4 Threads)
Motherboard: MOTILE PF4PU1F (N.1.03 BIOS)
Chipset: AMD Raven/Raven2
Memory: 3584MB
Disk: 128GB BIWIN SSD
Graphics: AMD Radeon Vega 3 512MB (1200/1200MHz)
Audio: AMD Raven/Raven2/Fenghuang
Network: Realtek RTL8111/8168/8411 + Intel Dual Band-AC 3168NGW
OS: Ubuntu 20.04
Kernel: 5.15.0-051500-generic (x86_64)
Desktop: GNOME Shell 3.36.9
Display Server: X Server 1.20.13
OpenGL: 4.6 Mesa 22.0.0-devel (git-9cb9101 2022-01-08 focal-oibaf-ppa) (LLVM 13.0.0 DRM 3.42)
Compiler: GCC 9.4.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs (3200u April Benchmarks):
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-yTrUTS/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled)
- CPU Microcode: 0x8108102
- OpenJDK Runtime Environment (build 11.0.14+9-Ubuntu-0ubuntu2.20.04)
- Python 3.8.10
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (runs A/B/C, relative performance normalized per test suite, scale 100% to 131%): fast-cli, speedtest-cli, Java JMH, oneDNN, perf-bench.
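The overview percentages above come from normalizing each test to a common baseline and aggregating per run. A minimal sketch of that idea (the exact Phoronix aggregation may differ; this normalizes each test to its slowest run and takes a geometric mean, using two perf-bench results taken from this file):

```python
from math import prod

def geomean(xs):
    """Geometric mean of a list of positive numbers."""
    return prod(xs) ** (1.0 / len(xs))

# Higher-is-better results for runs A/B/C (perf-bench values from this file).
results = {
    "Syscall Basic": {"A": 12457259, "B": 13171066, "C": 13256292},
    "Epoll Wait":    {"A": 495289,   "B": 493971,  "C": 493722},
}

runs = ["A", "B", "C"]
# Normalize each test so the slowest run = 1.0, then aggregate per run.
normalized = {
    r: geomean([v[r] / min(v.values()) for v in results.values()])
    for r in runs
}
for r in runs:
    print(f"{r}: {normalized[r] * 100:.1f}%")
```

For lower-is-better results (the oneDNN times), the ratio would be inverted before aggregating, so that a larger normalized value always means faster.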

3200u april - Summary of Results

Values are as reported in this file. For oneDNN times (ms) and latencies (ms), fewer is better; for throughput and transfer speeds, more is better.

Test | A | B | C
perf-bench, Benchmark: Syscall Basic (ops/sec) | 12457259 | 13171066 | 13256292
oneDNN, Deconvolution Batch shapes_1d - u8s8f32 - CPU (ms) | 37.0595 | 39.0542 | 38.7427
oneDNN, Recurrent Neural Network Training - bf16bf16bf16 - CPU (ms) | 38516.6 | 39570.9 | 40176.8
Java JMH, Throughput (Ops/s) | 2843596070.78 | 2953181899.57 | 2961014078.74
oneDNN, IP Shapes 1D - f32 - CPU (ms) | 36.8657 | 38.1732 | 38.3865
oneDNN, Recurrent Neural Network Inference - bf16bf16bf16 - CPU (ms) | 20185.1 | 20641.7 | 20988.0
perf-bench, Memset 1MB (GB/sec) | 45.840817 | 44.374970 | 44.099181
oneDNN, Recurrent Neural Network Training - u8s8f32 - CPU (ms) | 38532.4 | 39394.2 | 39940.8
perf-bench, Sched Pipe (ops/sec) | 218987 | 213964 | 211366
oneDNN, Recurrent Neural Network Inference - f32 - CPU (ms) | 20075.5 | 20542.5 | 20661.9
oneDNN, Recurrent Neural Network Inference - u8s8f32 - CPU (ms) | 20225.0 | 20561.2 | 20752.7
oneDNN, Recurrent Neural Network Training - f32 - CPU (ms) | 38761.8 | 39175.7 | 39745.0
oneDNN, Convolution Batch Shapes Auto - f32 - CPU (ms) | 48.2991 | 49.0765 | 49.4974
perf-bench, Futex Hash (ops/sec) | 4479948 | 4373853 | 4405813
oneDNN, Deconvolution Batch shapes_3d - f32 - CPU (ms) | 64.3965 | 65.6474 | 65.9576
perf-bench, Memcpy 1MB (GB/sec) | 14.937413 | 14.632084 | 14.818642
oneDNN, Deconvolution Batch shapes_3d - u8s8f32 - CPU (ms) | 49.0602 | 49.5577 | 50.0337
oneDNN, Convolution Batch Shapes Auto - u8s8f32 - CPU (ms) | 68.5488 | 69.8173 | 69.7557
oneDNN, Deconvolution Batch shapes_1d - f32 - CPU (ms) | 55.8731 | 56.8170 | 56.8857
oneDNN, Matrix Multiply Batch Shapes Transformer - f32 - CPU (ms) | 16.4487 | 16.6478 | 16.6831
oneDNN, IP Shapes 3D - u8s8f32 - CPU (ms) | 6.68978 | 6.75147 | 6.73920
perf-bench, Futex Lock-Pi (ops/sec) | 3725 | 3747 | 3755
oneDNN, IP Shapes 1D - u8s8f32 - CPU (ms) | 28.5459 | 28.7745 | 28.6295
oneDNN, IP Shapes 3D - f32 - CPU (ms) | 16.8451 | 16.8343 | 16.9444
oneDNN, Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU (ms) | 13.3432 | 13.3321 | 13.3768
perf-bench, Epoll Wait (ops/sec) | 495289 | 493971 | 493722
speedtest-cli, Internet Latency (ms) | 42.541 | 29.932 | 44.090
speedtest-cli, Internet Upload Speed (Mbit/s) | 5.89 | 5.92 | 5.68
speedtest-cli, Internet Download Speed (Mbit/s) | 70.15 | 71.72 | 49.96
fast-cli, Internet Loaded Latency (Bufferbloat) (ms) | 89 | 188 | 234
fast-cli, Internet Latency (ms) | 25 | 25 | 25
fast-cli, Internet Upload Speed (Mbit/s) | 4.9 | 4.8 | 4.8
fast-cli, Internet Download Speed (Mbit/s) | 80 | 70 | 53

perf-bench

This test profile runs Linux perf-bench, the benchmarking support built into the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Syscall Basic (ops/sec, more is better)
C: 13256292
B: 13171066
A: 12457259
Reported SE: +/- 95015.37 (N = 3) and +/- 84928.86 (N = 3)
(CC) gcc options: -O6 -ggdb3 -funwind-tables -std=gnu99 -lunwind-x86_64 -lunwind -llzma -Xlinker -lpthread -lrt -lm -ldl -lelf -lcrypto -lslang -lz -lnuma
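The "SE +/- ..., N = ..." annotations are standard errors of the mean over the repeated trials behind each bar. A quick sketch of how such a figure is derived (the per-trial values below are illustrative; this file reports only the aggregated numbers):

```python
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / len(samples) ** 0.5

# Hypothetical per-trial throughputs for one run (ops/sec).
trials = [13100000, 13200000, 13213198]
mean = statistics.fmean(trials)
se = standard_error(trials)
print(f"{mean:.2f} ops/sec, SE +/- {se:.2f}, N = {len(trials)}")
```

A small SE relative to the mean, as in most results here, indicates the trials within a run were consistent.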

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 37.06 (MIN: 32.44)
C: 38.74 (MIN: 32.14)
B: 39.05 (MIN: 32.94)
Reported SE: +/- 0.47 (N = 3) and +/- 0.39 (N = 3)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
A: 38516.6 (MIN: 38386.3)
B: 39570.9 (MIN: 38855.2)
C: 40176.8 (MIN: 39914.6)
Reported SE: +/- 251.58 (N = 3) and +/- 53.42 (N = 3)

Java JMH

This very basic test profile runs the stock Java JMH benchmark via Maven. Learn more via the OpenBenchmarking.org test page.

Java JMH - Throughput (Ops/s, more is better)
C: 2961014078.74
B: 2953181899.58
A: 2843596070.78

oneDNN


oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 36.87 (MIN: 34.13)
B: 38.17 (MIN: 33.92)
C: 38.39 (MIN: 34.09)
Reported SE: +/- 0.61 (N = 3) and +/- 0.58 (N = 3)

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
A: 20185.1 (MIN: 20078.8)
B: 20641.7 (MIN: 20454.6)
C: 20988.0 (MIN: 20793.7)
Reported SE: +/- 45.51 (N = 3) and +/- 25.94 (N = 3)

perf-bench


perf-bench - Benchmark: Memset 1MB (GB/sec, more is better)
A: 45.84
B: 44.37
C: 44.10
Reported SE: +/- 0.41 (N = 10) and +/- 0.38 (N = 12)

oneDNN


oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 38532.4 (MIN: 38387)
B: 39394.2 (MIN: 39008.4)
C: 39940.8 (MIN: 39404.3)
Reported SE: +/- 199.84 (N = 3) and +/- 265.92 (N = 3)

perf-bench


perf-bench - Benchmark: Sched Pipe (ops/sec, more is better)
A: 218987
B: 213964
C: 211366
Reported SE: +/- 3038.65 (N = 3) and +/- 3146.65 (N = 4)

oneDNN


oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 20075.5 (MIN: 19976.5)
B: 20542.5 (MIN: 20405.1)
C: 20661.9 (MIN: 20509.8)
Reported SE: +/- 9.59 (N = 3) and +/- 32.94 (N = 3)

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 20225.0 (MIN: 20127.6)
B: 20561.2 (MIN: 20284.8)
C: 20752.7 (MIN: 20485)
Reported SE: +/- 77.64 (N = 3) and +/- 76.03 (N = 3)

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 38761.8 (MIN: 38608.8)
B: 39175.7 (MIN: 38587.5)
C: 39745.0 (MIN: 39411.1)
Reported SE: +/- 210.62 (N = 3) and +/- 132.17 (N = 3)

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 48.30 (MIN: 46.76)
B: 49.08 (MIN: 47.39)
C: 49.50 (MIN: 47.05)
Reported SE: +/- 0.27 (N = 3) and +/- 0.30 (N = 3)

perf-bench


perf-bench - Benchmark: Futex Hash (ops/sec, more is better)
A: 4479948
C: 4405813
B: 4373853
Reported SE: +/- 25564.63 (N = 3) and +/- 18272.82 (N = 3)

oneDNN


oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 64.40 (MIN: 61.73)
B: 65.65 (MIN: 63.07)
C: 65.96 (MIN: 63.15)
Reported SE: +/- 0.05 (N = 3) and +/- 0.07 (N = 3)

perf-bench


perf-bench - Benchmark: Memcpy 1MB (GB/sec, more is better)
A: 14.94
C: 14.82
B: 14.63
Reported SE: +/- 0.05 (N = 3) and +/- 0.12 (N = 3)

oneDNN


oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 49.06 (MIN: 47.61)
B: 49.56 (MIN: 46.99)
C: 50.03 (MIN: 45.6)
Reported SE: +/- 0.05 (N = 3) and +/- 0.25 (N = 3)

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 68.55 (MIN: 68.19)
C: 69.76 (MIN: 68.52)
B: 69.82 (MIN: 68.71)
Reported SE: +/- 0.32 (N = 3) and +/- 0.27 (N = 3)

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 55.87 (MIN: 51.91)
B: 56.82 (MIN: 51.77)
C: 56.89 (MIN: 52.24)
Reported SE: +/- 0.20 (N = 3) and +/- 0.36 (N = 3)

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
A: 16.45 (MIN: 15.49)
B: 16.65 (MIN: 15.56)
C: 16.68 (MIN: 15.42)
Reported SE: +/- 0.02 (N = 3) and +/- 0.01 (N = 3)

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 6.68978 (MIN: 6.21)
C: 6.73920 (MIN: 6.12)
B: 6.75147 (MIN: 6.08)
Reported SE: +/- 0.03112 (N = 3) and +/- 0.02701 (N = 3)

perf-bench


perf-bench - Benchmark: Futex Lock-Pi (ops/sec, more is better)
C: 3755
B: 3747
A: 3725
Reported SE: +/- 4.04 (N = 3) and +/- 11.02 (N = 3)

oneDNN


oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
A: 28.55 (MIN: 26.62)
C: 28.63 (MIN: 26.65)
B: 28.77 (MIN: 26.44)
Reported SE: +/- 0.08 (N = 3) and +/- 0.06 (N = 3)

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
B: 16.83 (MIN: 16.67)
A: 16.85 (MIN: 16.75)
C: 16.94 (MIN: 16.69)
Reported SE: +/- 0.03 (N = 3) and +/- 0.05 (N = 3)

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
B: 13.33 (MIN: 11.95)
A: 13.34 (MIN: 11.94)
C: 13.38 (MIN: 12.06)
Reported SE: +/- 0.01 (N = 3) and +/- 0.01 (N = 3)

perf-bench


perf-bench - Benchmark: Epoll Wait (ops/sec, more is better)
A: 495289
B: 493971
C: 493722
Reported SE: +/- 1986.39 (N = 3) and +/- 1579.52 (N = 3)

oneDNN


The following oneDNN configurations did not produce a result on any of the three runs (A, B, C):

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
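One common reason bf16 harnesses fail to produce a result is missing CPU bfloat16 support; on x86 this is advertised via the avx512_bf16 cpuinfo flag, which this Zen-based 3200U does not have. A hypothetical helper for checking a /proc/cpuinfo flags line (the flag list below is abridged and illustrative, not the full output of this machine):

```python
def supports_bf16(cpuinfo_flags: str) -> bool:
    """Return True if the x86 bfloat16 flag appears in a cpuinfo flags line."""
    return "avx512_bf16" in cpuinfo_flags.split()

# Flags as they might appear for a Ryzen 3 3200U (abridged, illustrative).
ryzen_3200u = "fpu vme de pse tsc msr sse sse2 ssse3 sse4_1 sse4_2 avx avx2 fma"
print(supports_bf16(ryzen_3200u))  # prints False
```

Note that some bf16bf16bf16 harnesses above (the Recurrent Neural Network ones) did run, so oneDNN's behavior without native bf16 evidently varies by primitive.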

speedtest-cli

This test profile uses the open-source speedtest-cli client to benchmark your Internet connection's upload/download performance and latency against the Speedtest.net servers. Learn more via the OpenBenchmarking.org test page.

speedtest-cli 2.1.3 - Internet Latency (ms, fewer is better)
B: 29.93
A: 42.54
C: 44.09
Reported SE: +/- 0.79 (N = 15) and +/- 2.30 (N = 14)

speedtest-cli 2.1.3 - Internet Upload Speed (Mbit/s, more is better)
B: 5.92
A: 5.89
C: 5.68
Reported SE: +/- 0.08 (N = 15) and +/- 0.09 (N = 14)

speedtest-cli 2.1.3 - Internet Download Speed (Mbit/s, more is better)
B: 71.72
A: 70.15
C: 49.96
Reported SE: +/- 1.89 (N = 15) and +/- 1.56 (N = 14)

fast-cli

This test profile uses the open-source fast-cli client to benchmark your Internet connection's upload/download performance and latency against Netflix's fast.com service. Learn more via the OpenBenchmarking.org test page.

fast-cli - Internet Loaded Latency (Bufferbloat) (ms, fewer is better)
A: 89
B: 188
C: 234
Reported SE: +/- 5.46 (N = 15) and +/- 20.74 (N = 15)

fast-cli - Internet Latency (ms, fewer is better)
A: 25
B: 25
C: 25
Reported SE: +/- 0.54 (N = 15) and +/- 0.63 (N = 15)

fast-cli - Internet Upload Speed (Mbit/s, more is better)
A: 4.9
C: 4.8
B: 4.8
Reported SE: +/- 0.14 (N = 15) and +/- 0.13 (N = 15)

fast-cli - Internet Download Speed (Mbit/s, more is better)
A: 80
B: 70
C: 53
Reported SE: +/- 1.56 (N = 15) and +/- 1.87 (N = 15)