2700 august

AMD Ryzen 7 2700 Eight-Core testing with a Gigabyte AB350N-Gaming WIFI-CF (F20 BIOS) and HIS AMD Radeon HD 6450/7450/8450 / R5 230 OEM 1GB on Ubuntu 19.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2108316-TJ-2700AUGUS79
Test categories represented in this result file:

Timed Code Compilation: 2 Tests
C/C++ Compiler Tests: 3 Tests
CPU Massive: 5 Tests
Creator Workloads: 6 Tests
HPC - High Performance Computing: 5 Tests
Machine Learning: 4 Tests
Multi-Core: 7 Tests
Programmer / Developer System Benchmarks: 2 Tests
Python Tests: 2 Tests
Raytracing: 2 Tests
Renderers: 3 Tests
Server CPU Tests: 4 Tests


Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
1
August 30 2021
  2 Hours, 39 Minutes
2
August 30 2021
  9 Hours, 51 Minutes
3
August 31 2021
  3 Hours, 14 Minutes
4
August 31 2021
  2 Hours, 41 Minutes
Invert Hiding All Results Option
  4 Hours, 36 Minutes



System configuration (identical for runs 1 through 4):

Processor: AMD Ryzen 7 2700 Eight-Core @ 3.20GHz (8 Cores / 16 Threads)
Motherboard: Gigabyte AB350N-Gaming WIFI-CF (F20 BIOS)
Chipset: AMD 17h
Memory: 16GB
Disk: 120GB ADATA SU700
Graphics: HIS AMD Radeon HD 6450/7450/8450 / R5 230 OEM 1GB
Audio: AMD Caicos HDMI Audio
Monitor: DELL S2409W
Network: Realtek RTL8111/8168/8411 + Intel 3165
OS: Ubuntu 19.10
Kernel: 5.9.0-050900rc7daily20201004-generic (x86_64) 20201003
Desktop: GNOME Shell 3.34.1
Display Server: X Server 1.20.5
OpenGL: 3.3 Mesa 19.2.8 (LLVM 9.0.0)
Compiler: GCC 9.2.1 20191008
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled); CPU Microcode: 0x800820b
Java Details: OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-2ubuntu219.10)
Python Details: Python 2.7.17 + Python 3.7.5
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling; srbds: Not affected; tsx_async_abort: Not affected

[Result Overview chart: runs 1 through 4 normalized per test, spanning roughly 100% to 112%. The tests with the widest spread are the Renaissance suite (Scala Dotty, Apache Spark ALS, Apache Spark Bayes, Apache Spark PageRank, Savina Reactors.IO, Random Forest, ALS Movie Lens, Finagle HTTP Requests, In-Memory Database Shootout, Akka Unbalanced Cobwebbed Tree, Genetic Algorithm Using Jenetics + Futures) and Quantum ESPRESSO AUSURF112.]
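The overview chart expresses each run as a percentage of the slowest run per test, and the Phoronix Test Suite summarizes such normalized results with a geometric mean. A minimal sketch of that summary statistic, using illustrative ratios rather than values from this result file:

```python
import math
import statistics

# Hypothetical per-test speedup ratios of one run versus a baseline run
# (1.00 = same speed). These values are illustrative only.
ratios = [1.00, 1.06, 1.12]

# Geometric mean: the multiplicative average used to summarize normalized
# benchmark results, since it is insensitive to which test dominates.
geo = math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# The same figure via the standard-library helper (Python 3.8+).
assert math.isclose(geo, statistics.geometric_mean(ratios))

print(round(geo, 3))  # → 1.059
```

The arithmetic mean of the same ratios would be 1.06; the geometric mean is slightly lower, which is the usual behavior when spreads are uneven.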

[Condensed results table omitted: the per-test values for runs 1 through 4 are charted individually in the sections that follow, covering dav1d, Google SynthMark, KeyDB, Mobile Neural Network, Natron, NCNN, ONNX Runtime, OpenVKL, Quantum ESPRESSO, Renaissance, Tachyon, Timed GCC Compilation, Timed Linux Kernel Compilation, TNN, and YafaRay.]

dav1d

dav1d 0.9.1, Video Input: Chimera 1080p (FPS, More Is Better):
Run 1: 374.41 (MIN: 283.67 / MAX: 588.26)
Run 2: 373.99, SE +/- 0.60, N = 3 (trial min/avg/max: 372.8 / 373.99 / 374.7; MIN: 279.97 / MAX: 590.77)
Run 4: 372.16 (MIN: 283.06 / MAX: 561.9)
1. (CC) gcc options: -pthread -lm
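The "SE +/- x, N = y" figures throughout these results are standard errors of the mean across the y recorded trials. A sketch of that computation for the run 2 value above, where the middle trial (374.47) is inferred from the reported average and is an assumption, not a value stated in the result file:

```python
import math

# Three trials for dav1d Chimera 1080p, run 2. Min and max are as
# reported; the middle value is back-solved from the reported average.
trials = [372.8, 374.47, 374.7]

n = len(trials)
mean = sum(trials) / n  # reported Avg: 373.99

# Sample standard deviation (Bessel-corrected), then standard error.
variance = sum((t - mean) ** 2 for t in trials) / (n - 1)
se = math.sqrt(variance) / math.sqrt(n)

print(round(mean, 2))  # → 373.99
print(round(se, 2))    # → 0.6, matching the reported "SE +/- 0.60"
```

That the inferred trial reproduces both the reported average and the reported SE is a consistency check, not a guarantee of the actual recorded value.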

dav1d 0.9.1, Video Input: Summer Nature 4K (FPS, More Is Better):
Run 1: 116.70 (MIN: 103.05 / MAX: 123.08)
Run 2: 116.56, SE +/- 0.08, N = 3 (trial min/avg/max: 116.44 / 116.56 / 116.71; MIN: 102.19 / MAX: 122.9)
Run 4: 115.87 (MIN: 101.07 / MAX: 121.87)
1. (CC) gcc options: -pthread -lm

dav1d 0.9.1, Video Input: Summer Nature 1080p (FPS, More Is Better):
Run 1: 339.46 (MIN: 273.75 / MAX: 364.24)
Run 2: 338.14, SE +/- 0.83, N = 3 (trial min/avg/max: 337.14 / 338.14 / 339.79; MIN: 281.14 / MAX: 365.6)
Run 4: 337.46 (MIN: 267.01 / MAX: 363.69)
1. (CC) gcc options: -pthread -lm

dav1d 0.9.1, Video Input: Chimera 1080p 10-bit (FPS, More Is Better):
Run 1: 255.30 (MIN: 187.72 / MAX: 440.18)
Run 2: 253.68, SE +/- 0.21, N = 3 (trial min/avg/max: 253.25 / 253.68 / 253.93; MIN: 187.17 / MAX: 442.3)
Run 4: 251.08 (MIN: 186.85 / MAX: 416.3)
1. (CC) gcc options: -pthread -lm

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109, Test: VoiceMark_100 (Voices, More Is Better):
Run 1: 585.34
Run 2: 587.88, SE +/- 0.91, N = 3 (trial min/avg/max: 586.74 / 587.88 / 589.67)
Run 4: 582.90
1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

KeyDB

This is a benchmark of KeyDB, a multi-threaded fork of the Redis server, conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.2.0 (Ops/sec, More Is Better):
Run 1: 444494.04
Run 2: 444193.61, SE +/- 1668.14, N = 3 (trial min/avg/max: 441354.84 / 444193.61 / 447130.94)
Run 4: 449476.09
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2, Model: mobilenetV3 (ms, Fewer Is Better):
Run 1: 3.489 (MIN: 3.45 / MAX: 3.83)
Run 2: 3.630, SE +/- 0.053, N = 4 (trial min/avg/max: 3.49 / 3.63 / 3.73; MIN: 3.47 / MAX: 6.22)
Run 4: 3.803 (MIN: 3.72 / MAX: 4.94)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2, Model: squeezenetv1.1 (ms, Fewer Is Better):
Run 1: 7.000 (MIN: 6.95 / MAX: 8.07)
Run 2: 7.295, SE +/- 0.094, N = 4 (trial min/avg/max: 7.1 / 7.29 / 7.48; MIN: 7.01 / MAX: 67.92)
Run 4: 7.364 (MIN: 7.29 / MAX: 7.67)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2, Model: resnet-v2-50 (ms, Fewer Is Better):
Run 1: 46.02 (MIN: 45.61 / MAX: 70.22)
Run 2: 46.48, SE +/- 0.22, N = 4 (trial min/avg/max: 46.03 / 46.48 / 47.07; MIN: 45.62 / MAX: 118.04)
Run 4: 46.51 (MIN: 46.18 / MAX: 58.6)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2, Model: SqueezeNetV1.0 (ms, Fewer Is Better):
Run 1: 11.00 (MIN: 10.94 / MAX: 11.9)
Run 2: 11.34, SE +/- 0.11, N = 4 (trial min/avg/max: 11.16 / 11.34 / 11.6; MIN: 11.05 / MAX: 25.3)
Run 4: 11.49 (MIN: 11.38 / MAX: 14.47)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2, Model: MobileNetV2_224 (ms, Fewer Is Better):
Run 1: 5.749 (MIN: 5.72 / MAX: 6.81)
Run 2: 5.924, SE +/- 0.040, N = 4 (trial min/avg/max: 5.85 / 5.92 / 6.01; MIN: 5.8 / MAX: 9.65)
Run 4: 6.055 (MIN: 6.01 / MAX: 7.07)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2, Model: mobilenet-v1-1.0 (ms, Fewer Is Better):
Run 1: 5.048 (MIN: 4.91 / MAX: 54.4)
Run 2: 5.050, SE +/- 0.018, N = 4 (trial min/avg/max: 5.02 / 5.05 / 5.1; MIN: 4.95 / MAX: 27.22)
Run 4: 5.136 (MIN: 5.08 / MAX: 19.52)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.2, Model: inception-v3 (ms, Fewer Is Better):
Run 1: 64.90 (MIN: 64.59 / MAX: 87.6)
Run 2: 66.11, SE +/- 0.89, N = 4 (trial min/avg/max: 64.76 / 66.11 / 68.64; MIN: 64.47 / MAX: 179.62)
Run 4: 66.03 (MIN: 65.71 / MAX: 134.73)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4, Input: Spaceship (FPS, More Is Better):
Run 1: 2.1
Run 2: 2.1, SE +/- 0.03, N = 3 (trial min/avg/max: 2.1 / 2.13 / 2.2)
Run 4: 2.1

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: CPU - Model: mobilenet (ms, Fewer Is Better):
Run 1: 28.01 (MIN: 26.35 / MAX: 106.56)
Run 2: 27.53, SE +/- 0.15, N = 3 (trial min/avg/max: 27.27 / 27.53 / 27.79; MIN: 25.73 / MAX: 29.94)
Run 4: 27.68 (MIN: 26.13 / MAX: 75.15)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better):
Run 1: 8.92 (MIN: 8.02 / MAX: 9.24)
Run 2: 8.80, SE +/- 0.07, N = 3 (trial min/avg/max: 8.66 / 8.8 / 8.88; MIN: 8.06 / MAX: 65.16)
Run 4: 9.42 (MIN: 8.86 / MAX: 72.29)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better):
Run 1: 7.57 (MIN: 7.47 / MAX: 8.69)
Run 2: 7.57, SE +/- 0.13, N = 3 (trial min/avg/max: 7.31 / 7.57 / 7.75; MIN: 7.16 / MAX: 62.55)
Run 4: 7.63 (MIN: 7.54 / MAX: 9.23)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better):
Run 1: 6.81 (MIN: 6.77 / MAX: 6.93)
Run 2: 6.77, SE +/- 0.07, N = 3 (trial min/avg/max: 6.69 / 6.77 / 6.9; MIN: 6.65 / MAX: 7.82)
Run 4: 6.77 (MIN: 6.71 / MAX: 6.97)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU - Model: mnasnet (ms, Fewer Is Better):
Run 1: 8.01 (MIN: 7.8 / MAX: 24.25)
Run 2: 7.73, SE +/- 0.12, N = 3 (trial min/avg/max: 7.48 / 7.73 / 7.86; MIN: 7.33 / MAX: 83.07)
Run 4: 7.58 (MIN: 7.46 / MAX: 9.68)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better):
Run 1: 11.76 (MIN: 11.68 / MAX: 12.81)
Run 2: 11.38, SE +/- 0.14, N = 3 (trial min/avg/max: 11.17 / 11.38 / 11.66; MIN: 10.89 / MAX: 61.71)
Run 4: 11.43 (MIN: 11.37 / MAX: 11.88)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU - Model: blazeface (ms, Fewer Is Better):
Run 1: 3.31 (MIN: 2.63 / MAX: 133.12)
Run 2: 2.65, SE +/- 0.01, N = 3 (trial min/avg/max: 2.64 / 2.65 / 2.66; MIN: 2.59 / MAX: 7.17)
Run 4: 3.28 (MIN: 2.74 / MAX: 117.12)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU - Model: googlenet (ms, Fewer Is Better):
Run 1: 23.23 (MIN: 22.12 / MAX: 89.59)
Run 2: 21.52, SE +/- 0.29, N = 3 (trial min/avg/max: 21.22 / 21.52 / 22.1; MIN: 20.42 / MAX: 74.82)
Run 4: 22.88 (MIN: 22.23 / MAX: 25.12)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU - Model: vgg16 (ms, Fewer Is Better):
Run 1: 73.29 (MIN: 72.42 / MAX: 76.11)
Run 2: 74.49, SE +/- 0.28, N = 3 (trial min/avg/max: 74.12 / 74.49 / 75.03; MIN: 72.08 / MAX: 125.89)
Run 4: 73.03 (MIN: 72.42 / MAX: 78.52)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU - Model: resnet18 (ms, Fewer Is Better):
Run 1: 19.55 (MIN: 18.23 / MAX: 86.21)
Run 2: 18.95, SE +/- 0.56, N = 3 (trial min/avg/max: 18.36 / 18.95 / 20.07; MIN: 18.26 / MAX: 37.54)
Run 4: 20.14 (MIN: 18.41 / MAX: 40.62)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU - Model: alexnet (ms, Fewer Is Better):
Run 1: 14.53 (MIN: 14.41 / MAX: 16.68)
Run 2: 14.56, SE +/- 0.01, N = 3 (trial min/avg/max: 14.54 / 14.56 / 14.58; MIN: 14.42 / MAX: 25.41)
Run 4: 14.55 (MIN: 14.48 / MAX: 15.15)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU - Model: resnet50 (ms, Fewer Is Better):
Run 1: 38.28 (MIN: 38.03 / MAX: 50.12)
Run 2: 39.56, SE +/- 0.50, N = 3 (trial min/avg/max: 38.57 / 39.56 / 40.07; MIN: 37.96 / MAX: 74.13)
Run 4: 38.57 (MIN: 38.16 / MAX: 41.89)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better):
Run 1: 41.36 (MIN: 37.18 / MAX: 105.21)
Run 2: 41.26, SE +/- 0.24, N = 3 (trial min/avg/max: 40.86 / 41.26 / 41.7; MIN: 35.15 / MAX: 50.66)
Run 4: 40.58 (MIN: 36.73 / MAX: 46.52)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better):
Run 1: 31.14 (MIN: 29.83 / MAX: 38.85)
Run 2: 31.12, SE +/- 0.07, N = 3 (trial min/avg/max: 31.01 / 31.12 / 31.26; MIN: 29.61 / MAX: 90.13)
Run 4: 31.17 (MIN: 29.79 / MAX: 47.18)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

NCNN 20210720, Target: CPU - Model: regnety_400m (ms, Fewer Is Better):
Run 1: 17.70 (MIN: 17.04 / MAX: 82.03)
Run 2: 17.39, SE +/- 0.05, N = 3 (trial min/avg/max: 17.33 / 17.39 / 17.49; MIN: 16.94 / MAX: 111.71)
Run 4: 17.57 (MIN: 17.28 / MAX: 72.79)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.8.2, Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute, More Is Better):
Run 1: 205
Run 2: 205, SE +/- 0.17, N = 3 (trial min/avg/max: 204.5 / 204.67 / 205)
Run 4: 205
1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

ONNX Runtime 1.8.2, Model: bertsquad-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better):
Run 1: 310
Run 2: 310, SE +/- 0.58, N = 3 (trial min/avg/max: 308.5 / 309.5 / 310.5)
Run 4: 311
1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

ONNX Runtime 1.8.2, Model: fcn-resnet101-11 - Device: OpenMP CPU (Inferences Per Minute, More Is Better):
Run 1: 39
Run 2: 39, SE +/- 0.00, N = 3 (trial min/avg/max: 38.5 / 38.5 / 38.5)
Run 4: 39
1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

ONNX Runtime 1.8.2, Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better):
Run 1: 13615
Run 2: 13784, SE +/- 48.97, N = 3 (trial min/avg/max: 13689 / 13783.5 / 13853)
Run 4: 13755
1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

ONNX Runtime 1.8.2, Model: super-resolution-10 - Device: OpenMP CPU (Inferences Per Minute, More Is Better):
Run 1: 2314
Run 2: 2347, SE +/- 3.98, N = 3 (trial min/avg/max: 2339.5 / 2347.33 / 2352.5)
Run 4: 2324
1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.0, Benchmark: vklBenchmark ISPC (Items / Sec, More Is Better):
Run 1: 33 (MIN: 3 / MAX: 449)
Run 2: 33 (MIN: 3 / MAX: 450)
Run 4: 33 (MIN: 3 / MAX: 454)

OpenVKL 1.0, Benchmark: vklBenchmark Scalar (Items / Sec, More Is Better):
Run 1: 18 (MIN: 1 / MAX: 435)
Run 2: 18 (MIN: 1 / MAX: 435)
Run 4: 18 (MIN: 1 / MAX: 437)

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 6.8, Input: AUSURF112 (Seconds, Fewer Is Better):
Run 1: 1358.68
Run 2: 1357.09, SE +/- 5.66, N = 3 (trial min/avg/max: 1347.14 / 1357.09 / 1366.75)
Run 3: 1361.04, SE +/- 5.94, N = 3 (trial min/avg/max: 1350.77 / 1361.04 / 1371.36)
Run 4: 1355.46
1. (F9X) gfortran options: -ldevXlib -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Renaissance

Renaissance is a suite of benchmarks designed to test the JVM, ranging from Apache Spark to a Twitter-like service to Scala and other workloads. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.12, Test: Scala Dotty (ms, Fewer Is Better):
Run 1: 1134.7 (MIN: 851.79 / MAX: 2126.23)
Run 2: 1103.9, SE +/- 15.97, N = 15 (trial min/avg/max: 1022.54 / 1103.94 / 1172.92; MIN: 820.44 / MAX: 2541.02)
Run 3: 1024.5, SE +/- 8.47, N = 3 (trial min/avg/max: 1015.75 / 1024.54 / 1041.48; MIN: 826.33 / MAX: 2287.3)
Run 4: 1042.2 (MIN: 842.98 / MAX: 2237.87)

Renaissance 0.12, Test: Random Forest (ms, Fewer Is Better):
Run 1: 973.3 (MIN: 889.42 / MAX: 1158.35)
Run 2: 958.1, SE +/- 7.20, N = 3 (trial min/avg/max: 944.97 / 958.12 / 969.79; MIN: 879.86 / MAX: 1189.49)
Run 3: 957.0, SE +/- 3.48, N = 3 (trial min/avg/max: 952.41 / 957 / 963.81; MIN: 857.47 / MAX: 1207.37)
Run 4: 974.2 (MIN: 887.09 / MAX: 1190.87)

Renaissance 0.12, Test: ALS Movie Lens (ms, Fewer Is Better):
Run 1: 10271.2 (MAX: 11292.71)
Run 2: 10283.2, SE +/- 9.95, N = 3 (trial min/avg/max: 10268.99 / 10283.16 / 10302.35; MIN: 10268.99 / MAX: 11345.94)
Run 3: 10301.3, SE +/- 41.79, N = 3 (trial min/avg/max: 10218.5 / 10301.29 / 10352.62; MIN: 10218.5 / MAX: 11569.96)
Run 4: 10395.1 (MAX: 11540.63)

Renaissance 0.12, Test: Apache Spark ALS (ms, Fewer Is Better):
Run 1: 2795.6 (MIN: 2681.41 / MAX: 2996.2)
Run 2: 2771.3, SE +/- 16.68, N = 3 (trial min/avg/max: 2739.65 / 2771.3 / 2796.26; MIN: 2560.86 / MAX: 3077.84)
Run 3: 2789.3, SE +/- 33.66, N = 3 (trial min/avg/max: 2736.53 / 2789.28 / 2851.88; MIN: 2579.81 / MAX: 3193.59)
Run 4: 2816.9 (MIN: 2617.33 / MAX: 3046.89)

Renaissance 0.12, Test: Apache Spark Bayes (ms, Fewer Is Better):
Run 1: 2108.1 (MIN: 1569.46)
Run 2: 2143.8, SE +/- 16.65, N = 3 (trial min/avg/max: 2120.14 / 2143.83 / 2175.94; MIN: 1562.4 / MAX: 3586.17)
Run 3: 2223.5, SE +/- 17.69, N = 3 (trial min/avg/max: 2196.99 / 2223.48 / 2257.04; MIN: 1610.95 / MAX: 2982.19)
Run 4: 2226.2 (MIN: 1538.66 / MAX: 10685.31)

Renaissance 0.12, Test: Savina Reactors.IO (ms, Fewer Is Better):
Run 1: 9913.6 (MIN: 9913.57 / MAX: 15583.01)
Run 2: 9872.7, SE +/- 146.25, N = 12 (trial min/avg/max: 9023.31 / 9872.67 / 10814.17; MIN: 9023.31 / MAX: 15624.79)
Run 3: 9864.5, SE +/- 123.24, N = 5 (trial min/avg/max: 9577.17 / 9864.46 / 10260.88; MIN: 9577.17 / MAX: 14858.63)
Run 4: 10263.1 (MAX: 15575.43)

Renaissance 0.12, Test: Apache Spark PageRank (ms, Fewer Is Better):
Run 1: 4902.1 (MIN: 4490.93 / MAX: 5076.83)
Run 2: 4711.1, SE +/- 56.31, N = 12 (trial min/avg/max: 4366.24 / 4711.12 / 4956.55; MIN: 3934.16 / MAX: 5235.62)
Run 3: 4738.2, SE +/- 79.83, N = 12 (trial min/avg/max: 4262.62 / 4738.16 / 5179.89; MIN: 3868.09 / MAX: 5249.23)
Run 4: 4985.4 (MIN: 4509.87 / MAX: 5089.15)

Renaissance 0.12, Test: Finagle HTTP Requests (ms, Fewer Is Better):
Run 1: 3082.7 (MIN: 2849.13 / MAX: 3094.87)
Run 2: 3115.7, SE +/- 13.89, N = 3 (trial min/avg/max: 3087.93 / 3115.68 / 3130.78; MIN: 2849.84 / MAX: 3181.27)
Run 3: 3115.3, SE +/- 7.79, N = 3 (trial min/avg/max: 3101.08 / 3115.34 / 3127.92; MIN: 2842.24 / MAX: 3228.02)
Run 4: 3102.9 (MIN: 2873.39 / MAX: 3119.64)

Renaissance 0.12, Test: In-Memory Database Shootout (ms, Fewer Is Better):
Run 1: 4651.7 (MIN: 4050.47 / MAX: 5449.92)
Run 2: 5170.4, SE +/- 54.68, N = 15 (trial min/avg/max: 4712.14 / 5170.4 / 5460.16; MIN: 3875.97 / MAX: 7892.95)
Run 3: 5064.0, SE +/- 79.58, N = 15 (trial min/avg/max: 4489.68 / 5063.99 / 5616.6; MIN: 3897.41 / MAX: 9465.37)
Run 4: 4630.2 (MIN: 4002.29 / MAX: 5453.22)

Renaissance 0.12, Test: Akka Unbalanced Cobwebbed Tree (ms, Fewer Is Better):
Run 1: 14253.1 (MIN: 11211.62 / MAX: 14253.14)
Run 2: 14241.7, SE +/- 28.57, N = 3 (trial min/avg/max: 14195.48 / 14241.65 / 14293.9; MIN: 11182.38 / MAX: 14293.9)
Run 3: 14386.6, SE +/- 182.97, N = 5 (trial min/avg/max: 14115.25 / 14386.6 / 15106.91; MIN: 11130.48 / MAX: 15106.91)
Run 4: 14527.7 (MIN: 11587.58)

Renaissance 0.12, Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better):
Run 1: 2961.2 (MIN: 2918.73 / MAX: 3008.48)
Run 2: 2922.8, SE +/- 15.18, N = 3 (trial min/avg/max: 2892.51 / 2922.84 / 2939.05; MIN: 2839.55 / MAX: 2977.66)
Run 3: 3007.9, SE +/- 39.12, N = 3 (trial min/avg/max: 2936.8 / 3007.94 / 3071.73; MIN: 2883.99 / MAX: 3102.87)
Run 4: 3045.0 (MIN: 3011.23 / MAX: 3079.39)

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6 - Total Time (Seconds, Fewer Is Better)
  1: 122.31
  2: 122.65  SE +/- 0.10, N = 3  (Min: 122.49 / Max: 122.85)
  4: 122.54
(CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread
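Tachyon's runtime is dominated by ray-object intersection tests. A self-contained sketch of the ray-sphere intersection at the core of any such ray tracer (an illustration of the technique, not Tachyon's actual code):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the ray, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# Ray from the origin down +z toward a unit sphere centered at (0, 0, 5):
t = ray_sphere_hit((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0)
```

A renderer performs millions of such tests per frame, which is why the workload scales well across the 16 threads of this CPU.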

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC) compiler. Learn more via the OpenBenchmarking.org test page.

Timed GCC Compilation 11.2.0 - Time To Compile (Seconds, Fewer Is Better)
  1: 1611.27
  2: 1668.16  SE +/- 1.31, N = 3  (Min: 1665.54 / Max: 1669.59)
  4: 1656.89

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.14 - Time To Compile (Seconds, Fewer Is Better)
  1: 128.75
  2: 116.80  SE +/- 1.11, N = 9  (Min: 114.74 / Max: 125.49)
  4: 134.25
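The compile results above are simple wall-clock measurements of a build command. A hedged sketch of that timing approach; a trivial no-op command stands in for the real `make` invocation so the snippet runs anywhere:

```python
import subprocess
import sys
import time

def time_command(cmd):
    """Run a command to completion and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# For the real benchmark this would be something like a
# "make defconfig" followed by "make -j16" inside a kernel tree;
# here a no-op Python invocation stands in so the sketch is runnable.
elapsed = time_command([sys.executable, "-c", "pass"])
```

Repeating the measurement and reporting the mean with its SE, as the tables here do, smooths out run-to-run noise from caches and background activity.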

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, Fewer Is Better)
  1: 3657.06  (MIN: 3556.01 / MAX: 3721.82)
  2: 3651.62  SE +/- 1.74, N = 3  (MIN: 3534.09 / MAX: 3733.22; per-run Min: 3648.63 / Max: 3654.66)
  4: 3655.99  (MIN: 3540.19 / MAX: 4008.21)
(CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
  1: 326.19  (MIN: 305.03 / MAX: 347.92)
  2: 324.56  SE +/- 0.81, N = 3  (MIN: 301.98 / MAX: 368.9; per-run Min: 323.4 / Max: 326.12)
  4: 324.39  (MIN: 310.19 / MAX: 340)
(CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
  1: 74.02  (MIN: 72.81 / MAX: 76.67)
  2: 73.62  SE +/- 0.16, N = 3  (MIN: 72.83 / MAX: 74.91; per-run Min: 73.35 / Max: 73.9)
  4: 73.30  (MIN: 72.84 / MAX: 74.36)
(CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
  1: 269.12  (MIN: 268.69 / MAX: 269.54)
  2: 269.46  SE +/- 0.13, N = 3  (MIN: 268.7 / MAX: 273.37; per-run Min: 269.29 / Max: 269.72)
  4: 270.88  (MIN: 270.32 / MAX: 274.21)
(CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
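When condensing the four TNN model results into a single figure, a geometric mean (as offered by the result viewer's "Show Overall Geometric Mean" option) is the usual choice, since it weighs relative differences equally across models of very different absolute latency. A minimal sketch with illustrative numbers:

```python
import math

def geometric_mean(values):
    """Geometric mean via a sum of logs, which avoids overflow on long lists."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical ms-per-inference figures for one configuration across the
# four models (illustrative values, not an official overall score).
latencies = [3651.62, 324.56, 73.62, 269.46]
gmean = geometric_mean(latencies)
```

An arithmetic mean of the same numbers would be dominated by the slow DenseNet result; the geometric mean keeps each model's relative change equally influential.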

YafaRay

YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.5.1 - Total Time For Sample Scene (Seconds, Fewer Is Better)
  1: 193.77
  2: 194.63  SE +/- 0.44, N = 3  (Min: 194.12 / Max: 195.49)
  4: 192.90
(CXX) g++ options: -std=c++11 -pthread -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype
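Monte Carlo ray tracing converges by averaging many random samples per pixel. The same principle in its simplest runnable form is estimating pi by uniform sampling (illustrative only; YafaRay's estimators are far more involved):

```python
import random

def estimate_pi(n, seed=42):
    """Estimate pi by sampling points in the unit square and counting
    the fraction that land inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n

pi_est = estimate_pi(100_000)
```

The estimator's error shrinks as 1/sqrt(N), which is why render times for Monte Carlo engines grow steeply as noise targets tighten.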