OpenBenchmarking.org
oneDNN 3.3.0
pts/onednn-3.3.0
- 12 October 2023 -
Update against oneDNN 3.3 upstream.
downloads.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.4-->
<PhoronixTestSuite>
  <Downloads>
    <Package>
      <URL>https://github.com/oneapi-src/oneDNN/archive/refs/tags/v3.3.tar.gz</URL>
      <MD5>5acbe7eb86000fa1148735d830ba8409</MD5>
      <SHA256>8d150a77025f38bff182aaef4dd643625563b2f311c635f86cf4b769b04d7b48</SHA256>
      <FileName>oneDNN-3.3.tar.gz</FileName>
      <FileSize>11573780</FileSize>
    </Package>
  </Downloads>
</PhoronixTestSuite>
install.sh
#!/bin/sh
tar -xf oneDNN-3.3.tar.gz
cd oneDNN-3.3
mkdir build
cd build
CFLAGS="-O3 -march=native $CFLAGS" CXXFLAGS="-O3 -march=native $CXXFLAGS" cmake -DCMAKE_BUILD_TYPE=Release MKLDNN_ARCH_OPT_FLAGS="-O3 -march=native $CFLAGS" $CMAKE_OPTIONS ..
make -j $NUM_CPU_CORES
echo $? > ~/install-exit-status
cd ~
echo "#!/bin/bash
export DNNL_CPU_RUNTIME=OMP
export OMP_PLACES=cores
export OMP_PROC_BIND=close
cd oneDNN-3.3/build/tests/benchdnn
./benchdnn \$4 --mode=p \$1 \$3 \$2 > \$LOG_FILE 2>&1
echo \$? > ~/test-exit-status" > onednn
chmod +x onednn
results-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.4-->
<PhoronixTestSuite>
  <ResultsParser>
    <OutputTemplate>total perf min(ms) #_MIN_RESULT_# avg(ms) #_RESULT_#</OutputTemplate>
    <LineHint>total perf</LineHint>
    <TurnCharsToSpace>:</TurnCharsToSpace>
  </ResultsParser>
</PhoronixTestSuite>
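The parser above locates the line containing the `LineHint` text ("total perf"), converts the `TurnCharsToSpace` character (`:`) to spaces, and then reads the tokens following `min(ms)` and `avg(ms)` as the minimum and average result values. A minimal Python sketch of that logic, using a hypothetical sample line in the shape benchdnn's `--mode=p` summary takes (the numbers are made up):

```python
def parse_benchdnn_total_perf(log_text):
    """Mimic results-definition.xml: find the LineHint line, apply
    TurnCharsToSpace (':' -> ' '), then pull the tokens that follow
    'min(ms)' and 'avg(ms)' per the OutputTemplate."""
    for line in log_text.splitlines():
        if "total perf" not in line:  # LineHint filter
            continue
        tokens = line.replace(":", " ").split()
        min_ms = float(tokens[tokens.index("min(ms)") + 1])  # #_MIN_RESULT_#
        avg_ms = float(tokens[tokens.index("avg(ms)") + 1])  # #_RESULT_#
        return min_ms, avg_ms
    return None

# Hypothetical summary line; real runs report their own timings.
sample = "total perf: min(ms):110.25 avg(ms):113.70"
print(parse_benchdnn_total_perf(sample))  # (110.25, 113.7)
```

The average (`#_RESULT_#`) is what the profile reports as its headline value, with `ms` as the `ResultScale` and lower being better (`Proportion` LIB).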
test-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.4-->
<PhoronixTestSuite>
  <TestInformation>
    <Title>oneDNN</Title>
    <AppVersion>3.3</AppVersion>
    <Description>This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit.</Description>
    <ResultScale>ms</ResultScale>
    <Proportion>LIB</Proportion>
    <TimesToRun>3</TimesToRun>
  </TestInformation>
  <TestProfile>
    <Version>3.3.0</Version>
    <SupportedPlatforms>Linux, MacOSX</SupportedPlatforms>
    <SoftwareType>Utility</SoftwareType>
    <TestType>Processor</TestType>
    <License>Free</License>
    <Status>Verified</Status>
    <ExternalDependencies>build-utilities, cmake</ExternalDependencies>
    <EnvironmentSize>287</EnvironmentSize>
    <ProjectURL>https://www.intel.com/content/www/us/en/developer/tools/oneapi/onednn.html</ProjectURL>
    <RepositoryURL>https://github.com/oneapi-src/oneDNN</RepositoryURL>
    <InternalTags>SMP</InternalTags>
    <Maintainer>Michael Larabel</Maintainer>
  </TestProfile>
  <TestSettings>
    <Option>
      <DisplayName>Harness</DisplayName>
      <Identifier>harness</Identifier>
      <Menu>
        <Entry>
          <Name>Convolution Batch Shapes Auto</Name>
          <Value>--conv --batch=inputs/conv/shapes_auto</Value>
        </Entry>
        <Entry>
          <Name>Deconvolution Batch shapes_1d</Name>
          <Value>--deconv --batch=inputs/deconv/shapes_1d</Value>
        </Entry>
        <Entry>
          <Name>Deconvolution Batch shapes_3d</Name>
          <Value>--deconv --batch=inputs/deconv/shapes_3d</Value>
        </Entry>
        <Entry>
          <Name>IP Shapes 1D</Name>
          <Value>--ip --batch=inputs/ip/shapes_1d</Value>
        </Entry>
        <Entry>
          <Name>IP Shapes 3D</Name>
          <Value>--ip --batch=inputs/ip/shapes_3d</Value>
        </Entry>
        <Entry>
          <Name>Recurrent Neural Network Training</Name>
          <Value>--rnn --batch=inputs/rnn/perf_rnn_training</Value>
        </Entry>
        <Entry>
          <Name>Recurrent Neural Network Inference</Name>
          <Value>--rnn --batch=inputs/rnn/perf_rnn_inference_lb</Value>
        </Entry>
      </Menu>
    </Option>
    <Option>
      <DisplayName>Data Type</DisplayName>
      <Identifier>data-type</Identifier>
      <ArgumentPrefix>--cfg=</ArgumentPrefix>
      <Menu>
        <Entry>
          <Name>f32</Name>
          <Value>f32</Value>
        </Entry>
        <Entry>
          <Name>u8s8f32</Name>
          <Value>u8s8f32</Value>
          <Message>Optimized For AVX-512</Message>
        </Entry>
        <Entry>
          <Name>bf16bf16bf16</Name>
          <Value>bf16bf16bf16</Value>
          <Message>Optimized For AVX-512 + VNNI</Message>
        </Entry>
      </Menu>
    </Option>
    <Option>
      <DisplayName>Engine</DisplayName>
      <Identifier>engine</Identifier>
      <Menu>
        <Entry>
          <Name>CPU</Name>
          <Value>--engine=cpu</Value>
        </Entry>
      </Menu>
    </Option>
  </TestSettings>
</PhoronixTestSuite>
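Each `Option` above contributes the `Value` of the selected `Entry` to the benchdnn command line, with the data-type value prefixed by its `ArgumentPrefix` (`--cfg=`); the actual argument ordering is handled by the Phoronix Test Suite and the `onednn` wrapper emitted by install.sh. A minimal sketch of that composition (the function name is hypothetical, not part of the profile):

```python
def assemble_benchdnn_args(harness, data_type, engine):
    """Combine one selection per TestSettings option into benchdnn flags.
    Only data-type carries an ArgumentPrefix (--cfg=) in this profile."""
    return f"{harness} --cfg={data_type} {engine}"

# Selecting the first entry of each option menu:
args = assemble_benchdnn_args(
    "--conv --batch=inputs/conv/shapes_auto",  # Harness
    "f32",                                     # Data Type
    "--engine=cpu",                            # Engine
)
print(args)  # --conv --batch=inputs/conv/shapes_auto --cfg=f32 --engine=cpu
```

With the option menus above, this yields 7 harnesses x 3 data types x 1 engine = 21 possible test combinations, each run `TimesToRun` (3) times.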