Commit

Add dockerfile and usage to README
tahsinkose committed Jun 23, 2019
1 parent ad709c6 commit cf38e2a
Showing 2 changed files with 43 additions and 0 deletions.
32 changes: 32 additions & 0 deletions Dockerfile
@@ -0,0 +1,32 @@
FROM nvidia/cuda:8.0-cudnn7-devel-ubuntu16.04
LABEL author=github/tahsinkose

ADD . $HOME/graphnn
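# Install Intel MKL 2019.3 silently: accept the EULA and select the component set in silent.cfg,
# register the MKL libraries with the dynamic linker, and source mklvars.sh for interactive shells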
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install gcc g++ gfortran wget cpio && \
cd /tmp && \
wget http://registrationcenter-download.intel.com/akdlm/irc_nas/tec/15275/l_mkl_2019.3.199.tgz && \
tar -xzf l_mkl_2019.3.199.tgz && \
cd l_mkl_2019.3.199 && \
sed -i 's/ACCEPT_EULA=decline/ACCEPT_EULA=accept/g' silent.cfg && \
sed -i 's/ARCH_SELECTED=ALL/ARCH_SELECTED=INTEL64/g' silent.cfg && \
sed -i 's/COMPONENTS=DEFAULTS/COMPONENTS=;intel-comp-l-all-vars__noarch;intel-comp-nomcu-vars__noarch;intel-openmp__x86_64;intel-tbb-libs__x86_64;intel-mkl-common__noarch;intel-mkl-installer-license__noarch;intel-mkl-core__x86_64;intel-mkl-core-rt__x86_64;intel-mkl-doc__noarch;intel-mkl-doc-ps__noarch;intel-mkl-gnu__x86_64;intel-mkl-gnu-rt__x86_64;intel-mkl-common-ps__noarch;intel-mkl-core-ps__x86_64;intel-mkl-common-c__noarch;intel-mkl-core-c__x86_64;intel-mkl-common-c-ps__noarch;intel-mkl-tbb__x86_64;intel-mkl-tbb-rt__x86_64;intel-mkl-gnu-c__x86_64;intel-mkl-common-f__noarch;intel-mkl-core-f__x86_64;intel-mkl-gnu-f-rt__x86_64;intel-mkl-gnu-f__x86_64;intel-mkl-f95-common__noarch;intel-mkl-f__x86_64;intel-mkl-psxe__noarch;intel-psxe-common__noarch;intel-psxe-common-doc__noarch;intel-compxe-pset/g' silent.cfg && \
./install.sh -s silent.cfg && \
cd .. && rm -rf * && \
rm -rf /opt/intel/.*.log /opt/intel/compilers_and_libraries_2019.3.199/licensing && \
echo "/opt/intel/mkl/lib/intel64" >> /etc/ld.so.conf.d/intel.conf && \
ldconfig && \
echo "source /opt/intel/mkl/bin/mklvars.sh intel64" >> /etc/bash.bashrc

RUN apt-get -y install libtbb-dev

RUN apt-get install -y mesa-utils module-init-tools
# install nvidia driver
RUN apt-get install -y binutils
ADD NVIDIA*.run /tmp/NVIDIA-DRIVER.run
RUN sh /tmp/NVIDIA-DRIVER.run -a -N --ui=none --no-kernel-module
RUN rm /tmp/NVIDIA-DRIVER.run

RUN apt-get install -y nano
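# Expose the CUDA toolkit location and make its shared libraries (plus /usr/local/lib) resolvable at runtime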
ENV CUDA_HOME=/usr/local/cuda
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
11 changes: 11 additions & 0 deletions README.md
@@ -34,6 +34,17 @@ Tested under Ubuntu 14.04, 16.04 and Mac OSX 10.12.6

source {path_to_your_intel_root/name_of_parallel_tool_box}/bin/psxevars.sh


#### Docker
The Dockerfile performs all of the required installations above (including Intel MKL and TBB). The only additional requirement is to provide an `NVIDIA*.run` installer in the build context, which installs the same NVIDIA driver as the host into the image. Then, to build the image, execute:

    docker build -t "graphnn:test" .

To run it:

    docker run -it --runtime=nvidia graphnn:test bash

If the above command fails for any reason, refer to https://github.com/NVIDIA/nvidia-docker. If it succeeds, you can follow the build instructions below and execute them inside the container.
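
As a quick sanity check (a sketch, assuming the NVIDIA runtime is configured and the `NVIDIA*.run` installer matches the host driver version, which can be queried with `nvidia-smi --query-gpu=driver_version --format=csv,noheader`), the following commands should show the GPU and the MKL libraries inside the container:

    docker run --runtime=nvidia graphnn:test nvidia-smi
    docker run --runtime=nvidia graphnn:test bash -c "ldconfig -p | grep mkl"
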
#### Build static library

cp make_common.example make_common
