How to compile Gromacs
==Gromacs on College Cluster==
I am sorry to inform you (formally) that one of the College Cluster accounts you are using, '''cchan2242''', is currently not suitable for running MPI jobs. I have done some test runs with the accounts '''jialema2''' and '''xinhonliu2''' and they are both fine. I attach a simple PBS script below; please read the comments carefully:

 #PBS -S /bin/bash
 #PBS -N test
 #PBS -l nodes=2:ppn=16
 #PBS -r n
 #PBS -q fast
 #
 
 # Although the Intel libraries are either unused or accessed another way (described below), it is still safer to source them first
 source /home/intel/composerxe/bin/compilervars.sh intel64
 source /home/intel/composer_xe_2013.3.163/mkl/bin/mklvars.sh intel64
 
 # Since it is a shared library, you can save time by pointing to a pre-compiled fftw build instead of compiling a new one
 export LD_LIBRARY_PATH=/nfs/disk3/cchan2242/opt/fftw-3.3.4-shared/lib:$LD_LIBRARY_PATH
 
 # This part comes directly from your old PBS script
 cd $PBS_O_WORKDIR
 cat $PBS_NODEFILE >> /tmp/nodefile.$$
 NP=`cat $PBS_NODEFILE | wc -l`
 NN=`cat $PBS_NODEFILE | sort | uniq | tee /tmp/nodes.$$ | wc -l`
 
 # Define variables so that the commands are called with absolute paths
 # Note that you have to compile a new openmpi for each account
 # For the GMX commands, you can use a binary compiled by another user (just mind the file permissions)
 MPIRUN=/nfs/disk3/xinhonliu2/opt/openmpi-1.8.6/bin/mpirun
 GMXDIR=/nfs/disk3/cchan2242/opt/gromacs-4.6.3-MPI-double/bin
 
 $MPIRUN -hostfile $PBS_NODEFILE $GMXDIR/mdrun_mpi_d -v -deffnm md

As we tested this afternoon, '''some MKL libraries have to be placed in the submission directory'''. They come from /home/intel/composer_xe_2013.3.163/composer_xe_2013.3.163/mkl/lib/intel64:
 libmkl_core.so
 libmkl_gnu_thread.so
 libmkl_intel_lp64.so
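For example, the libraries can be copied into the job's submission directory and the script submitted with qsub. This is only a minimal sketch; the submission directory path and the script filename run_md.pbs are placeholders:

 cd /path/to/submission/dir    # hypothetical submission directory
 cp /home/intel/composer_xe_2013.3.163/composer_xe_2013.3.163/mkl/lib/intel64/libmkl_core.so .
 cp /home/intel/composer_xe_2013.3.163/composer_xe_2013.3.163/mkl/lib/intel64/libmkl_gnu_thread.so .
 cp /home/intel/composer_xe_2013.3.163/composer_xe_2013.3.163/mkl/lib/intel64/libmkl_intel_lp64.so .
 qsub run_md.pbs               # submit the PBS script shown above
 qstat -u $USER                # check the job status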
==Gromacs 5.0.6 MPI, single, GPU==
Download source code to /share/apps/download
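For reference, the tarball can be fetched straight into that directory; a sketch assuming the standard GROMACS download location (verify the URL before use):

 cd /share/apps/download
 wget http://ftp.gromacs.org/pub/gromacs/gromacs-5.0.6.tar.gz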
Read the Installation_Instructions
tar -zxvf ../gromacs-5.0.6.tar.gz
cd gromacs-5.0.6
mkdir build-gmx
cd build-gmx
export MPICCDIR=/share/apps/openmpi/bin/
cmake .. \
-DGMX_DOUBLE=OFF \
-DCMAKE_INSTALL_PREFIX=/share/apps/gmx-single-MPI-CUDA \
-DGMX_GPU=ON \
-DCUDA_TOOLKIT_ROOT_DIR=/share/apps/cuda/ \
-DGMX_MPI=ON \
-DCMAKE_CXX_COMPILER=${MPICCDIR}mpicxx \
-DCMAKE_C_COMPILER=${MPICCDIR}mpicc \
-DGMX_BUILD_OWN_FFTW=ON
make -j16
make install
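To use the resulting build, a minimal sketch (assuming the install prefix above and an existing md.tpr; the rank count is arbitrary):

 source /share/apps/gmx-single-MPI-CUDA/bin/GMXRC
 /share/apps/openmpi/bin/mpirun -np 16 gmx_mpi mdrun -v -deffnm md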
==Gromacs 5.0.4 MPI, single, GPU==
Download source code to /share/apps/download
Read the Installation_Instructions
tar -zxvf ../gromacs-5.0.4.tar.gz
cd gromacs-5.0.4
mkdir build-gmx
cd build-gmx
export MPICCDIR=/share/apps/openmpi/bin/
cmake .. \
-DGMX_DOUBLE=OFF \
-DCMAKE_INSTALL_PREFIX=/share/apps/gmx-single-MPI-CUDA \
-DGMX_GPU=ON \
-DCUDA_TOOLKIT_ROOT_DIR=/share/apps/cuda/ \
-DGMX_MPI=ON \
-DCMAKE_CXX_COMPILER=${MPICCDIR}mpicxx \
-DCMAKE_C_COMPILER=${MPICCDIR}mpicc \
-DGMX_BUILD_OWN_FFTW=ON
make -j 8
make install
Points to note:
* Our fftw3 library is compiled only in double precision, so we have to add -DGMX_BUILD_OWN_FFTW=ON
* Version info here
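To run this build across the GPU nodes, a rough sketch (the rank count and GPU id mapping are assumptions, not tested values):

 source /share/apps/gmx-single-MPI-CUDA/bin/GMXRC
 /share/apps/openmpi/bin/mpirun -np 4 gmx_mpi mdrun -v -deffnm md -gpu_id 0011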
==Gromacs 5.0.4 MPI, double==
Download source code to /share/apps/download
Read the Installation_Instructions
tar -zxvf ../gromacs-5.0.4.tar.gz
cd gromacs-5.0.4
mkdir build-gmx
cd build-gmx
export MPICCDIR=/share/apps/openmpi/bin
CMAKE_PREFIX_PATH=/share/apps/fftw3 \
cmake .. \
-DGMX_DOUBLE=ON \
-DCMAKE_INSTALL_PREFIX=/share/apps/gmx-double-MPI \
-DGMX_X11=OFF \
-DCMAKE_CXX_COMPILER=${MPICCDIR}/mpicxx \
-DCMAKE_C_COMPILER=${MPICCDIR}/mpicc \
-DGMX_MPI=ON \
-DGMX_PREFER_STATIC_LIBS=ON
make -j 8
make install
Points to note:
* Version info here
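A minimal usage sketch for the double-precision MPI build (gmx_mpi_d follows the default GROMACS 5.x binary suffix scheme; the rank count is an assumption):

 source /share/apps/gmx-double-MPI/bin/GMXRC
 /share/apps/openmpi/bin/mpirun -np 32 gmx_mpi_d mdrun -v -deffnm md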
==Gromacs 5.0.4 CUDA, single==
Download source code to /share/apps/download
Read the Installation_Instructions
tar -zxvf ../gromacs-5.0.4.tar.gz
cd gromacs-5.0.4
mkdir build-gmx-cuda
cd build-gmx-cuda
cmake .. \
-DGMX_DOUBLE=OFF \
-DCMAKE_INSTALL_PREFIX=/share/apps/gmx-single-CUDA \
-DGMX_GPU=ON \
-DCUDA_TOOLKIT_ROOT_DIR=/share/apps/cuda \
-DGMX_BUILD_OWN_FFTW=ON
make -j 8
make install
Points to note:
* Our fftw3 library is compiled only in double precision, so we have to add -DGMX_BUILD_OWN_FFTW=ON
* As MPI was not enabled, this build runs only on a single machine. It is about to be replaced by the MPI-enabled, GPU-enabled build.
* Version info here
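Because this build has no real MPI, it is launched directly and parallelises with thread-MPI; a sketch (the thread counts are assumptions):

 source /share/apps/gmx-single-CUDA/bin/GMXRC
 gmx mdrun -ntmpi 2 -ntomp 8 -v -deffnm md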
==Gromacs 5.0.4 thread-MPI, double==
Download source code to /share/apps/download
Read the Installation_Instructions
tar -zxvf ../gromacs-5.0.4.tar.gz
cd gromacs-5.0.4
mkdir build-gmx
cd build-gmx
CMAKE_PREFIX_PATH=/share/apps/fftw3 \
cmake .. \
-DGMX_DOUBLE=ON \
-DCMAKE_INSTALL_PREFIX=/share/apps/gmx-double-multicore
make -j 8
make install
Points to note:
* This is the safest build; it runs on a single machine only, but across multiple cores.
* Version info here
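A usage sketch for this build (gmx_d is the default double-precision binary name in GROMACS 5.x; the thread count is an assumption):

 source /share/apps/gmx-double-multicore/bin/GMXRC
 gmx_d mdrun -nt 16 -v -deffnm md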
==[Oldest] Gromacs MPI Compilation==
Download gromacs-4.5.5.tar.gz into /usr/local/src/gromacs
Read the Installation_Instructions
1. Load Environment Modules:
module add intel/compiler/64/11.1/075
module add mvapich/intel/64/1.1
module add fftw3/intel/64/3.2.2
2. Set environment variables
export CCDIR=/cm/shared/apps/intel/Compiler/11.1/075/bin/intel64
export FFTW_LOCATION=/cm/shared/apps/fftw/intel/64/3.2.2
export MPICCDIR=/cm/shared/apps/mvapich/intel/64/1.1/bin
export CXX=mpicxx
export CC=mpicc
3. Set up the build environment
cd /usr/local/src/gromacs/
tar -zxvf ../gromacs-4.5.5.tar.gz
mv gromacs-4.5.5 gromacs-4.5.5-mpi
mkdir build-mpi
cd build-mpi
4. Make the MPI version
cmake ../gromacs-4.5.5-mpi \
-DFFTW3F_INCLUDE_DIR=$FFTW_LOCATION/include \
-DFFTW3F_LIBRARIES=$FFTW_LOCATION/lib/libfftw3f.a \
-DCMAKE_INSTALL_PREFIX=/cm/shared/apps/gromacs \
-DGMX_X11=OFF \
-DCMAKE_CXX_COMPILER=${MPICCDIR}/mpicxx \
-DCMAKE_C_COMPILER=${MPICCDIR}/mpicc \
-DGMX_MPI=ON \
-DGMX_PREFER_STATIC_LIBS=ON
make
make install-mdrun    (works; mdrun installed successfully)
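Once installed, mdrun can be launched through the cluster's MPI launcher. This is only a sketch; the installed binary name (no binary suffix was configured) and the exact MVAPICH launch command are assumptions:

 module add intel/compiler/64/11.1/075
 module add mvapich/intel/64/1.1
 # MVAPICH installations may provide mpirun_rsh instead of mpirun; check yours
 mpirun -np 16 /cm/shared/apps/gromacs/bin/mdrun -v -deffnm md   # adjust the process count to your allocation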