InstallingOld

The program uses autoconf, so you can use the usual ./configure && make && make install routine. Remember to set CXXFLAGS before running configure, though; the most important flag is -DNDEBUG (without it the program runs slowly and produces lots of debug messages). If all necessary libraries are installed and you are using GCC, the following minimal commands are enough to get the toolkit compiled in a directory called 'build'. (It is possible to compile the toolkit from the trunk/ directory, but that isn't recommended; in particular, if you want to commit files back to the repository, having lots of compiler output files in the source directory is a mess.)

svn co svn://svn.physics.uq.edu.au/ianmcc/mptoolkit/trunk
mkdir build
cd build
export CXXFLAGS="-DNDEBUG -O2"
../trunk/configure
make
make install

This is a very minimal set of compiler flags; for better results, add further optimization flags appropriate to your hardware setup (for further optimization guides see MKLOptimization).
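For example, on a reasonably recent gcc the following is a sensible starting point (a sketch only; -march=native needs a newer gcc, and the right flags ultimately depend on your compiler and CPU):

export CXXFLAGS="-DNDEBUG -O3 -march=native"   # older gcc: use an explicit -march, e.g. -march=k8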

The toolkit compiles with gcc, icc and (as of 2012-05-11) clang.

To install the executables in ~/bin, run make install. If make install fails with a permission error, check that the "mkinstalldirs" program has execute permissions. Try chmod u+x mkinstalldirs.

The above commands don't make any models. If you want all possible models, use make models (and make models_install if you want to install them), but this takes a while. You may prefer to build only the models that you need, e.g. make spinchain-u1.
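For example, from the build directory:

# build only the model(s) you need
make spinchain-u1

# or build (and install) every model -- this takes a while
make models
make models_install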

If this doesn't work, you may need to set up the C++ and Fortran compilers via the environment variables CXX, CFLAGS, CXXFLAGS, F77, F77FLAGS. Also, check that you have the dependencies installed. For the configure step to work properly, you should generally set CFLAGS to be the same as your CXXFLAGS; however, CFLAGS isn't used when compiling the toolkit itself.
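For example, to select the compilers and flags explicitly before running configure (the compiler names here are only illustrative; use whatever is installed on your system):

export CXX=g++
export F77=gfortran
export CXXFLAGS="-DNDEBUG -O2"
export CFLAGS="$CXXFLAGS"      # keep CFLAGS in sync; it is only used by configure tests
../trunk/configure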

Sometimes svn doesn't set the correct executable permissions on some files. If you get an error that "depend.sh" or "mkinstalldirs" is not executable, make sure they have the 'execute' flag set, e.g. with

chmod +x depend.sh
chmod +x mkinstalldirs

Required libraries:

  • GnuMP (arbitrary precision integer arithmetic) http://gmplib.org/
  • Boost (only a subset of boost is needed, the headers plus the program_options library) http://boost.org/
  • BLAS and LAPACK. Unfortunately, some Linux distributions ship only with the reference version of BLAS. This should be a criminal offense as it is usually much slower than an optimized version. If you have no pre-installed optimized BLAS library, try http://math-atlas.sourceforge.net/. Alternatives are the MKL or ACML.
  • Some tools require the ARPACK library. If you need to install this, try the packages from Andreas Klöckner at http://mathema.tician.de/software/arpack. The original (and hard to install) package is hosted at http://www.caam.rice.edu/software/ARPACK/. Most Linux distributions now offer arpack as standard, in the form of arpack-ng.
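As a rough guide, on a Debian-style distribution these dependencies can be installed with something like the following (the package names are an assumption and vary between distributions and releases):

sudo apt-get install libgmp-dev libboost-dev libboost-program-options-dev \
                     libblas-dev liblapack-dev libarpack2-dev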

Environment variables

Some tools make use of environment variables:

  • MP_PAGESIZE : This is the 'page size' used to store wavefunction files, and occasionally, checkpoint files. It should be set to an integer, and must be a multiple of the operating system page size. The default is 65536 (64K). In environments that have a high-performance network this should be set to the optimal transfer unit, which may be much bigger (eg, on an old AlphaCluster SC with a Quadrics network, the optimal transfer unit was 8MB).
  • MP_CACHESIZE : This is the size of the internal page cache, in bytes. In a machine with excess memory, it will improve performance somewhat if this is set as large as possible. Default is 655360 (640K), so it is generally much better to set this bigger (eg, 104857600 (100M) or larger).
  • MP_BINPATH : Many tools use additional disk space during the computation, that is not needed for the final output. Where possible, the actual output file is used temporarily, before being truncated at the end of the calculation. Where this is not possible (say, if the tool has no output wavefunction), a temporary file is created with the prefix MP_BINPATH, which could specify a directory. Default is to create the temporary file in the current directory.
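For example, a job script might set these before running the tools (the values are only illustrative, and the MP_BINPATH location is an assumption about where your scratch space lives):

export MP_PAGESIZE=65536            # must be a multiple of the OS page size
export MP_CACHESIZE=104857600       # 100M page cache; use more if memory allows
export MP_BINPATH=/scratch/$USER/   # prefix (here a directory) for temporary files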

Troubleshooting

The new Fortran compiler in gcc-4 seems not to work very well with the Fortran configure macros. It might be necessary to manually specify libg2c in the linker flags (libg2c doesn't exist in the g95 package, but its replacement, libgfortran, seems to be missing some symbols). For me, export LDFLAGS=/usr/lib/gcc/x86_64-pc-linux-gnu/3.4.5/libg2c.a did the trick, linking against the g2c library from an older version of gcc.

Compiling with gcc-4 and -pedantic gives lots of warnings of the form "warning: dereferencing type-punned pointer will break strict-aliasing rules". These occur only in some fortran (BLAS/LAPACK) prototypes, and are harmless.

Using an up-to-date compiler version on the lxtccl2 cluster

Currently (12.07.2008) gcc-4.2.3 in combination with acml4.1.0 can be used. Compiling works only on node19, but the resulting binaries can be executed on all nodes.

For the configure scripts to find the gcc you want, you need to modify the $PATH variable:

export PATH=/usr/local/opt/gcc-4.2.3/bin:$PATH

To make the programs actually run, you also need to adjust $LD_LIBRARY_PATH prior to execution (also in your job scripts!):

export LD_LIBRARY_PATH="$HOME/lib:/usr/local/opt/acml4.1.0/gfortran64_mp/lib:\
/usr/local/opt/gcc-4.2.3/lib64:/usr/local/opt/gcc-4.2.3/lib"

With that, boost can be compiled and installed as demonstrated below. You can prepare the mptoolkit with (adjust versions and private paths as appropriate):

export CXXFLAGS="-DNDEBUG -O3 -march=k8"
builddir@node19> path-to-configure --with-blas="-L/usr/local/opt/acml4.1.0/gfortran64_mp/lib \
      -lacml_mp -lacml_mv" --with-boost=$HOME/include/boost-1_35 --with-boost-program-options="-L$HOME/lib \
      -lboost_program_options-gcc42-mt"

After that just make whatever you like.

Example install for the lxtccl2 cluster

This cluster already has most of the software that we need; the only thing we need to do is install boost. Grab boost-1.33.1 from http://boost.org/. Unpack, configure and install:

@master:~> tar xfz boost_1_33_1.tar.gz
@master:~> cd boost_1_33_1
@master:~/boost_1_33_1> ./configure --prefix=$HOME --with-libraries=program_options
@master:~/boost_1_33_1> make
@master:~/boost_1_33_1> make install

This will install the boost program_options library into $HOME/lib. The final part of the install is to make sure the boost include files can be found by the compiler. One option would be to add -I$HOME/boost_1_33_1 to the $CXXFLAGS environment variable. Another option is to create a symlink from your home directory, so that $HOME/boost points to $HOME/boost_1_33_1/boost. In that location, the mptoolkit configure script will find the includes automatically.

@master:~> cd
@master:~> ln -s boost_1_33_1/boost boost
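If you prefer the first option (adding the include path to CXXFLAGS rather than creating the symlink), that would look like:

@master:~> export CXXFLAGS="$CXXFLAGS -I$HOME/boost_1_33_1"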

The lxtccl2 cluster now has subversion installed on it, so download the toolkit as usual, via an svn checkout from the server. (Previously svn was not available, and it was necessary to make an export'ed copy of the tree on another machine and copy it across.)

Before we run ./configure, we must set some appropriate compiler flags. A basic set would be

@master:~> export CXXFLAGS="-DNDEBUG -O3 -march=k8"

Without -DNDEBUG, the program will do lots of additional internal error checking and be quite slow. You might want this if there is some problem and you want to debug it, but for production calculations -DNDEBUG is very important. For architectures other than AMD64, substitute whatever is appropriate for the -march= option. There are lots of other gcc options that might (or might not) improve the performance; see Main.GccOptions.

Several BLAS and LAPACK libraries are installed on this machine, so we need to specify which one we want to use. The AMD acml library seems to work OK. To use it we can use the --with-blas configure option.

@master:~/mptoolkit> ./configure --with-blas="-L/usr/local/opt/acml3.1.0/gnu64/lib -lacml -lacml_mv"
@master:~/mptoolkit> make
@master:~/mptoolkit> make install

Note: if you are using pedantic options to gcc, you might encounter a minor bug in boost:

boost/mpl/print.hpp:62: error: comma at end of enumerator list

This is fixed by simply removing the comma; although the error is reported at line 62, the comma itself appears on line 60.
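One way to apply that one-character fix is with sed (GNU sed shown; check the header first, as the exact line number may differ in other boost versions):

sed -i '60s/,[[:space:]]*$//' $HOME/boost_1_33_1/boost/mpl/print.hpp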

The default make target only makes the tools, it does not make any model programs. If you want to make all models (takes a while), use make models install-models.

At the end of this process, all of the programs should be installed in $HOME/bin. If this is not already in your $PATH variable, now is a good time to add it (a login script is the best place). However, if we actually run one of the programs, we will get an error similar to

mp-dmrg: error while loading shared libraries: libacml_mv.so: cannot open shared object file: No such file or directory

The reason for this is that we told the compiler where to find the BLAS library, but we need to tell the runtime loader as well. This problem does not occur if the libraries are installed in a standard location. To get around it,

@master:~/mptoolkit> export LD_LIBRARY_PATH="/usr/local/opt/acml3.1.0/gnu64/lib:$HOME/lib"

We also added $HOME/lib, which is where we installed the boost libraries. LD_LIBRARY_PATH is another variable that is best initialized in a login script. You must make sure that LD_LIBRARY_PATH is initialized on the nodes as well, otherwise batch jobs will fail to start. Adding the export LD_LIBRARY_PATH=... line to your .bashrc (or .cshrc) file will solve this problem once and for all.
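Concretely, the lines to add to your login script for this example would be:

export PATH="$HOME/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/opt/acml3.1.0/gnu64/lib:$HOME/lib"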

Example install on the Solaris cluster cluster-sol10.rz.rwth-aachen.de

On this machine, we try using the Solaris C++ compiler. The GMP library is installed in /opt/csw. We need to set the CC, F77 and CFLAGS environment variables also, because `configure' uses the C compiler for some tests. The BLAS library is contained in libsunperf. You will need to build boost as usual.

export CXX=CC
export CC=cc
export F77=f77
export CXXFLAGS="-fast -DNDEBUG"
export CFLAGS="-fast -DNDEBUG"
./configure --with-gmp="/opt/csw" --with-blas="-lsunperf" --with-boost="$HOME/boost_1_33_1" \
  --with-boost-program-options="-L$HOME/lib -lboost_program_options-gcc"

Unfortunately, this still fails because the Sun compiler fails to compile some boost headers.

Using gcc, the final working configure line was

configure --with-gmp="/opt/csw" --with-blas="-L/opt/Studio11/SUNWspro/lib/v8plus \
    -lsunperf" --with-boost="$HOME/boost_1_33_1" \
    --with-boost-program-options="-L$HOME/lib -lboost_program_options-gcc"

Remember to use gmake, not the horribly broken Sun version of make.

Example install on the Linux cluster cluster-linux.rz.rwth-aachen.de

This should be straightforward, once Boost and BLAS libraries are configured properly. I'm not sure what the default BLAS library is, but I don't trust it. Try --with-blas="-L/usr/local_rwth/lib -lacml_gnu64". To be able to use the Sun and Linux halves of the cluster simultaneously, I made a subdirectory $HOME/lib/x86_64 where I put the boost library files (leaving the headers in $HOME/boost_1_33_1) and used --with-boost=$HOME/boost_1_33_1 --with-boost-program-options="-L$HOME/lib/x86_64 -lboost_program_options-gcc".
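Putting those options together, the configure invocation on the Linux half would look roughly like this (paths as above; adjust them to your own setup):

./configure --with-blas="-L/usr/local_rwth/lib -lacml_gnu64" \
    --with-boost=$HOME/boost_1_33_1 \
    --with-boost-program-options="-L$HOME/lib/x86_64 -lboost_program_options-gcc"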

Intel's MKL and the toolkit

It is quite straightforward to compile the toolkit against the Math Kernel Library provided by Intel (see MKLOptimization). If you want to do so, it might be worthwhile to take a look at Intel's MKL link line advisor.
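As a sketch only (the exact libraries depend on your MKL version, compiler and threading model, which is precisely what the link line advisor tells you; $MKLROOT is assumed to be set by Intel's environment scripts), a sequential 64-bit link might look like:

./configure --with-blas="-L$MKLROOT/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm"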

Explicit walkthroughs for several systems

All these instructions assume you are capable of adjusting your runtime environment for your production jobs according to the installation environment, especially when dealing with different machine architectures. These examples use $(uname -m) to build hardware-dependent paths, and cover only the installation process, NOT the setup of a working queueing system environment.
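For example, a hardware-dependent install prefix can be built like this (a sketch; the directory layout is just a convention, not something the toolkit requires):

./configure --prefix=$HOME/opt/$(uname -m)
make && make install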
