Installing Boost 1.56 with MPI (Ubuntu 12.04, 13.04, 14.04, Fedora)

Finally, one of my favourite aspects of C++. Boost is one of the best-received additions to the C++ standard library that is out there. So much so, in fact, that you can see a lot of its implementation absorbed into the new standard, C++11; cases in point being shared pointers, initialisation of static containers such as std::map, the foreach-style loop, boost::array (now std::array) and the addition of unordered sets, and of course this is just a drop in the ocean of what else is now included. Boost is well developed and has been well supported over the years. Covering Boost would take forever and a day, and I am certainly no authority on the subject; however, to follow on from the promise in the last blog, here is how to install this wonderful toolkit.

Boost is extremely simple to install. As usual, you can try "sudo apt-cache search boost" (Ubuntu) or "yum search boost" (Fedora) first and see if there are pre-compiled libraries for your distro. As you may have guessed by now, I prefer the manual install, as you get the latest release where the repo is usually a few versions behind. For a full install of everything you will also need the Python libraries and the ICU libraries (if you want regex pattern matching), which are easily obtained via the "apt-get" method and are included in this script. Navigate to the Boost homepage and download the release, copy the script below once inside the downloaded directory and run it; it will install Boost in your /usr/local area as per usual. Please read the script: NEVER EVER install something if you are unsure. It is quite simple to follow, however, and follows the basic instructions on the Boost web pages! As of 25/07/2013 the script worked for Ubuntu 13.04.

# Get the version of Boost that you require. This is for 1.56 but feel free to change or manually download yourself
wget -O boost_1_56_0.tar.gz ""
tar xzvf boost_1_56_0.tar.gz
cd boost_1_56_0/
# Now we are inside the boost directory, make the bootstrap script executable before running it.
chmod +x

The script is given here if you wish to cut and paste it.

# Matthew M Reid 10/01/2013. Please use and distribute under GNU licence.
# This script will compile boost on the maximum number of physical cores.
# If you plan to build the Parallel Graph Libraries you may find the
# following warning: Graph library does not contain MPI-based parallel components.
# note: to enable them, add "using mpi ;" to your user-config.jam. The script does this for you.

# Get the required libraries, main ones are icu for boost::regex support
echo "Getting required libraries..."
sudo apt-get update
sudo apt-get install build-essential g++ python-dev autotools-dev libicu-dev libbz2-dev libzip-dev

# install destination; change installDir if you want Boost somewhere other than /usr/local
installDir=/usr/local
./ --prefix=$installDir

# pipe "using mpi ;" to the config file so that mpi is enabled
user_configFile=`find $PWD -name project-config.jam`
mpicxx_exe=`which mpic++` # required in case the mpi wrappers are not on root's PATH
echo "using mpi : $mpicxx_exe ;" >> $user_configFile

# Build using maximum number of physical cores
n=`cat /proc/cpuinfo | grep "cpu cores" | uniq | awk '{print $NF}'`

# begin install
sudo ./b2 -j $n cxxflags="-std=c++11" link=shared,static install

echo "$installDir/lib" | sudo tee -a /etc/
sudo ldconfig -v

This will install Boost.MPI. If you do not want or require the MPI part of the installation, comment out the line starting with echo "using mpi" by placing a hash, #, before the echo command. That's it! One point to note: make an entry in your .bashrc for easy compilation, e.g. export BOOSTROOT=/usr/local/boost-1.56.0. For those who installed with the MPI option, you can run a test using the code I provide below. As I mentioned in a previous blog, serialisation is one of the coolest things about Boost.MPI; more on that once I get a minute to write something interesting.

// M. Reid - Boost.cpp - Test of the boost mpi interface

#include <iostream>
#include <string>

// The boost headers
#include "boost/mpi.hpp"

int main(int argc, char* argv[]) {
    // Allows you to query the MPI environment
    boost::mpi::environment env( argc, argv );
    std::string processor_name( env.processor_name() );
    // Permits communication and synchronisation among a set of processes
    boost::mpi::communicator world;
    unsigned int rank( world.rank() ), numprocessors( world.size() );
    if ( rank == 0 ) {
        std::cout << "Processor name: " << processor_name << "\n";
        std::cout << "Master (" << rank << "/" << numprocessors << ")\n";
    } else {
        std::cout << "Slave  (" << rank << "/" << numprocessors << ")\n";
    }
    return 0;
}

To compile, you can check any required library names by looking in /usr/local/lib for files named libboost_xxxx; find the one you want, remove the leading "lib" and any trailing suffix and version numbers, and pass the remainder with -l. To check the installation has worked, try compiling the code posted above ("Boost.cpp") with the following,

mpic++ -W -Wall Boost.cpp -o Boost -lboost_mpi -lboost_serialization -lboost_system -lboost_filesystem -lboost_graph_parallel -lboost_iostreams
matt@matt-W250ENQ-W270ENQ:$ mpirun -np 4 Boost
Processor name: matt-W250ENQ-W270ENQ
Master (0/4)
Slave  (1/4)
Slave  (2/4)
Slave  (3/4)

Or stick that in a makefile or CMake (I will eventually get around to doing something on CMake; ping me if you are interested) or whichever your favourite build aid may be. Clearly several of these libraries are redundant in this example, since we do not require linking to the graph, filesystem, serialization, iostreams or system libraries for this code to work. Enjoy, more to come in due course.

Installing Open MPI 1.6.5 (Ubuntu 12.04, 13.04, Fedora)

The OpenMPI logo. A box filled with useful tricks!
Open MPI is one of the most liberating tools out there. In a world where time is of the essence and most computers these days have more than one core, why let those cores sit around idle doing nothing? Put them to good use!

Open MPI is an implementation of the Message Passing Interface (MPI) and allows you to build a whole manner of parallelised computing applications. For an in-depth idea of what it can do, visit the website, but a few key things that are useful to know are:

  1. Collective functions: Main example is the MPI Reduce function, which allows you to perform simple operations such as a summation using the family of processors to do so.
  2. Point-to-Point communication: The most common example I can think of is a head node splitting a dataset into smaller memory chunks based on the number of sub-processes available, where each then computes the same task in parallel. This pattern is usually called a master-to-slave process.
  3. Communicators: These connect all the MPI processes into groups, called process groups. An example is the world communicator which contains attributes such as the size (number of processors) and rank (ordered topology) of the group. They also allow for the manipulation of process groups.
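To make the Reduce example in point 1 concrete, here is a minimal sketch of my own (a toy example, not taken from the Open MPI documentation); each process contributes one integer and the sum lands on the root. It assumes a working MPI installation with the mpic++ wrapper and is launched with mpirun, so it will not build without MPI present.

```cpp
#include <iostream>
#include <mpi.h>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each process contributes its own value; here simply rank + 1.
    int local = rank + 1;
    int total = 0;

    // Collective operation: sum the local values across the process
    // group, depositing the result on the root (rank 0).
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        std::cout << "Sum over " << size << " processes = " << total << "\n";
    }

    MPI_Finalize();
    return 0;
}
```

With "mpirun -np 4" the root gathers 1+2+3+4, so it should report a sum of 10.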

How to Install
Open MPI is relatively simple to install. I should point out I have not tried this on a Fedora machine, but as long as you have the same libraries/dependencies it should be the same procedure. There are two methods; the first (and not my preferred method!) is simply to install the latest version from the online repositories, which as of today was 1.6.5. To do this you can

sudo apt-get update
sudo apt-get install -y autotools-dev g++ build-essential openmpi1.5-bin openmpi1.5-doc libopenmpi1.5-dev
# remove any obsolete libraries
sudo apt-get autoremove

That should get you what you need; if you are on Fedora, simply "sudo yum search openmpi", which should bring up something similar to openmpi, openmpi-devel. The next method, and my preferred way as you get the most recent version (currently 1.6.5), is to take it directly from the website. The script below worked for me on Ubuntu 13.04, tested on 25/07/2013.

The script below will install Open MPI in your /usr/local area; this can be modified by changing the parameter installDIR in the script to the desired location. After the install the libraries are placed in $installDIR/lib/openmpi and you can begin playing with Open MPI. One thing to note is that I apply ldconfig to /usr/local/lib, which is a much better method than setting paths explicitly. To do this you need your ldconfig configuration to look in the /usr/local area if it doesn't already. With Ubuntu you may already have this linked up, so check whether your machine has a file under /etc/ with a line explicitly showing "/usr/local/lib"; if it does, that should be all and you can ignore this step, else you can add the path using the following:

echo "/usr/local/lib" | sudo tee -a /etc/
sudo ldconfig -v

I prefer this method as you do not have to keep adding things to your LD_LIBRARY_PATH all the time, which is not really recommended; there are plenty of write-ups making the case against setting this path. I should have mentioned this in previous blogs too!

# Matthew M Reid. install open mpi shell script

# install destination
installDIR=/usr/local
# First get necessary dependencies
sudo apt-get update
sudo apt-get install -y gfortran autotools-dev g++ build-essential autoconf automake 
# remove any obsolete libraries
sudo apt-get autoremove

# Build using maximum number of physical cores
n=`cat /proc/cpuinfo | grep "cpu cores" | uniq | awk '{print $NF}'`

# grab the necessary files
wget ""
tar xzvf openmpi-1.6.5.tar.gz
cd openmpi-1.6.5

echo "Beginning the installation..."
./configure --prefix=$installDIR
make -j $n
sudo make install
# with environment set do ldconfig
sudo ldconfig
echo "...done."

So finally to test the installation here is a simple example that just prints out some information from each processor.

#include <iostream>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int numprocessors, rank, namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocessors);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(processor_name, &namelen);

    if ( rank == 0 ) {
        std::cout << "Processor name: " << processor_name << "\n";
        std::cout << "Master (" << rank << "/" << numprocessors << ")\n";
    } else {
        std::cout << "Slave  (" << rank << "/" << numprocessors << ")\n";
    }
    MPI_Finalize();
    return 0;
}

When compiling C++ with Open MPI you can use the compiler wrapper, mpic++, which makes compiling much easier. On execution you call mpirun, where you can specify the number of processes via the -np flag. The output from my local machine is as follows; don't worry about the order in which the lines are spat back at you.

matt@matt-W250ENQ-W270ENQ:$ mpic++ -W -Wall test.cpp -o test.o
matt@matt-W250ENQ-W270ENQ:$ mpirun -np 4 test.o
Slave   (3/4)
Processor name: matt-W250ENQ-W270ENQ
Master  (0/4)
Slave   (1/4)
Slave   (2/4)

The thing I would like to come onto next is Boost.MPI. This is a very nice interface to the MPI framework and also allows the use of the Parallel Graph Library, which has been well developed and implemented. So the next blog will be about installing Boost, which alone has a vast amount to offer, followed by some examples of the Boost.MPI framework in action. The really clever part is the Boost.Serialization architecture, which allows you to send more complex data structures, such as user-defined classes, via the MPI framework, so you can make "almost" anything parallel.
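As a taster, here is a hedged sketch of what that looks like: a hypothetical Particle class of my own invention, given a serialize() member in the Boost.Serialization style, which Boost.MPI can then ship between ranks with plain send/recv. It assumes the Boost install above and linking against boost_mpi and boost_serialization, and needs mpirun with at least two processes.

```cpp
#include <boost/mpi.hpp>
#include <boost/serialization/string.hpp>
#include <string>
#include <iostream>

// A user-defined type; Particle and its members are purely illustrative.
struct Particle {
    std::string name;
    double energy;

    // Boost.Serialization hook: tells the archive how to pack the members,
    // which is what lets Boost.MPI transmit the whole object.
    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & name;
        ar & energy;
    }
};

int main(int argc, char* argv[]) {
    boost::mpi::environment env(argc, argv);
    boost::mpi::communicator world;

    if (world.rank() == 0) {
        Particle p{ "muon", 105.7 };
        world.send(1, 0, p);   // send the object to rank 1 with tag 0
    } else if (world.rank() == 1) {
        Particle p;
        world.recv(0, 0, p);   // p is reconstructed on this rank
        std::cout << "Received " << p.name << " with energy "
                  << p.energy << "\n";
    }
    return 0;
}
```

Compile with something like "mpic++ -std=c++11 particle.cpp -o particle -lboost_mpi -lboost_serialization" and run with "mpirun -np 2 particle".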