© Ashwin Pajankar 2017

Ashwin Pajankar, Raspberry Pi Supercomputing and Scientific Programming, 10.1007/978-1-4842-2878-4_5

5. Message Passing Interface

Ashwin Pajankar

(1)Nashik, Maharashtra, India

In the last chapter, we learned the history and philosophy of supercomputers. We also learned important concepts related to supercomputing.

In this short chapter, we will get started by installing the necessary packages and libraries on a Raspberry Pi. We will install MPI4PY, which is a Python library for MPI. Finally, we will install the utility nmap for node discovery.

Message Passing Interface

The Message Passing Interface (MPI) is a message passing library standard based on the recommendations of the MPI Forum, which has over 40 participating organizations in the USA and Europe. Its goal is to define a portable, efficient, and flexible standard that can be used to write a wide variety of message passing programs, and it is the first vendor-independent message passing library standard. Although MPI is not an IEEE or ISO standard, it has become the industry standard for writing message passing programs for platforms such as High Performance Computing (HPC) systems, parallel computers, clusters, and distributed systems. The MPI standard defines the syntax and semantics of library routines for writing portable message-passing programs in C, C++, and Fortran.

A few important facts related to MPI are the following:

  • MPI is a specification for libraries. MPI itself is not a library.

  • The goal of MPI is that the message passing standard should be practical, portable, efficient, and flexible.

  • Actual MPI libraries differ a bit depending on how the MPI standard is implemented.

  • The MPI standard has gone through several revisions. The most recent version is MPI-3.1.

Note

Explore the MPI Forum’s homepage ( www.mpi-forum.org ) and the MPI standard documentation page ( www.mpi-forum.org/docs/docs.html ) for more information on the MPI Forum and standard.

History and Evolution of the MPI Standard

On April 29–30, 1992, a workshop on Standards for Message Passing in a Distributed Memory Environment was held in Williamsburg, Virginia. The features essential to a standard message passing interface were discussed, and a working group was established to continue the standardization process. The working group met regularly from then on, and these meetings, together with the accompanying email discussion, led to the formation of the MPI Forum. The draft MPI standard was presented at the Supercomputing ’93 conference in November 1993. After a period of public comment, which resulted in some changes to the standard, version 1.0 of MPI was released in June 1994. The standardization effort involved about 80 people from 40 organizations in the United States and Europe. As of this writing, the latest version of the standard is MPI-3.1, which we will use for building the cluster.

Features of MPI

MPI is optimized for distributed systems with distributed memory and a network that connects all the nodes as depicted in Figure 5-1.

Figure 5-1. Distributed memory system

The following are the features of the Message Passing Interface:

  • Simplicity: The core of the MPI paradigm is a small set of traditional communication operations, such as send and receive.

  • Generality: It can be implemented on most systems built on parallel architectures.

  • Performance: The implementation can match the speed of the underlying hardware.

  • Scalability: The same program can be deployed on larger systems without making any changes to it.

We will study more details of MPI paradigms when we start learning how to code with MPI4PY.

Implementations of MPI

As we have seen, MPI is not a library but a standard for the development of message-passing libraries, so there are several implementations of it. The following are the most popular implementations:

  • MPICH

  • MP-MPICH (MP stands for multi-platform)

  • winmpich

  • MPI BIP

  • HP’s MPI

  • IBM’s MPI

  • SGI’s MPI

  • STAMPI

  • OpenMPI

MPI4PY

MPI4PY stands for MPI for Python. MPI for Python provides MPI bindings for Python, which allows any Python program to exploit computers with multiple processors. The package is built on top of the MPI-1/2/3 specifications and provides an object-oriented interface for parallel programming in Python. It supports point-to-point (send and receive) and collective (broadcast, scatter, and gather) communication of any Python object.

Figure 5-2 depicts the overview of MPI4PY.

Figure 5-2. Philosophy of MPI4PY

Why Use the Python, MPI, and MPI4PY Combination?

Python is one of the three most-used programming languages in High Performance Computing (HPC), alongside C and FORTRAN. As we have seen earlier, Python syntax is easy to understand and learn. MPI is the de facto standard for HPC and parallel programming; it is well established and has been in use since 1994. MPI4PY is a well-regarded, clean, and efficient set of Python bindings for MPI that covers most of the MPI-2 standard. That’s why we use Python 3 with MPI4PY on the Raspberry Pi for parallel programming.

Installing MPI4PY for Python3 on Raspbian

Installing MPI4PY for Python 3 on Raspbian is very simple. Just run the following command in LXTerminal:

sudo apt-get install python3-mpi4py -y

It will take a few minutes to install MPI4PY for Python 3.

To check that the MPI runtime was installed along with it, run the following command:

mpirun hostname

It should print the hostname (raspberrypi by default) as the output.

Run the following command to launch multiple processes:

mpirun -np 3 hostname

The output is as follows:

raspberrypi
raspberrypi
raspberrypi

We can also run the following command to check the MPI version installed on the system:

mpirun -V

This way we can install, run, and verify MPI4PY.

Note

Visit the mpirun manual page ( www.open-mpi.org/doc/v1.8/man1/mpirun.1.php ) for more details. In the later part of the book, we will use mpirun extensively with Python 3, and there we will study it in detail.

Installing nmap

nmap is a network security scanner. We are going to use it to discover the IP addresses of our Pis when we build the cluster in the next chapter. For now, just install nmap by running the following command:

sudo apt-get install nmap

Conclusion

In this chapter, we learned to prepare a Pi for supercomputing by installing MPI4PY. In the next chapter, we will build a supercomputer by connecting multiple Pis together.
