On a distributed-memory machine, a parallel program consists of multiple processes running on different processing elements. (Each processing element consists of a processor and its local memory, but as a shorthand they are often referred to simply as "processors".) The processes must coordinate their activities and communicate data via messages of some sort.
In the simplest form, a pair of routines suffices:

    send(int* buffer,  /* pointer to data  */
         int count,    /* number of items  */
         int dest)     /* who to send to   */

and

    recv(int* buffer,  /* pointer to data  */
         int count,    /* number of items  */
         int src)      /* who to recv from */

In reality, message-passing routines require more arguments, as we will see with MPI. Now suppose that the processes are assigned unique ID numbers from 0 to p-1; these are often called the ranks of the processes. If process 0 sends an integer k to process 1, it would call
    send(buffer=&k, count=1, dest=1);

and process 1 executes

    recv(buffer=&i, count=1, src=0);

Note that process 1 need not receive k into a local variable with the same name.
Although each process needs to execute different code, it is not necessary to write two different programs. Instead, a single program branches based on the executing process's rank:
    if (my_rank == 0)
        send(&k, 1, 1);
    else if (my_rank == 1)
        recv(&i, 1, 0);

This is the basis of SPMD (single program, multiple data) programming. Even this simple example of message passing raises two questions.