DISTRIBUTED SYSTEM
CHAPTER 10: CASE STUDY

Message Passing Interface 

The Message Passing Interface (MPI) is a standardized and portable message-passing system designed to function on a wide variety of parallel computing architectures. MPI is widely used for communication in distributed systems, especially in high-performance computing (HPC) environments. It provides a set of library routines that can be used to implement parallel algorithms and exchange information between processes.

Process Model:

  • Communicators: MPI processes are organized into groups, and communicators are used to define the scope of communication. The default communicator, MPI_COMM_WORLD, includes all processes.
  • Ranks: Each process in an MPI program is assigned a unique identifier called a rank, which is used to specify the source and destination of messages, as shown in the sketch after this list.
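
A minimal sketch of this process model in C, assuming the standard mpi.h header and an MPI launcher such as mpirun are available (the file name hello_mpi.c is only illustrative):

    /* hello_mpi.c - each process reports its rank within MPI_COMM_WORLD */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size;

        MPI_Init(&argc, &argv);                  /* initialize the MPI environment */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* unique rank of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

        printf("Process %d of %d reporting\n", rank, size);

        MPI_Finalize();                          /* clean up the MPI environment */
        return 0;
    }

Compiled with mpicc and run with, for example, mpirun -np 4 ./hello_mpi, each of the four processes prints its own rank.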

MPI Functions

  1. Initialization and Finalization:

    • MPI_Init: Initializes the MPI environment.
    • MPI_Finalize: Cleans up the MPI environment.
  2. Point-to-Point Communication (illustrated in the first sketch after this list):

    • MPI_Send: Sends a message to a specified process.
    • MPI_Recv: Receives a message from a specified process.
    • MPI_Isend: Initiates a non-blocking send operation.
    • MPI_Irecv: Initiates a non-blocking receive operation.
  3. Collective Communication (illustrated in the second sketch after this list):

    • MPI_Bcast: Broadcasts a message from one process to all other processes.
    • MPI_Scatter: Distributes distinct chunks of data from one process to all processes.
    • MPI_Gather: Gathers distinct chunks of data from all processes to one process.
    • MPI_Allgather: Gathers data from all processes and distributes it to all processes.
    • MPI_Reduce: Reduces values from all processes to a single value using a specified operation.
    • MPI_Allreduce: Similar to MPI_Reduce, but the result is distributed to all processes.
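
A small point-to-point sketch in C, assuming the job is launched with exactly two processes: rank 0 sends an integer to rank 1 with MPI_Send, and rank 1 receives it with MPI_Recv.

    /* send_recv.c - point-to-point transfer between rank 0 and rank 1 */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, value;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;                                          /* data to transfer */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* dest = 1, tag = 0 */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);                         /* source = 0, tag = 0 */
            printf("Rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }

MPI_Send blocks until the send buffer can be reused, whereas the non-blocking variants MPI_Isend and MPI_Irecv return immediately and are completed later, for example with MPI_Wait.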
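
Along the same lines, a collective sketch in C: rank 0 broadcasts a value with MPI_Bcast, every process adds its own rank to it, and MPI_Reduce sums the contributions back at rank 0. This works for any number of processes.

    /* collective.c - broadcast followed by a sum reduction at rank 0 */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, base = 0, local, total;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) base = 100;                        /* value known only at the root */
        MPI_Bcast(&base, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* now every process has it */

        local = base + rank;                              /* each process contributes */
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Sum across all processes: %d\n", total);

        MPI_Finalize();
        return 0;
    }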

Advantages of MPI

  1. Portability: MPI is platform-independent and can run on various architectures, from laptops to supercomputers.
  2. Scalability: Designed to scale efficiently to large numbers of processors.
  3. Flexibility: Supports a wide range of communication patterns and operations.
  4. Performance: Optimized for high performance on distributed systems.

Applications

  • Scientific Computing: Simulations and numerical computations that require high-performance parallel processing.
  • Data Analysis: Large-scale data processing and analytics, particularly in environments like HPC clusters.
  • Weather Modeling: Running complex weather simulations that involve massive amounts of data and computations.