Sample MPI Source Codes (Parallel Processing)

By: Admin | 4 March 2016 | Parallel Processing course | 6,663 views

http://www.cse.iitd.ernet.in/~dheerajb/MPI/Document/hos_cont.html

1.3 List of MPI Programs in FORTRAN and C

Example 1 : MPI program to print "Hello World"
(Download source code ; hello_world.c / hello_world.f)

Example 2 : MPI program to find the sum of n integers using MPI point-to-point blocking communication library calls (a minimal sketch of this pattern follows this group of examples)
(Download source code ; sum_pt_to_pt.c / sum_pt_to_pt.f)

Example 3 : MPI program to find the sum of n integers on a parallel computer in which the processors are arranged in a linear array topology, using MPI point-to-point blocking communication library calls
(Download source code ; linear_topology.c / linear_topology.f)

Example 4 : MPI program to find the sum of n integers on a parallel computer in which the processors are arranged in a ring topology, using MPI point-to-point blocking communication library calls
(Download source code ; ring_topology.c / ring_topology.f)

Example 5 : MPI program to find the sum of n integers on a parallel computer in which the processors are arranged in a binary tree topology (associative fan-in rule), using MPI point-to-point blocking communication library calls
(Download source code ; fan_in_blocking.c / fan_in_blocking.f)

Example 6 : MPI program to find the sum of n integers on a parallel computer in which the processors are arranged in a binary tree topology (associative fan-in rule), using MPI point-to-point non-blocking communication library calls
(Download source code ; fan_in_nonblocking.c / fan_in_nonblocking.f)

Example 7 : MPI program to compute the value of PI by numerical integration using MPI point-to-point library calls
(Download source code ; pie_pt_to_pt.c / pie_pt_to_pt.f)
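
Examples 2 through 7 all build on the same blocking send/receive skeleton; Examples 3 to 6 only change how the partial sums travel (linear array, ring, or binary tree). As a rough illustration, not one of the downloadable programs, the sketch below sums the process ranks directly on rank 0 using only MPI_Send and MPI_Recv; the variable names and the choice of summing ranks are assumptions made for the sketch.

#include <stdio.h>
#include "mpi.h"

/* Sketch: every non-root process sends one integer (its own rank) to
   rank 0 with a blocking MPI_Send; rank 0 accumulates the values with
   matching blocking MPI_Recv calls. */
int main(int argc, char *argv[])
{
    int my_rank, num_procs, value, sum = 0, source;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    if (my_rank != 0) {
        /* Workers: one blocking send of a single MPI_INT to the root. */
        MPI_Send(&my_rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
    else {
        /* Root: receive one value from each worker and accumulate. */
        for (source = 1; source < num_procs; source++) {
            MPI_Recv(&value, 1, MPI_INT, source, 0, MPI_COMM_WORLD, &status);
            sum += value;
        }
        printf("Sum of the ranks 1..%d is %d\n", num_procs - 1, sum);
    }

    MPI_Finalize();
    return 0;
}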
 
Example 8 : MPI program to scatter n integers using MPI collective communication library calls
(Download source code ; scatter.c / scatter.f)
(Download input file ; sdata.inp)

Example 9 : MPI program to gather n integers from each of p processes and make the resultant gathered data (n·p integers) available on every process, using MPI collective communication library calls
(Download source code ; allgather.c / allgather.f)
(Download input files ; gdata0, gdata1, gdata2, gdata3, gdata4, gdata5, gdata6, gdata7, or gdata.tar)

Example 10 : MPI program to find the sum of n integers using MPI collective communication and computation library calls (see the sketch after this group)
(Download source code ; reduce.c / reduce.f)

Example 11 : MPI program to compute the value of PI by numerical integration using MPI collective communication library calls
(Download source code ; pie_collective.c / pie_collective.f)
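
For comparison with the point-to-point versions above, here is a minimal sketch of the collective style used in Example 10: a single MPI_Reduce with the MPI_SUM operation replaces the explicit send/receive traffic. Summing the process ranks is again only an illustrative stand-in for the program's real input.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int my_rank, num_procs, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    /* One collective call combines every process's contribution with
       MPI_SUM and leaves the result on the root (rank 0). */
    MPI_Reduce(&my_rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (my_rank == 0)
        printf("Sum of the ranks 0..%d is %d\n", num_procs - 1, sum);

    MPI_Finalize();
    return 0;
}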
  
Example 12 : MPI program to construct a communicator consisting of the group of diagonal processes in a square grid of processes, using MPI groups library calls (see the sketch below)
(Download source code ; diag_comm.c / diag_comm.f)
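
Example 12 hinges on three calls: MPI_Comm_group, MPI_Group_incl, and MPI_Comm_create. The sketch below shows the general pattern, assuming a q x q square grid of processes (p = q*q); it is not the downloadable diag_comm.c, and the variable names are invented for the sketch.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int my_rank, num_procs, q, i;
    int diag_ranks[64];                 /* enough for grids up to 64 x 64 */
    MPI_Group world_group, diag_group;
    MPI_Comm diag_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    /* Grid dimension; this sketch assumes num_procs is a perfect square. */
    for (q = 1; q * q < num_procs; q++)
        ;
    for (i = 0; i < q; i++)
        diag_ranks[i] = i * q + i;       /* rank of the i-th diagonal process */

    /* Pick the diagonal ranks out of the world group and build a
       communicator over them; processes off the diagonal receive
       MPI_COMM_NULL from MPI_Comm_create. */
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Group_incl(world_group, q, diag_ranks, &diag_group);
    MPI_Comm_create(MPI_COMM_WORLD, diag_group, &diag_comm);

    if (diag_comm != MPI_COMM_NULL) {
        int diag_rank;
        MPI_Comm_rank(diag_comm, &diag_rank);
        printf("World rank %d is diagonal rank %d\n", my_rank, diag_rank);
        MPI_Comm_free(&diag_comm);
    }

    MPI_Group_free(&diag_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}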

Example 13 : MPI program to compute the dot product of two vectors using block-striped partitioning with uniform data distribution
(Download source code ; vv_mult_blkstp_unf.c / vv_mult_blkstp_unf.f)
(Download input files ; vdata1.inp and vdata2.inp)

Example 14 : MPI program to compute the dot product of two vectors using block-striped partitioning with non-uniform data distribution
(Download source code ; vv_mult_blkstp_nonunf.c / vv_mult_blkstp_nonunf.f)
(Download input files ; vdata1.inp and vdata2.inp)

Example 15 : MPI program to compute the dot product of two vectors using block-striped partitioning with cyclic data distribution
(Download source code ; vv_mult_blk_cyclic.c / vv_mult_blk_cyclic.f)
(Download input files ; vdata1.inp and vdata2.inp)

Example 16 : MPI program to compute the infinity norm of a matrix using block-striped partitioning and uniform data distribution
(Download source code ; mat_infnorm_blkstp.c / mat_infnorm_blkstp.f)
(Download input files ; infndata.inp)

Example 17 : MPI program to compute matrix-vector multiplication using a self-scheduling algorithm
(Download source code ; mv_mult_master_sschd.c and mv_mult_slave_sschd.c / mv_mult_master_sschd.f and mv_mult_slave_sschd.f)
(Download input files ; mdata.inp and vdata.inp)

Example 18 : MPI program to compute matrix-vector multiplication using block-striped row-wise partitioning with uniform data distribution
(Download source code ; mv_mult_blkstp.c / mv_mult_blkstp.f)
(Download input files ; mdata.inp and vdata.inp)

Example 19 : MPI program to compute matrix-vector multiplication using block checkerboard partitioning
(Download source code ; mv_mult_checkerboard.c / mv_mult_checkerboard.f)
(Download input files ; mdata.inp and vdata.inp)

Example 20 : MPI program to compute matrix-matrix multiplication using a self-scheduling algorithm
(Download source code ; mm_mult_master_sschd.c and mm_mult_slave_sschd.c / mm_mult_master_sschd.f and mm_mult_slave_sschd.f)
(Download input files ; mdata1.inp and mdata2.inp)

Example 21 : MPI program to compute matrix-matrix multiplication using block checkerboard partitioning and MPI Cartesian topology (a short sketch of the Cartesian-topology setup follows this list)
(Download source code ; mm_mult_cartesian.c / mm_mult_cartesian.f)
(Download input files ; mdata1.inp and mdata2.inp)

Example 22 : MPI program to compute matrix-matrix multiplication using block checkerboard partitioning and the Cannon algorithm
(Download source code ; mm_mult_cannon.c)
(Download input files ; mdata1.inp and mdata2.inp)

Example 23 : MPI program to compute matrix-matrix multiplication using block checkerboard partitioning and the Fox algorithm
(Download source code ; mm_mult_fox.c)
(Download input files ; mdata1.inp and mdata2.inp)

Example 24 : MPI parallel algorithm for the solution of a matrix system of linear equations by the Jacobi method
(Download source code ; jacobi.c / jacobi.f)
(Download input files ; mdatjac.inp and vdatjac.inp)

Example 25 : MPI program for the solution of a matrix system of linear equations by the Conjugate Gradient method
(Download source code ; congrad.c / congrad.f)
(Download input files ; mdatcg.inp and vdatcg.inp)

Example 26 : MPI program for the solution of a matrix system of linear equations A x = b by the Gaussian elimination method
(Download source code ; gauss_elimination.c / gauss_elimination.f)
(Download input files ; mdatgaus.inp and vdatgaus.inp)

Example 27 : MPI program for sparse matrix-vector multiplication using block-striped partitioning
(Download source code tar file ; sparse_matvect_c.tar / sparse_matvect_fort.tar)
(Download input files ; mdat_sparse.inp and vdat_sparse.inp)

Example 28 : MPI program for sorting n integers using sample sort
(Download source code ; samplesort.c)

Example 29 : MPI program for the solution of a PDE (Poisson equation) by the finite difference method
(Download source code tar file ; poisson_fort.tar)
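
Examples 21 to 23 all begin by arranging the processes on an MPI Cartesian grid. The sketch below covers only that setup step (it is not one of the downloadable programs and assumes the number of processes is a perfect square): MPI_Cart_create builds a periodic q x q grid, MPI_Cart_coords reports each process's grid coordinates, and MPI_Cart_shift finds the neighbours used for the circular block shifts of the Cannon and Fox algorithms.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int world_rank, num_procs, grid_rank, q;
    int dims[2], periods[2], coords[2];
    int up, down, left, right;
    MPI_Comm grid_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    /* Grid dimension; this sketch assumes num_procs is a perfect square. */
    for (q = 1; q * q < num_procs; q++)
        ;

    dims[0] = dims[1] = q;           /* q x q process grid */
    periods[0] = periods[1] = 1;     /* wrap around in both dimensions */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &grid_comm);

    /* Ranks may be reordered inside the Cartesian communicator. */
    MPI_Comm_rank(grid_comm, &grid_rank);
    MPI_Cart_coords(grid_comm, grid_rank, 2, coords);

    /* Neighbours one step away along each dimension; these are the
       partners for the circular block shifts used by Cannon and Fox. */
    MPI_Cart_shift(grid_comm, 0, 1, &up, &down);
    MPI_Cart_shift(grid_comm, 1, 1, &left, &right);

    printf("World rank %d -> grid (%d,%d); up %d, down %d, left %d, right %d\n",
           world_rank, coords[0], coords[1], up, down, left, right);

    MPI_Comm_free(&grid_comm);
    MPI_Finalize();
    return 0;
}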

Sample MPI source code: computing the sum of the process ranks, 1 through N-1, around a ring (N is the number of processes given at launch)

/*
**********************************************************************

Example 4 ( ring_topology.c )

Objective : To find the sum of 'n' integers on 'p' processors using
            point-to-point communication library calls and a ring
            topology.
            This example demonstrates the use of
            MPI_Init
            MPI_Comm_rank
            MPI_Comm_size
            MPI_Send
            MPI_Recv
            MPI_Finalize

Input : Automatic input generation.
        The rank of each process is the input on each process.

Output : The process with rank 0 prints the sum of the 'n' values.

Necessary Condition : The number of processes should be at least 2
                      and less than or equal to 8.

***********************************************************************
*/

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int MyRank, Numprocs, Root = 0;
    int value, sum = 0;
    int Source, Source_tag;
    int Destination, Destination_tag;
    MPI_Status status;

    /* Initialize MPI */
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &Numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &MyRank);

    if (MyRank == Root) {
        /* The root starts the ring by sending its rank (0) to process 1. */
        Destination = MyRank + 1;
        Destination_tag = 0;
        MPI_Send(&MyRank, 1, MPI_INT, Destination, Destination_tag, MPI_COMM_WORLD);
    }
    else {
        if (MyRank < Numprocs - 1) {
            /* Intermediate processes add their rank to the partial sum
               received from the left neighbour and pass it to the right. */
            Source = MyRank - 1;
            Source_tag = 0;

            MPI_Recv(&value, 1, MPI_INT, Source, Source_tag, MPI_COMM_WORLD, &status);
            sum = MyRank + value;
            Destination = MyRank + 1;
            Destination_tag = 0;
            MPI_Send(&sum, 1, MPI_INT, Destination, Destination_tag, MPI_COMM_WORLD);
        }
        else {
            /* The last process in the ring only receives and accumulates. */
            Source = MyRank - 1;
            Source_tag = 0;
            MPI_Recv(&value, 1, MPI_INT, Source, Source_tag, MPI_COMM_WORLD, &status);
            sum = MyRank + value;
        }
    }

    if (MyRank == Root) {
        /* The root receives the final sum from the last process and prints it. */
        Source = Numprocs - 1;
        Source_tag = 0;
        MPI_Recv(&sum, 1, MPI_INT, Source, Source_tag, MPI_COMM_WORLD, &status);
        printf("MyRank %d Final SUM %d\n", MyRank, sum);
    }

    if (MyRank == (Numprocs - 1)) {
        /* The last process closes the ring by sending the total back to rank 0. */
        Destination = 0;
        Destination_tag = 0;
        MPI_Send(&sum, 1, MPI_INT, Destination, Destination_tag, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
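
With a typical MPI installation the example can be compiled and launched roughly as follows (the exact compiler wrapper and launcher names depend on the MPI distribution, so treat the commands as an assumption); remember that the program needs at least two and at most eight processes:

mpicc -o ring_topology ring_topology.c
mpirun -np 4 ./ring_topology

With four processes, rank 0 should print MyRank 0 Final SUM 6, i.e. 0 + 1 + 2 + 3.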
