Grid Computing

FIT5164: Week 7

I am still struggling to follow the lecture material in Grid Computing, but am nonetheless learning a lot and finding my feet in the more practical tutorials.

Week 7’s lecture discussed:

  • Grid Resources
  • Resource Specification
  • Grid Processing
  • Grid Application Hosting

Although I don't yet have a solid understanding of web services, I can see that this architecture makes sense.

In the tutorial we covered the MPI libraries for C and C++; this was followed up in the next tutorial, where we used the WS GRAM job management client, globusrun-ws.
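From memory, a minimal WS GRAM job description looks something like the sketch below — the executable path and output filename here are placeholders of my own, not from the tutorial:

```xml
<!-- job.xml: minimal GT4 WS GRAM job description (sketch) -->
<job>
    <executable>/bin/hostname</executable>
    <stdout>${GLOBUS_USER_HOME}/hostname.out</stdout>
</job>
```

This would then be submitted with something like `globusrun-ws -submit -f job.xml`; for a quick test, `globusrun-ws -submit -c /bin/hostname` submits a single command directly.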

An example of a test program using MPI (a "greetings" program — each worker process sends a message to rank 0, which prints them all):

#include <stdio.h>
#include <string.h>
#include "mpi.h"

/* Compile with: mpicc greetings.c -o greetings
   Run with:     mpirun -np 4 ./greetings       */

int main(int argc, char* argv[])
{
    int         my_rank;       /* rank of process       */
    int         p;             /* number of processes   */
    int         source;        /* rank of sender        */
    int         dest;          /* rank of receiver      */
    int         tag = 0;       /* tag for messages      */
    char        message[100];  /* storage for message   */
    MPI_Status  status;        /* return status for     */
                               /* receive               */
    char        pname[MPI_MAX_PROCESSOR_NAME]; /* processor name */
    int         plen;          /* processor name length */

    /* Start up MPI */
    MPI_Init(&argc, &argv);

    /* Find out process rank */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    /* Find out number of processes */
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    /* Find out my node name */
    MPI_Get_processor_name(pname, &plen);

    if (my_rank != 0)
    {
        /* Create message */
        sprintf(message, "%s: Greetings from process %d!", pname, my_rank);
        dest = 0;

        /* Use strlen+1 so that '\0' gets transmitted */
        MPI_Send(message, strlen(message)+1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    }
    else
    { /* my_rank == 0 */
        printf("%s: Greetings from process %d!\n", pname, my_rank);
        for (source = 1; source < p; source++)
        {
            MPI_Recv(message, 100, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status);
            printf("%s\n", message);
        }
    }

    /* Shut down MPI */
    MPI_Finalize();
    return 0;
}
We also explored distributed resource management systems (e.g., Sun Grid Engine). I still don't understand, however, how RPC (Remote Procedure Call), MPI (Message Passing Interface) and DRM (Distributed Resource Management) interact.
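My rough picture so far is that the DRM sits above MPI: the scheduler (e.g., Sun Grid Engine) allocates slots on cluster nodes, and MPI then runs the parallel job across that allocation. A sketch of an SGE submit script for the greetings program, assuming the cluster has a parallel environment named "mpi" configured (the environment name and job name here are my own assumptions):

```shell
#!/bin/sh
#$ -N greetings     # job name
#$ -pe mpi 4        # request 4 slots from the "mpi" parallel environment
#$ -cwd             # run the job from the current working directory

# SGE sets $NSLOTS to the number of slots actually granted
mpirun -np $NSLOTS ./greetings
```

This would be submitted with `qsub greetings.sh`, with SGE deciding which nodes the four MPI processes actually land on.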

Types of cluster applications:

  • Sequential and totally uncoupled parallel applications
  • Parallel applications (using message passing or shared memory)
  • Distributed applications (e.g., P2P)

I hope that as we work more with grid-enabled applications, the grid resource management mechanisms will become clearer to me, both as a user and as an implementer.
