[antlr-interest] help: using antlr

Smiley 4321 ssmile03 at gmail.com
Sat Feb 27 11:38:45 PST 2010


Hello Nick,

Below is the mpi parallel program as an example -

-----
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int numprocs, namelength, rank;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                     /* start the MPI environment */
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);   /* total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);       /* this process's rank */
    MPI_Get_processor_name(processor_name, &namelength);

    printf("Process %d, on %s out of %d\n", rank, processor_name, numprocs);

    MPI_Finalize();                             /* shut down MPI */
    return 0;
}
-----

By referring to the definitions of the MPI environment/API calls below, we
can understand the purpose of the MPI header file and of each MPI API call
used in the code above -

*mpi.h* - the header file for MPI parallel programming API/environment
calls. MPI (Message Passing Interface) is the de facto standard
message-passing library for processes/threads to communicate with
processes/threads on other nodes, and it underpins the distributed-memory
parallel programming model. The mpi.h header declares the C API calls, e.g.
MPI_Init, MPI_Comm_size, MPI_Get_processor_name, MPI_Finalize, etc., and it
is required by any program/routine that makes MPI library calls.

The MPI environment/API calls used above are described below.

*MPI_Init* - Initializes the MPI execution environment. This function must
be called in every MPI program, before any other MPI function, and only
once. For C programs, MPI_Init may be used to pass the command-line
arguments to all processes - MPI_Init(&argc, &argv)

*MPI_Comm_size* - Determines the number of processes in the group
associated with a communicator. Generally used with the communicator
MPI_COMM_WORLD to determine the number of processes being used by the
application - MPI_Comm_size(MPI_COMM_WORLD, &numprocs)

*MPI_Comm_rank* - Determines the rank of the calling process within the
communicator. Each process is assigned a unique integer rank between 0 and
(number of processes - 1). This rank is often referred to as a task ID -
MPI_Comm_rank(MPI_COMM_WORLD, &rank)

*MPI_Get_processor_name* - Returns the processor name, along with the
length of the name - MPI_Get_processor_name(processor_name, &namelength)

*MPI_Finalize* - Terminates the MPI execution environment. This function
should be the last MPI routine called in every MPI program -
MPI_Finalize()
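For completeness, a program like this is usually built with the MPI compiler wrapper and launched with the MPI launcher. Exact command names vary by MPI distribution (mpicc/mpirun are the common defaults), and hello.c is an assumed file name for the code above:

```shell
# Compile with the MPI wrapper, which adds the mpi.h include path
# and links the MPI library automatically
mpicc hello.c -o hello

# Launch 4 processes; each prints its own "Process <rank> ..." line
mpirun -np 4 ./hello
```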
Hope it explains.

-- Regards, Smiley
-------------
On Fri, Feb 26, 2010 at 3:48 PM, Nick Vlassopoulos
<nvlassopoulos at gmail.com>wrote:

> Hello Smiley,
>
>  On Fri, Feb 26, 2010 at 10:51 AM, Smiley 4321 <ssmile03 at gmail.com> wrote:
>
>> Peter,
>>
>> I did check ANTLR documentation & it's FAQ, also performed googlian for
>> ANTLR MPI support.
>>
>> The code as written, looking if it can be understood by ANTLR by
>> performing
>> verification for the given code.
>>
>> The task is, to understand and implement a techniques where few lines of
>> MPI
>> C level parallel programming code can be understood by ANTLR.
>>
>>
> So, your task would be to parse c code and have a sort of (one or more)
> predefined symbol table(s)
> with MPI headers / functions, so that the parser figures out if a program
> uses MPI or not (which is
> slightly tricky if someone names a set of functions MPI_* on purpose)?
>
> Maybe you can explain a little bit more what the "understood" means? I.e.
> is it just to answer a "yes" or "no"
> if the program contains MPI headers/library calls?
>
>
>> I am looking to know the techniques and support for mpi programming
>> extended
>>
>> by antlr.
>>
>> Hope it clear now to understand.
>>
>> ---regards.
>>
>
>
> Nikos
>


More information about the antlr-interest mailing list