
salome-med  6.5.0
ParaMEDMEM::MPIAccessDEC Class Reference

#include <MPIAccessDEC.hxx>



Classes

struct  SendBuffStruct

Public Member Functions

 MPIAccessDEC (const ProcessorGroup &local_group, const ProcessorGroup &distant_group, bool Asynchronous=true)
 This constructor creates an MPIAccessDEC which has source_group as a working side and target_group as an idle side.
virtual ~MPIAccessDEC ()
MPIAccess * getMPIAccess ()
const MPI_Comm * getComm ()
void asynchronous (bool Asynchronous=true)
void setTimeInterpolator (TimeInterpolationMethod anInterp, double InterpPrecision=0, int n_step_before=1, int nStepAfter=1)
void setTime (double t)
void setTime (double t, double dt)
bool outOfTime (int target)
int send (void *sendbuf, int sendcount, MPI_Datatype sendtype, int target)
int recv (void *recvbuf, int recvcount, MPI_Datatype recvtype, int target)
int recv (void *recvbuf, int recvcount, MPI_Datatype recvtype, int target, int &RecvRequestId, bool Asynchronous=false)
int sendRecv (void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int target)
int allToAll (void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype)
 Sends sendcount data items from sendbuf[offset] with type sendtype to all targets of the IntraCommunicator, and receives recvcount data items into recvbuf[offset] with type recvtype from all targets of the IntraCommunicator.
int allToAllv (void *sendbuf, int *sendcounts, int *sdispls, MPI_Datatype sendtype, void *recvbuf, int *recvcounts, int *rdispls, MPI_Datatype recvtype)
 Sends sendcounts[target] data items from sendbuf[sdispls[target]] with type sendtype to all targets of the IntraCommunicator, and receives recvcounts[target] data items into recvbuf[rdispls[target]] with type recvtype from all targets of the IntraCommunicator.
int allToAllTime (void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype)
 Sends a TimeMessage to all targets of the IntraCommunicator, and receives the TimeMessages from the targets of the IntraCommunicator if necessary.
int allToAllvTime (void *sendbuf, int *sendcounts, int *sdispls, MPI_Datatype sendtype, void *recvbuf, int *recvcounts, int *rdispls, MPI_Datatype recvtype)
int checkTime (int recvcount, MPI_Datatype recvtype, int target, bool UntilEnd)
int checkSent (bool WithWait=false)
int checkFinalSent ()
int checkFinalRecv ()

Protected Member Functions

int send (void *sendbuf, int sendcount, int sendoffset, MPI_Datatype sendtype, int target, int &SendRequestId)
 Sends sendcount data items from sendbuf[offset] with type sendtype to target of the IntraCommunicator (internal protected method)
int recv (void *recvbuf, int recvcount, int recvoffset, MPI_Datatype recvtype, int target, int &RecvRequestId)
 Receives recvcount data items into recvbuf[offset] with type recvtype from target of the IntraCommunicator (internal protected method)
int sendRecv (void *sendbuf, int sendcount, int sendoffset, MPI_Datatype sendtype, void *recvbuf, int recvcount, int recvoffset, MPI_Datatype recvtype, int target, int &SendRequestId, int &RecvRequestId)
 Sends sendcount data items from sendbuf[offset] with type sendtype to target of the IntraCommunicator and receives recvcount data items into recvbuf[offset] with type recvtype from target of the IntraCommunicator (internal protected method)

Private Attributes

bool _asynchronous
MPIProcessorGroup * _MPI_union_group
TimeInterpolator * _time_interpolator
int _n_step_before
int _n_step_after
int _my_rank
int _group_size
MPIAccess * _MPI_access
double _t
double _dt
std::vector< std::vector< TimeMessage > > * _time_messages
std::vector< bool > * _out_of_time
std::vector< int > * _data_messages_recv_count
std::vector< MPI_Datatype > * _data_messages_type
std::vector< std::vector< void * > > * _data_messages
std::map< int, SendBuffStruct * > * _map_of_send_buffers

Detailed Description

Definition at line 32 of file MPIAccessDEC.hxx.


Class Documentation

struct ParaMEDMEM::MPIAccessDEC::SendBuffStruct

Definition at line 104 of file MPIAccessDEC.hxx.

Class Members
int Counter
MPI_Datatype DataType
void * SendBuffer

Constructor & Destructor Documentation

ParaMEDMEM::MPIAccessDEC::MPIAccessDEC ( const ProcessorGroup &  source_group,
const ProcessorGroup &  target_group,
bool  Asynchronous = true 
)

This constructor creates an MPIAccessDEC which has source_group as a working side and target_group as an idle side.

The constructor must be called synchronously on all processors of both processor groups.

Parameters:
    source_group    working side ProcessorGroup
    target_group    lazy side ProcessorGroup
    Asynchronous    communication mode (default: asynchronous)
    nStepBefore     number of time steps needed for the interpolation before the current time
    nStepAfter      number of time steps needed for the interpolation after the current time

Definition at line 42 of file MPIAccessDEC.cxx.

  {

    ProcessorGroup * union_group = source_group.fuse(target_group) ;  
    int i ;
    std::set<int> procs;
    for ( i = 0 ; i < union_group->size() ; i++ )
      {
        procs.insert(i) ;
      }
    MPIProcessorGroup *mpilg = (MPIProcessorGroup *)&source_group;
    _MPI_union_group = new ParaMEDMEM::MPIProcessorGroup( union_group->getCommInterface(),procs,mpilg->getWorldComm());
    delete union_group ;
    _my_rank = _MPI_union_group->myRank() ;
    _group_size = _MPI_union_group->size() ;
    _MPI_access = new MPIAccess( _MPI_union_group ) ;
    _asynchronous = Asynchronous ;
    _time_messages = new vector< vector< TimeMessage > > ;
    _time_messages->resize( _group_size ) ;
    _out_of_time = new vector< bool > ;
    _out_of_time->resize( _group_size ) ;
    _data_messages_recv_count = new vector< int > ;
    _data_messages_recv_count->resize( _group_size ) ;
    for ( i = 0 ; i < _group_size ; i++ )
      {
        (*_out_of_time)[i] = false ;
        (*_data_messages_recv_count)[i] = 0 ;
      }
    _data_messages_type = new vector< MPI_Datatype > ;
    _data_messages_type->resize( _group_size ) ;
    _data_messages = new vector< vector< void * > > ;
    _data_messages->resize( _group_size ) ;
    _time_interpolator = NULL ;
    _map_of_send_buffers = new map< int , SendBuffStruct * > ;
  }
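
As a usage illustration (not part of the generated documentation), here is a minimal sketch of creating an MPIAccessDEC. It assumes the MPIProcessorGroup(CommInterface, std::set<int>) constructor, the header names CommInterface.hxx and MPIProcessorGroup.hxx, and an arbitrary 2+3 split of MPI ranks; adapt the rank sets to the actual coupling.

  // Sketch only: ranks 0-1 form the working (source) side, ranks 2-4 the
  // idle (target) side. The constructor must be called on every rank of
  // both groups, after MPI_Init.
  #include <set>
  #include "CommInterface.hxx"
  #include "MPIProcessorGroup.hxx"
  #include "MPIAccessDEC.hxx"

  void buildDEC()
  {
    ParaMEDMEM::CommInterface interface ;
    std::set<int> source_ids ;
    std::set<int> target_ids ;
    source_ids.insert(0) ; source_ids.insert(1) ;
    target_ids.insert(2) ; target_ids.insert(3) ; target_ids.insert(4) ;
    ParaMEDMEM::MPIProcessorGroup source_group( interface , source_ids ) ;
    ParaMEDMEM::MPIProcessorGroup target_group( interface , target_ids ) ;

    // Asynchronous mode (the default): sends are posted with ISend and their
    // buffers are released later by checkSent()/checkFinalSent().
    ParaMEDMEM::MPIAccessDEC dec( source_group , target_group , true ) ;
  }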



Member Function Documentation

int ParaMEDMEM::MPIAccessDEC::allToAll ( void *  sendbuf,
int  sendcount,
MPI_Datatype  sendtype,
void *  recvbuf,
int  recvcount,
MPI_Datatype  recvtype 
)

Sends sendcount data items from sendbuf[offset] with type sendtype to all targets of the IntraCommunicator, and receives recvcount data items into recvbuf[offset] with type recvtype from all targets of the IntraCommunicator.

Definition at line 315 of file MPIAccessDEC.cxx.

  {
    if ( _time_interpolator )
      {
        return allToAllTime( sendbuf, sendcount, sendtype , recvbuf, recvcount, recvtype ) ;
      }
    int sts ;
    int target ;
    int sendoffset = 0 ;
    int recvoffset = 0 ;
    int SendRequestId ;
    int RecvRequestId ;

    //Free of SendBuffers 
    if ( _asynchronous )
      checkSent() ;

    //DoSend + DoRecv : SendRecv
    SendBuffStruct * aSendDataStruct = NULL ;
    if ( _asynchronous && sendbuf )
      {
        aSendDataStruct = new SendBuffStruct ;
        aSendDataStruct->SendBuffer = sendbuf ;
        aSendDataStruct->Counter = 0 ;
        aSendDataStruct->DataType = sendtype ;
      }
    for ( target = 0 ; target < _group_size ; target++ )
      {
        sts = sendRecv( sendbuf , sendcount , sendoffset , sendtype ,
                        recvbuf , recvcount , recvoffset , recvtype ,
                        target , SendRequestId , RecvRequestId ) ;
        if ( _asynchronous && sendbuf && sendcount )
          {
            aSendDataStruct->Counter += 1 ;
            (*_map_of_send_buffers)[ SendRequestId ] = aSendDataStruct ;
          }
        sendoffset += sendcount ;
        recvoffset += recvcount ;
      }
    if ( !_asynchronous && sendbuf )
      {
        if ( sendtype == MPI_INT )
          {
            delete [] (int *) sendbuf ;
          }
        else
          {
            delete [] (double *) sendbuf ;
          }
      }
    return sts ;
  }
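
Note that allToAll takes ownership of sendbuf: in synchronous mode the buffer is deleted right after the loop above, and in asynchronous mode it is deleted later by checkSent() once the per-target requests have completed. The caller must therefore allocate it with new[] and must not free it. A usage sketch (the group size and values are illustrative assumptions):

  #include <mpi.h>
  #include "MPIAccessDEC.hxx"

  // Illustrative exchange of one double per process of the fused group.
  void exchangeOnce( ParaMEDMEM::MPIAccessDEC & dec , int group_size )
  {
    double * sendbuf = new double[ group_size ] ;   // owned by the DEC after the call
    double * recvbuf = new double[ group_size ] ;   // stays owned by the caller
    for ( int i = 0 ; i < group_size ; i++ )
      sendbuf[ i ] = 100.0 * i ;
    dec.allToAll( sendbuf , 1 , MPI_DOUBLE , recvbuf , 1 , MPI_DOUBLE ) ;
    // ... use recvbuf ...
    delete [] recvbuf ;                             // do NOT delete sendbuf
  }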


int ParaMEDMEM::MPIAccessDEC::allToAllTime ( void *  sendbuf,
int  sendcount,
MPI_Datatype  sendtype,
void *  recvbuf,
int  recvcount,
MPI_Datatype  recvtype 
)

Sends a TimeMessage to all targets of the IntraCommunicator, and receives the TimeMessages from the targets of the IntraCommunicator if necessary.

Sends sendcount data items from sendbuf[offset] with type sendtype to all targets of the IntraCommunicator, and returns recvcount data items in recvbuf[offset] with type recvtype after an interpolation with the data received from all targets of the IntraCommunicator.

Definition at line 496 of file MPIAccessDEC.cxx.

  {
    int sts ;
    int target ;
    int sendoffset = 0 ;
    int SendTimeRequestId ;
    int SendDataRequestId ;

    if ( _time_interpolator == NULL )
      {
        return MPI_ERR_OTHER ;
      }

    //Free of SendBuffers 
    if ( _asynchronous )
      {
        checkSent() ;
      }

    //DoSend : Time + SendBuff
    SendBuffStruct * aSendTimeStruct = NULL ;
    SendBuffStruct * aSendDataStruct = NULL ;
    if ( sendbuf && sendcount )
      {
        TimeMessage * aSendTimeMessage = new TimeMessage ;
        if ( _asynchronous )
          {
            aSendTimeStruct = new SendBuffStruct ;
            aSendTimeStruct->SendBuffer = aSendTimeMessage ;
            aSendTimeStruct->Counter = 0 ;
            aSendTimeStruct->DataType = _MPI_access->timeType() ;
            aSendDataStruct = new SendBuffStruct ;
            aSendDataStruct->SendBuffer = sendbuf ;
            aSendDataStruct->Counter = 0 ;
            aSendDataStruct->DataType = sendtype ;
          }
        aSendTimeMessage->time = _t ;
        aSendTimeMessage->deltatime = _dt ;
        for ( target = 0 ; target < _group_size ; target++ )
          {
            sts = send( aSendTimeMessage , 1 , 0 , _MPI_access->timeType() , target ,
                        SendTimeRequestId ) ;
            sts = send( sendbuf , sendcount , sendoffset , sendtype , target , SendDataRequestId ) ;
            if ( _asynchronous )
              {
                aSendTimeStruct->Counter += 1 ;
                (*_map_of_send_buffers)[ SendTimeRequestId ] = aSendTimeStruct ;
                aSendDataStruct->Counter += 1 ;
                (*_map_of_send_buffers)[ SendDataRequestId ] = aSendDataStruct ;
              }
            sendoffset += sendcount ;
          }
        if ( !_asynchronous )
          {
            delete aSendTimeMessage ;
            if ( sendtype == MPI_INT )
              {
                delete [] (int *) sendbuf ;
              }
            else
              {
                delete [] (double *) sendbuf ;
              }
          }
      }

    //CheckTime + DoRecv + DoInterp
    if ( recvbuf && recvcount )
      {
        for ( target = 0 ; target < _group_size ; target++ )
          {
            int recvsize = recvcount*_MPI_access->extent( recvtype ) ;
            checkTime( recvcount , recvtype , target , false ) ;
            //===========================================================================
            //TODO : it is assumed for now that we have only 1 timestep before and after
            //===========================================================================
            if ( _time_interpolator && (*_time_messages)[target][0].time != -1 )
              {
                if ( (*_out_of_time)[target] )
                  {
                    cout << " =====================================================" << endl
                         << "Recv" << _my_rank << " <-- target " << target << " t0 "
                         << (*_time_messages)[target][0].time << " < t1 "
                         << (*_time_messages)[target][1].time << " < t* " << _t << endl
                         << " =====================================================" << endl ;
                  }
                if ( recvtype == MPI_INT )
                  {
                    _time_interpolator->doInterp( (*_time_messages)[target][0].time,
                                                  (*_time_messages)[target][1].time, _t, recvcount ,
                                                  _n_step_before, _n_step_after,
                                                  (int **) &(*_data_messages)[target][0],
                                                  (int **) &(*_data_messages)[target][1],
                                                  &((int *)recvbuf)[target*recvcount] ) ;
                  }
                else
                  {
                    _time_interpolator->doInterp( (*_time_messages)[target][0].time,
                                                  (*_time_messages)[target][1].time, _t, recvcount ,
                                                  _n_step_before, _n_step_after,
                                                  (double **) &(*_data_messages)[target][0],
                                                  (double **) &(*_data_messages)[target][1],
                                                  &((double *)recvbuf)[target*recvcount] ) ;
                  }
              }
            else
              {
                char * buffdest = (char *) recvbuf ;
                char * buffsrc = (char *) (*_data_messages)[target][1] ;
                memcpy( &buffdest[target*recvsize] , buffsrc , recvsize ) ;
              }
          }
      }

    return sts ;
  }
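
A sketch of the intended calling sequence for a time-coupled exchange follows. The step count, time step, one-value-per-target layout, and the visibility of the TimeInterpolationMethod enum values in the ParaMEDMEM namespace are assumptions of this example, not values prescribed by the API.

  #include <mpi.h>
  #include "MPIAccessDEC.hxx"

  // Time loop sketch: each step sends the local field with its time stamp
  // and receives values interpolated at the local time t.
  void timeLoop( ParaMEDMEM::MPIAccessDEC & dec , int group_size )
  {
    dec.setTimeInterpolator( ParaMEDMEM::LinearTimeInterp ) ; // 1 step before / 1 step after
    double t = 0. ;
    double dt = 0.1 ;
    int nsteps = 10 ;
    for ( int step = 0 ; step < nsteps ; step++ , t += dt )
      {
        // deltatime must be zero on the last step to mark the end of time
        dec.setTime( t , ( step == nsteps - 1 ) ? 0. : dt ) ;
        double * sendbuf = new double[ group_size ] ;   // freed by the DEC
        double * recvbuf = new double[ group_size ] ;
        // ... fill sendbuf with the field at time t ...
        dec.allToAllTime( sendbuf , 1 , MPI_DOUBLE , recvbuf , 1 , MPI_DOUBLE ) ;
        // ... recvbuf now holds values interpolated at time t ...
        delete [] recvbuf ;
      }
    dec.checkFinalSent() ;
    dec.checkFinalRecv() ;
  }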


int ParaMEDMEM::MPIAccessDEC::allToAllv ( void *  sendbuf,
int *  sendcounts,
int *  sdispls,
MPI_Datatype  sendtype,
void *  recvbuf,
int *  recvcounts,
int *  rdispls,
MPI_Datatype  recvtype 
)

Sends sendcounts[target] data items from sendbuf[sdispls[target]] with type sendtype to all targets of the IntraCommunicator, and receives recvcounts[target] data items into recvbuf[rdispls[target]] with type recvtype from all targets of the IntraCommunicator.

Definition at line 374 of file MPIAccessDEC.cxx.

  {
    if ( _time_interpolator )
      {
        return allToAllvTime( sendbuf, sendcounts, sdispls, sendtype ,
                              recvbuf, recvcounts, rdispls, recvtype ) ;
      }
    int sts ;
    int target ;
    int SendRequestId ;
    int RecvRequestId ;

    //Free of SendBuffers 
    if ( _asynchronous )
      {
        checkSent() ;
      }

    //DoSend + DoRecv : SendRecv
    SendBuffStruct * aSendDataStruct = NULL ;
    if ( _asynchronous && sendbuf )
      {
        aSendDataStruct = new SendBuffStruct ;
        aSendDataStruct->SendBuffer = sendbuf ;
        aSendDataStruct->Counter = 0 ;
        aSendDataStruct->DataType = sendtype ;
      }
    for ( target = 0 ; target < _group_size ; target++ )
      {
        if ( sendcounts[target] || recvcounts[target] )
          {
            sts = sendRecv( sendbuf , sendcounts[target] , sdispls[target] , sendtype ,
                            recvbuf , recvcounts[target] , rdispls[target] , recvtype ,
                            target , SendRequestId , RecvRequestId ) ;
            if ( _asynchronous && sendbuf && sendcounts[target])
              {
                aSendDataStruct->Counter += 1 ;
                (*_map_of_send_buffers)[ SendRequestId ] = aSendDataStruct ;
              }
          }
      }
    if ( !_asynchronous && sendbuf )
      {
        if ( sendtype == MPI_INT )
          {
            delete [] (int *) sendbuf ;
          }
        else
          {
            delete [] (double *) sendbuf ;
          }
      }
    return sts ;
  }
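
A sketch of building the count and displacement arrays follows; the per-target sizes and the symmetric recvcounts are assumptions of this example. As with allToAll, the send buffer is freed by the DEC, not by the caller.

  #include <mpi.h>
  #include "MPIAccessDEC.hxx"

  // Illustrative variable-size exchange: target+1 integers to each target,
  // with MPI_Alltoallv-style counts and displacements.
  void exchangeV( ParaMEDMEM::MPIAccessDEC & dec , int group_size )
  {
    int * sendcounts = new int[ group_size ] ;
    int * sdispls    = new int[ group_size ] ;
    int * recvcounts = new int[ group_size ] ;
    int * rdispls    = new int[ group_size ] ;
    int total_send = 0 ;
    int total_recv = 0 ;
    for ( int target = 0 ; target < group_size ; target++ )
      {
        sendcounts[ target ] = target + 1 ;
        recvcounts[ target ] = target + 1 ;       // assumption: symmetric sizes
        sdispls[ target ] = total_send ; total_send += sendcounts[ target ] ;
        rdispls[ target ] = total_recv ; total_recv += recvcounts[ target ] ;
      }
    int * sendbuf = new int[ total_send ] ;       // freed by the DEC
    int * recvbuf = new int[ total_recv ] ;
    // ... fill sendbuf ...
    dec.allToAllv( sendbuf , sendcounts , sdispls , MPI_INT ,
                   recvbuf , recvcounts , rdispls , MPI_INT ) ;
    delete [] recvbuf ;
    delete [] sendcounts ; delete [] sdispls ;
    delete [] recvcounts ; delete [] rdispls ;
  }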


int ParaMEDMEM::MPIAccessDEC::allToAllvTime ( void *  sendbuf,
int *  sendcounts,
int *  sdispls,
MPI_Datatype  sendtype,
void *  recvbuf,
int *  recvcounts,
int *  rdispls,
MPI_Datatype  recvtype 
)

Definition at line 614 of file MPIAccessDEC.cxx.

  {
    int sts ;
    int target ;
    int SendTimeRequestId ;
    int SendDataRequestId ;

    if ( _time_interpolator == NULL )
      {
        return MPI_ERR_OTHER ;
      }

    //Free of SendBuffers 
    if ( _asynchronous )
      {
        checkSent() ;
      }

    /*
      . DoSend :
      + We create a TimeMessage (look at that structure in MPI_Access).
      + If we are in asynchronous mode, we create two structures SendBuffStruct
      aSendTimeStruct and aSendDataStruct that we fill.
      + We fill the structure aSendTimeMessage with the time/deltatime of
      the current process. "deltatime" must be zero if this is the last
      time step.
      + After that for each "target", we Send the TimeMessage and the part
      of sendbuf corresponding to that target.
      + If we are in asynchronous mode, we increment the counter and we add
      aSendTimeStruct and aSendDataStruct to _MapOfSendBuffers with the
      identifiers SendTimeRequestId and SendDataRequestId returned by
      MPI_Access->Send(...).
      + And if we are in synchronous mode we delete the SendMessages.
    */
    //DoSend : Time + SendBuff
    SendBuffStruct * aSendTimeStruct = NULL ;
    SendBuffStruct * aSendDataStruct = NULL ;
    if ( sendbuf )
      {
        TimeMessage * aSendTimeMessage = new TimeMessage ;
        if ( _asynchronous )
          {
            aSendTimeStruct = new SendBuffStruct ;
            aSendTimeStruct->SendBuffer = aSendTimeMessage ;
            aSendTimeStruct->Counter = 0 ;
            aSendTimeStruct->DataType = _MPI_access->timeType() ;
            aSendDataStruct = new SendBuffStruct ;
            aSendDataStruct->SendBuffer = sendbuf ;
            aSendDataStruct->Counter = 0 ;
            aSendDataStruct->DataType = sendtype ;
          }
        aSendTimeMessage->time = _t ;
        aSendTimeMessage->deltatime = _dt ;
        for ( target = 0 ; target < _group_size ; target++ )
          {
            if ( sendcounts[target] )
              {
                sts = send( aSendTimeMessage , 1 , 0 , _MPI_access->timeType() , target ,
                            SendTimeRequestId ) ;
                sts = send( sendbuf , sendcounts[target] , sdispls[target] , sendtype , target ,
                            SendDataRequestId ) ;
                if ( _asynchronous )
                  {
                    aSendTimeStruct->Counter += 1 ;
                    (*_map_of_send_buffers)[ SendTimeRequestId ] = aSendTimeStruct ;
                    aSendDataStruct->Counter += 1 ;
                    (*_map_of_send_buffers)[ SendDataRequestId ] = aSendDataStruct ;
                  }
              }
          }
        if ( !_asynchronous )
          {
            delete aSendTimeMessage ;
            if ( sendtype == MPI_INT )
              {
                delete [] (int *) sendbuf ;
              }
            else
              {
                delete [] (double *) sendbuf ;
              }
          }
      }

    /*
      . CheckTime + DoRecv + DoInterp
      + For each target we call CheckTime
      + If there is a TimeInterpolator and if the TimeMessage of the target
      is not the first one, we call the interpolator, which returns its
      results in the part of the recv buffer corresponding to the "target".
      + Otherwise, the data received for that first time step are copied
      into the part of the recv buffer corresponding to the "target".
    */
    //CheckTime + DoRecv + DoInterp
    if ( recvbuf )
      {
        for ( target = 0 ; target < _group_size ; target++ )
          {
            if ( recvcounts[target] )
              {
                int recvsize = recvcounts[target]*_MPI_access->extent( recvtype ) ;
                checkTime( recvcounts[target] , recvtype , target , false ) ;
                //===========================================================================
                //TODO : it is assumed for now that we have only 1 timestep before and after
                //===========================================================================
                if ( _time_interpolator && (*_time_messages)[target][0].time != -1 )
                  {
                    if ( (*_out_of_time)[target] )
                      {
                        cout << " =====================================================" << endl
                             << "Recv" << _my_rank << " <-- target " << target << " t0 "
                             << (*_time_messages)[target][0].time << " < t1 "
                             << (*_time_messages)[target][1].time << " < t* " << _t << endl
                             << " =====================================================" << endl ;
                      }
                    if ( recvtype == MPI_INT )
                      {
                        _time_interpolator->doInterp( (*_time_messages)[target][0].time,
                                                      (*_time_messages)[target][1].time, _t,
                                                      recvcounts[target] , _n_step_before, _n_step_after,
                                                      (int **) &(*_data_messages)[target][0],
                                                      (int **) &(*_data_messages)[target][1],
                                                      &((int *)recvbuf)[rdispls[target]] ) ;
                      }
                    else
                      {
                        _time_interpolator->doInterp( (*_time_messages)[target][0].time,
                                                      (*_time_messages)[target][1].time, _t,
                                                      recvcounts[target] , _n_step_before, _n_step_after,
                                                      (double **) &(*_data_messages)[target][0],
                                                      (double **) &(*_data_messages)[target][1],
                                                      &((double *)recvbuf)[rdispls[target]] ) ;
                      }
                  }
                else
                  {
                    char * buffdest = (char *) recvbuf ;
                    char * buffsrc = (char *) (*_data_messages)[target][1] ;
                    memcpy( &buffdest[rdispls[target]*_MPI_access->extent( recvtype )] , buffsrc ,
                            recvsize ) ;
                  }
              }
          }
      }

    return sts ;
  }


void ParaMEDMEM::MPIAccessDEC::asynchronous ( bool  Asynchronous = true) [inline]

Definition at line 40 of file MPIAccessDEC.hxx.

{ _asynchronous = Asynchronous; }

int ParaMEDMEM::MPIAccessDEC::checkFinalRecv ( )

Definition at line 998 of file MPIAccessDEC.cxx.

  {
    int sts = MPI_SUCCESS ;
    if ( _time_interpolator )
      {
        int target ;
        for ( target = 0 ; target < _group_size ; target++ )
          {
            if ( (*_data_messages)[target][0] != NULL )
              {
                sts = checkTime( (*_data_messages_recv_count)[target] , (*_data_messages_type)[target] ,
                                 target , true ) ;
                if ( (*_data_messages_type)[target] == MPI_INT )
                  {
                    delete [] (int *) (*_data_messages)[target][0] ;
                  }
                else
                  {
                    delete [] (double *) (*_data_messages)[target][0] ;
                  }
                (*_data_messages)[target][0] = NULL ;
                if ( (*_data_messages)[target][1] != NULL )
                  {
                    if ( (*_data_messages_type)[target] == MPI_INT )
                      {
                        delete [] (int *) (*_data_messages)[target][1] ;
                      }
                    else
                      {
                        delete [] (double *) (*_data_messages)[target][1] ;
                      }
                    (*_data_messages)[target][1] = NULL ;
                  }
              }
          }
      }
    return sts ;
  }


int ParaMEDMEM::MPIAccessDEC::checkFinalSent ( ) [inline]

Definition at line 68 of file MPIAccessDEC.hxx.

{ return checkSent( true ); }


int ParaMEDMEM::MPIAccessDEC::checkSent ( bool  WithWait = false)

Definition at line 892 of file MPIAccessDEC.cxx.

  {
    int sts = MPI_SUCCESS ;
    int flag = WithWait ;
    int size = _MPI_access->sendRequestIdsSize() ;
    int * ArrayOfSendRequests = new int[ size ] ;
    int nSendRequest = _MPI_access->sendRequestIds( size , ArrayOfSendRequests ) ;
    bool SendTrace = false ;
    int i ;
    for ( i = 0 ; i < nSendRequest ; i++ )
      {
        if ( WithWait )
          {
            cout << "CheckSent" << _my_rank << " " << i << "./" << nSendRequest
                 << " SendRequestId " << ArrayOfSendRequests[i] << " MPITarget "
                 << _MPI_access->MPITarget(ArrayOfSendRequests[i]) << " MPITag "
                 << _MPI_access->MPITag(ArrayOfSendRequests[i]) << " Wait :" << endl ;
            sts = _MPI_access->wait( ArrayOfSendRequests[i] ) ;
          }
        else
          {
            sts = _MPI_access->test( ArrayOfSendRequests[i] , flag ) ;
          }
        if ( flag )
          {
            _MPI_access->deleteRequest( ArrayOfSendRequests[i] ) ;
            if ( SendTrace )
              {
                cout << "CheckSent" << _my_rank << " " << i << "./" << nSendRequest
                     << " SendRequestId " << ArrayOfSendRequests[i]
                     << " flag " << flag
                     << " Counter " << (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->Counter
                     << " DataType " << (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->DataType
                     << endl ;
              }
            (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->Counter -= 1 ;
            if ( SendTrace )
              {
                if ( (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->DataType == 
                     _MPI_access->timeType() )
                  {
                    cout << "CheckTimeSent" << _my_rank << " Request " ;
                  }
                else
                  {
                    cout << "CheckDataSent" << _my_rank << " Request " ;
                  }
                cout << ArrayOfSendRequests[i]
                     << " _map_of_send_buffers->SendBuffer "
                     << (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->SendBuffer
                     << " Counter " << (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->Counter
                     << endl ;
              }
            if ( (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->Counter  == 0 )
              {
                if ( SendTrace )
                  {
                    cout << "CheckSent" << _my_rank << " SendRequestId " << ArrayOfSendRequests[i]
                         << " Counter " << (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->Counter
                         << " flag " << flag << " SendBuffer "
                         << (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->SendBuffer
                         << " deleted. Erase in _map_of_send_buffers :" << endl ;
                  }
                if ( (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->DataType ==
                     _MPI_access->timeType() )
                  {
                    delete (TimeMessage * ) (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->SendBuffer ;
                  }
                else
                  {
                    if ( (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->DataType == MPI_INT )
                      {
                        delete [] (int *) (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->SendBuffer ;
                      }
                    else
                      {
                        delete [] (double *) (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->SendBuffer ;
                      }
                  }
                delete (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ] ;
              }
            if ( SendTrace )
              {
                cout << "CheckSent" << _my_rank << " Erase in _map_of_send_buffers SendRequestId "
                     << ArrayOfSendRequests[i] << endl ;
              }
            (*_map_of_send_buffers).erase( ArrayOfSendRequests[i] ) ;
          }
        else if ( SendTrace )
          {
            cout << "CheckSent" << _my_rank << " " << i << "./" << nSendRequest
                 << " SendRequestId " << ArrayOfSendRequests[i]
                 << " flag " << flag
                 << " Counter " << (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->Counter
                 << " DataType " << (*_map_of_send_buffers)[ ArrayOfSendRequests[i] ]->DataType
                 << endl ;
          }
      }
    if ( SendTrace )
      {
        _MPI_access->check() ;
      }
    delete [] ArrayOfSendRequests ;
    return sts ;
  }
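
In asynchronous mode a final cleanup is needed at the end of the coupling so that every posted send completes and its buffer is released; a short sketch of such a shutdown sequence, based on the method descriptions above, is:

  // End-of-coupling cleanup in asynchronous mode (sketch).
  void finalizeDEC( ParaMEDMEM::MPIAccessDEC & dec )
  {
    dec.checkFinalSent() ;   // equivalent to checkSent(true): wait for every pending send
    dec.checkFinalRecv() ;   // drain the remaining time/data messages and free them
  }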


int ParaMEDMEM::MPIAccessDEC::checkTime ( int  recvcount,
MPI_Datatype  recvtype,
int  target,
bool  UntilEnd 
)

Definition at line 795 of file MPIAccessDEC.cxx.

  {
    int sts = MPI_SUCCESS ;
    int RecvTimeRequestId ;
    int RecvDataRequestId ;
    //For now we look for _time_messages[target][0] < _t <= _time_messages[target][1]
    //===========================================================================
    //TODO : it is assumed actually that we have only 1 timestep before and after
    //       instead of _n_step_before and _n_step_after ...
    //===========================================================================
    (*_data_messages_recv_count)[target] = recvcount ;
    (*_data_messages_type)[target] = recvtype ;
    if ( (*_time_messages)[target][1].time == -1 )
      {
        (*_time_messages)[target][0] = (*_time_messages)[target][1] ;
        sts = recv( &(*_time_messages)[target][1] , 1 , _MPI_access->timeType() ,
                    target , RecvTimeRequestId ) ;
        (*_data_messages)[target][0] = (*_data_messages)[target][1] ;
        if ( recvtype == MPI_INT )
          {
            (*_data_messages)[target][1] = new int[recvcount] ;
          }
        else
          {
            (*_data_messages)[target][1] = new double[recvcount] ;
          }
        sts = recv( (*_data_messages)[target][1] , recvcount , recvtype , target ,
                    RecvDataRequestId ) ;
      }
    else
      {
        while ( ( _t > (*_time_messages)[target][1].time || UntilEnd ) &&
                (*_time_messages)[target][1].deltatime != 0 )
          {
            (*_time_messages)[target][0] = (*_time_messages)[target][1] ;
            sts = recv( &(*_time_messages)[target][1] , 1 , _MPI_access->timeType() ,
                        target , RecvTimeRequestId ) ;
            if ( UntilEnd )
              {
                cout << "CheckTime" << _my_rank << " TimeMessage target " << target
                     << " RecvTimeRequestId " << RecvTimeRequestId << " MPITag "
                     << _MPI_access->recvMPITag(target) << endl ;
              }
            if ( recvtype == MPI_INT )
              {
                delete [] (int *) (*_data_messages)[target][0] ;
              }
            else
              {
                delete [] (double *) (*_data_messages)[target][0] ;
              }
            (*_data_messages)[target][0] = (*_data_messages)[target][1] ;
            if ( recvtype == MPI_INT )
              {
                (*_data_messages)[target][1] = new int[recvcount] ;
              }
            else
              {
                (*_data_messages)[target][1] = new double[recvcount] ;
              }
            sts = recv( (*_data_messages)[target][1] , recvcount , recvtype , target ,
                        RecvDataRequestId ) ;
            if ( UntilEnd )
              {
                cout << "CheckTime" << _my_rank << " DataMessage target " << target
                     << " RecvDataRequestId " << RecvDataRequestId << " MPITag "
                     << _MPI_access->recvMPITag(target) << endl ;
              }
          }

        if ( _t > (*_time_messages)[target][0].time &&
             _t <= (*_time_messages)[target][1].time )
          {
          }
        else
          {
            (*_out_of_time)[target] = true ;
          }
      }
    return sts ;
  }
const MPI_Comm* ParaMEDMEM::MPIAccessDEC::getComm ( ) [inline]

Definition at line 39 of file MPIAccessDEC.hxx.

{ return _MPI_union_group->getComm(); }


MPIAccess * ParaMEDMEM::MPIAccessDEC::getMPIAccess ( ) [inline]

Definition at line 38 of file MPIAccessDEC.hxx.

{ return _MPI_access; }


bool ParaMEDMEM::MPIAccessDEC::outOfTime ( int  target) [inline]

Definition at line 46 of file MPIAccessDEC.hxx.

{ return (*_out_of_time)[target]; }
int ParaMEDMEM::MPIAccessDEC::recv ( void *  recvbuf,
int  recvcount,
MPI_Datatype  recvtype,
int  target 
) [inline]

Definition at line 132 of file MPIAccessDEC.hxx.

  {
    int RecvRequestId;
    int sts;
    if ( _asynchronous )
      sts = _MPI_access->IRecv( recvbuf , recvcount , recvtype , target , RecvRequestId );
    else
      sts = _MPI_access->recv( recvbuf , recvcount , recvtype , target ,  RecvRequestId );
    return sts;
  }


int ParaMEDMEM::MPIAccessDEC::recv ( void *  recvbuf,
int  recvcount,
MPI_Datatype  recvtype,
int  target,
int &  RecvRequestId,
bool  Asynchronous = false 
) [inline]

Definition at line 143 of file MPIAccessDEC.hxx.

  {
    int sts;
    if ( Asynchronous )
      sts = _MPI_access->IRecv( recvbuf , recvcount , recvtype , target ,
                                RecvRequestId );
    else
      sts = _MPI_access->recv( recvbuf , recvcount , recvtype , target ,
                               RecvRequestId );
    return sts;
  }


int ParaMEDMEM::MPIAccessDEC::recv ( void *  recvbuf,
int  recvcount,
int  offset,
MPI_Datatype  recvtype,
int  target,
int &  RecvRequestId 
) [protected]

Receives recvcount data items into recvbuf[offset] with type recvtype from target of the IntraCommunicator (internal protected method)

Returns the request identifier RecvRequestId

Definition at line 184 of file MPIAccessDEC.cxx.

  {
    int sts ;
    if ( _asynchronous )
      {
        if ( recvtype == MPI_INT )
          {
            sts = _MPI_access->IRecv( &((int *) recvbuf)[offset] , recvcount , recvtype ,
                                      target , RecvRequestId ) ;
          }
        else
          {
            sts = _MPI_access->IRecv( &((double *) recvbuf)[offset] , recvcount , recvtype ,
                                      target , RecvRequestId ) ;
          }
      }
    else
      {
        if ( recvtype == MPI_INT )
          {
            sts = _MPI_access->recv( &((int *) recvbuf)[offset] , recvcount , recvtype ,
                                     target , RecvRequestId ) ;
          }
        else
          {
            sts = _MPI_access->recv( &((double *) recvbuf)[offset] , recvcount , recvtype ,
                                     target , RecvRequestId ) ;
          }
      }
    return sts ;
  }
int ParaMEDMEM::MPIAccessDEC::send ( void *  sendbuf,
int  sendcount,
MPI_Datatype  sendtype,
int  target 
) [inline]

Definition at line 113 of file MPIAccessDEC.hxx.

  {
    int SendRequestId;
    int sts;
    if ( _asynchronous )
      {
        sts = _MPI_access->ISend( sendbuf , sendcount , sendtype , target ,
                                  SendRequestId );
      }
    else
      {
        sts = _MPI_access->send( sendbuf , sendcount , sendtype , target ,
                                 SendRequestId );
        if ( sts == MPI_SUCCESS )
          free( sendbuf );
      }
    return sts;
  }


int ParaMEDMEM::MPIAccessDEC::send ( void *  sendbuf,
int  sendcount,
int  offset,
MPI_Datatype  sendtype,
int  target,
int &  SendRequestId 
) [protected]

Sends sendcount data items from sendbuf[offset] with type sendtype to target of the IntraCommunicator (internal protected method)

Returns the request identifier SendRequestId

Definition at line 144 of file MPIAccessDEC.cxx.

  {
    int sts ;
    if ( _asynchronous )
      {
        if ( sendtype == MPI_INT )
          {
            sts = _MPI_access->ISend( &((int *) sendbuf)[offset] , sendcount , sendtype ,
                                      target , SendRequestId ) ;
          }
        else
          {
            sts = _MPI_access->ISend( &((double *) sendbuf)[offset] , sendcount , sendtype ,
                                      target , SendRequestId ) ;
          }
      }
    else
      {
        if ( sendtype == MPI_INT )
          {
            sts = _MPI_access->send( &((int *) sendbuf)[offset] , sendcount , sendtype ,
                                     target , SendRequestId ) ;
          }
        else
          {
            sts = _MPI_access->send( &((double *) sendbuf)[offset] , sendcount , sendtype ,
                                     target , SendRequestId ) ;
          }
      }
    return sts ;
  }
int ParaMEDMEM::MPIAccessDEC::sendRecv ( void *  sendbuf,
int  sendcount,
MPI_Datatype  sendtype,
void *  recvbuf,
int  recvcount,
MPI_Datatype  recvtype,
int  target 
) [inline]

Definition at line 156 of file MPIAccessDEC.hxx.

  {
    int SendRequestId;
    int RecvRequestId;
    int sts;
    if ( _asynchronous )
      sts = _MPI_access->ISendRecv( sendbuf , sendcount , sendtype , target ,
                                    SendRequestId ,
                                    recvbuf , recvcount , recvtype , target ,
                                    RecvRequestId );
    else
      sts = _MPI_access->sendRecv( sendbuf , sendcount , sendtype , target ,
                                   SendRequestId ,
                                   recvbuf , recvcount , recvtype , target ,
                                   RecvRequestId );
    return sts;
  }


int ParaMEDMEM::MPIAccessDEC::sendRecv ( void *  sendbuf,
int  sendcount,
int  sendoffset,
MPI_Datatype  sendtype,
void *  recvbuf,
int  recvcount,
int  recvoffset,
MPI_Datatype  recvtype,
int  target,
int &  SendRequestId,
int &  RecvRequestId 
) [protected]

Sends sendcount data items from sendbuf[offset] with type sendtype to target of the IntraCommunicator and receives recvcount data items into recvbuf[offset] with type recvtype from target of the IntraCommunicator (internal protected method)

Returns the request identifier SendRequestId Returns the request identifier RecvRequestId

Definition at line 226 of file MPIAccessDEC.cxx.

  {
    int sts ;
    if ( _asynchronous )
      {
        if ( sendtype == MPI_INT )
          {
            if ( recvtype == MPI_INT )
              {
                sts = _MPI_access->ISendRecv( &((int *) sendbuf)[sendoffset] , sendcount ,
                                              sendtype , target , SendRequestId ,
                                              &((int *) recvbuf)[recvoffset] , recvcount ,
                                              recvtype , target , RecvRequestId ) ;
              }
            else
              {
                sts = _MPI_access->ISendRecv( &((int *) sendbuf)[sendoffset] , sendcount ,
                                              sendtype , target , SendRequestId ,
                                              &((double *) recvbuf)[recvoffset] ,
                                              recvcount , recvtype , target , RecvRequestId ) ;
              }
          }
        else
          {
            if ( recvtype == MPI_INT )
              {
                sts = _MPI_access->ISendRecv( &((double *) sendbuf)[sendoffset] , sendcount ,
                                              sendtype , target , SendRequestId ,
                                              &((int *) recvbuf)[recvoffset] ,
                                              recvcount , recvtype , target , RecvRequestId ) ;
              }
            else
              {
                sts = _MPI_access->ISendRecv( &((double *) sendbuf)[sendoffset] , sendcount ,
                                              sendtype , target , SendRequestId ,
                                              &((double *) recvbuf)[recvoffset] ,
                                              recvcount , recvtype , target , RecvRequestId ) ;
              }
          }
      }
    else
      {
        if ( sendtype == MPI_INT )
          {
            if ( recvtype == MPI_INT )
              {
                sts = _MPI_access->sendRecv( &((int *) sendbuf)[sendoffset] , sendcount ,
                                             sendtype , target , SendRequestId ,
                                             &((int *) recvbuf)[recvoffset] , recvcount ,
                                             recvtype , target , RecvRequestId ) ;
              }
            else
              {
                sts = _MPI_access->sendRecv( &((int *) sendbuf)[sendoffset] , sendcount ,
                                             sendtype , target , SendRequestId ,
                                             &((double *) recvbuf)[recvoffset] ,
                                             recvcount , recvtype , target , RecvRequestId ) ;
              }
          }
        else
          {
            if ( recvtype == MPI_INT )
              {
                sts = _MPI_access->sendRecv( &((double *) sendbuf)[sendoffset] , sendcount ,
                                             sendtype , target , SendRequestId ,
                                             &((int *) recvbuf)[recvoffset] ,
                                             recvcount , recvtype , target , RecvRequestId ) ;
              }
            else
              {
                sts = _MPI_access->sendRecv( &((double *) sendbuf)[sendoffset] , sendcount ,
                                             sendtype , target , SendRequestId ,
                                             &((double *) recvbuf)[recvoffset] ,
                                             recvcount , recvtype , target , RecvRequestId ) ;
              }
          }
      }
    return sts ;
  }
void ParaMEDMEM::MPIAccessDEC::setTime ( double  t) [inline]

Definition at line 44 of file MPIAccessDEC.hxx.

{ _t = t; _dt = -1; }


void ParaMEDMEM::MPIAccessDEC::setTime ( double  t,
double  dt 
) [inline]

Definition at line 45 of file MPIAccessDEC.hxx.

{ _t = t; _dt = dt; }
void ParaMEDMEM::MPIAccessDEC::setTimeInterpolator ( TimeInterpolationMethod  aTimeInterp,
double  InterpPrecision = 0,
int  nStepBefore = 1,
int  nStepAfter = 1 
)

Definition at line 102 of file MPIAccessDEC.cxx.

  {
    if ( _time_interpolator )
      delete _time_interpolator ;
    switch ( aTimeInterp )
      {
      case WithoutTimeInterp :
        _time_interpolator = NULL ;
        _n_step_before = 0 ;
        _n_step_after = 0 ;
        break ;
      case LinearTimeInterp :
        _time_interpolator = new LinearTimeInterpolator( InterpPrecision , nStepBefore ,
                                                         nStepAfter ) ;
        _n_step_before = nStepBefore ;
        _n_step_after = nStepAfter ;
        int i ;
        for ( i = 0 ; i < _group_size ; i++ )
          {
            (*_time_messages)[ i ].resize( _n_step_before + _n_step_after ) ;
            (*_data_messages)[ i ].resize( _n_step_before + _n_step_after ) ;
            int j ;
            for ( j = 0 ; j < _n_step_before + _n_step_after ; j++ )
              {
                (*_time_messages)[ i ][ j ].time = -1 ;
                (*_time_messages)[ i ][ j ].deltatime = -1 ;
                (*_data_messages)[ i ][ j ] = NULL ;
              }
          }
        break ;
      }
  }
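
For illustration, enabling and disabling interpolation on an existing MPIAccessDEC object (dec, from the sketches above) might look as follows; it assumes the TimeInterpolationMethod enum values used in the switch above are visible in the ParaMEDMEM namespace.

  // Enable linear time interpolation with 1 step before / 1 step after.
  dec.setTimeInterpolator( ParaMEDMEM::LinearTimeInterp , 0 , 1 , 1 ) ;

  // Switch interpolation off again; allToAll() then performs a plain exchange.
  dec.setTimeInterpolator( ParaMEDMEM::WithoutTimeInterp ) ;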



Member Data Documentation

bool ParaMEDMEM::MPIAccessDEC::_asynchronous [private]

Definition at line 81 of file MPIAccessDEC.hxx.

std::vector< std::vector< void * > >* ParaMEDMEM::MPIAccessDEC::_data_messages [private]

Definition at line 102 of file MPIAccessDEC.hxx.

std::vector< int >* ParaMEDMEM::MPIAccessDEC::_data_messages_recv_count [private]

Definition at line 100 of file MPIAccessDEC.hxx.

std::vector< MPI_Datatype >* ParaMEDMEM::MPIAccessDEC::_data_messages_type [private]

Definition at line 101 of file MPIAccessDEC.hxx.

double ParaMEDMEM::MPIAccessDEC::_dt [private]

Definition at line 94 of file MPIAccessDEC.hxx.

int ParaMEDMEM::MPIAccessDEC::_group_size [private]

Definition at line 89 of file MPIAccessDEC.hxx.

std::map< int, SendBuffStruct * >* ParaMEDMEM::MPIAccessDEC::_map_of_send_buffers [private]

Definition at line 110 of file MPIAccessDEC.hxx.

MPIAccess* ParaMEDMEM::MPIAccessDEC::_MPI_access [private]

Definition at line 90 of file MPIAccessDEC.hxx.

MPIProcessorGroup* ParaMEDMEM::MPIAccessDEC::_MPI_union_group [private]

Definition at line 82 of file MPIAccessDEC.hxx.

int ParaMEDMEM::MPIAccessDEC::_my_rank [private]

Definition at line 88 of file MPIAccessDEC.hxx.

int ParaMEDMEM::MPIAccessDEC::_n_step_after [private]

Definition at line 86 of file MPIAccessDEC.hxx.

int ParaMEDMEM::MPIAccessDEC::_n_step_before [private]

Definition at line 85 of file MPIAccessDEC.hxx.

std::vector< bool >* ParaMEDMEM::MPIAccessDEC::_out_of_time [private]

Definition at line 99 of file MPIAccessDEC.hxx.

double ParaMEDMEM::MPIAccessDEC::_t [private]

Definition at line 93 of file MPIAccessDEC.hxx.

TimeInterpolator* ParaMEDMEM::MPIAccessDEC::_time_interpolator [private]

Definition at line 84 of file MPIAccessDEC.hxx.

std::vector< std::vector< TimeMessage > >* ParaMEDMEM::MPIAccessDEC::_time_messages [private]

Definition at line 97 of file MPIAccessDEC.hxx.


The documentation for this class was generated from the following files:

MPIAccessDEC.hxx
MPIAccessDEC.cxx