MercuryDPM  Trunk
MPIContainer Class Reference [final]

This class contains all information and functions required for communication between processors. More...

#include <MpiContainer.h>

Public Member Functions

void initialiseMercuryMPITypes (const SpeciesHandler &speciesHandler)
 Creates the MPI types required for communication of Mercury data through the MPI interface. More...
 
void sync ()
 Process all pending asynchronous communication requests before continuing. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type 	send (T &t, int to, int tag)
 	Asynchronously send a scalar to some other processor. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type 	send (T *t, int count, int to, int tag)
 	Asynchronously send a list of scalars to some other processor.
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type 	receive (T &t, int from, int tag)
 	Asynchronously receive a scalar from some other processor. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type 	receive (T *t, int count, int from, int tag)
 	Asynchronously receive a list of scalars from some other processor.
 
template<typename T >
void 	send (T *t, MercuryMPIType type, int count, int to, int tag)
 	Asynchronously send a list of MercuryMPIType objects to some other processor. More...
 
template<typename T >
void 	receive (T *t, MercuryMPIType type, int count, int from, int tag)
 	Asynchronously receive a list of MercuryMPIType objects from some other processor. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type 	directSend (T &t, int count, int to, int tag)
 	Synchronously send a list of scalars to another processor; the data must be received directly or the program will stall. More...
 
template<typename T >
void 	directSend (T *t, MercuryMPIType type, int count, int to, int tag)
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type 	directReceive (T &t, int count, int from, int tag)
 	Synchronously receive a list of scalars from another processor; if the matching send command has not been issued, this function will stall the program. More...
 
template<typename T >
void 	directReceive (T *t, MercuryMPIType type, int count, int from, int tag)
 
template<typename T >
void 	gather (T &send_t, T *receive_t)
 	Gathers a scalar from all processors into a vector of scalars on the root. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type 	broadcast (T &t, int fromProcessor=0)
 	Broadcasts a scalar from the root to all other processors. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type 	broadcast (T *t, int size, int fromProcessor)
 	Broadcasts an array of scalars from the root to all other processors. More...
 
template<typename T >
void 	broadcast (T *t, MercuryMPIType type, int fromProcessor=0)
 	Broadcasts a MercuryMPIType to all other processors. More...
 
std::size_t 	getProcessorID ()
 	Obtain the current processor ID. More...
 
std::size_t 	getNumberOfProcessors () const
 	Get the total number of processors participating in this simulation. More...
 
template<typename T >
void 	createMercuryMPIType (T t, MercuryMPIType type)
 	Creates an MPI data type for communicating objects of type T. More...
 
void 	deleteMercuryMPITypes ()
 	Deletes the MercuryMPITypes. More...
 
 	MPIContainer (const MPIContainer &orig)=delete
 	Copy constructor is disabled, to enforce the singleton pattern. More...
 

Static Public Member Functions

static MPIContainer & 	Instance ()
 	Fetch the instance to be used for communication. More...
 

Private Member Functions

 MPIContainer ()
 Constructor. More...
 

Private Attributes

std::size_t processorID_
 The ID of the processor this class is running on. More...
 
std::size_t numberOfProcessors_
 The total number of processors in the communicator. More...
 

Detailed Description

This class contains all information and functions required for communication between processors.

This class contains a newly created MPI communicator used to communicate between processors. The reason MPI_COMM_WORLD is not used directly is that other libraries might also be communicating in parallel, and we do not want to interfere with those communications. The class follows a singleton pattern, restricting it to one instance. It furthermore keeps track of the number of processors, the processor IDs, and the pending asynchronous send/receive requests.

Definition at line 125 of file MpiContainer.h.
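
A minimal usage sketch (hypothetical example code, not part of the class itself; it assumes an MPI-enabled build in which MPI has already been initialised): fetch the singleton and query the parallel layout.

#include <cstddef>
#include <iostream>
#include "MpiContainer.h"

// Hypothetical sketch: report this processor's place in the communicator.
void reportRank()
{
    MPIContainer& communicator = MPIContainer::Instance();
    std::size_t rank = communicator.getProcessorID();
    std::size_t numberOfProcessors = communicator.getNumberOfProcessors();
    std::cout << "Processor " << rank << " of " << numberOfProcessors << std::endl;
}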

Constructor & Destructor Documentation

MPIContainer::MPIContainer ( const MPIContainer &  orig )
delete

Copy constructor is disabled, to enforce a singleton pattern.

MPIContainer::MPIContainer ( )
private

Constructor.

Initialise the communicator after MPI has been initialised.

Definition at line 43 of file MpiContainer.cc.

References FATAL, logger, numberOfProcessors_, and processorID_.

44 {
45 #ifdef MERCURY_USE_MPI
46  if(!MPI::Is_initialized())
47  {
48  logger(FATAL,"MPI should be initialised before calling the MPIContainer constructor");
49  }
50  //A personal communicator will be created to ensure we don't meddle with communicators of other libraries
51  MPI::Group groupID = MPI::COMM_WORLD.Get_group();
52  communicator_ = MPI::COMM_WORLD.Create( groupID );
53  processorID_ = communicator_.Get_rank();
54  numberOfProcessors_ = communicator_.Get_size();
55 #else
56  numberOfProcessors_ = 1;
57  processorID_ = 0;
58 #endif
59 }

Member Function Documentation

template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::broadcast ( T &  t,
int  fromProcessor = 0 
)
inline

Broadcasts a scalar from the root to all other processors.

Parameters
[in,out]	t	scalar data that is sent by the root and received by the other processors

Definition at line 429 of file MpiContainer.h.

Referenced by ParticleHandler::addObject(), RandomClusterInsertionBoundary::checkBoundaryBeforeTimeStep(), InsertionBoundary::checkBoundaryBeforeTimeStep(), DPMBase::checkParticleForInteraction(), PeriodicBoundaryHandler::communicateTargetDomains(), SubcriticalMaserBoundaryTEST::copyExtraParticles(), SubcriticalMaserBoundaryTEST::extendBottom(), ParticleHandler::getNumberOfRealObjects(), DPMBase::mpiIsInCommunicationZone(), RNG::randomise(), DPMBase::synchroniseParticle(), and PeriodicBoundaryHandler::updateParticleStatus().

430  {
431 #ifdef MERCURY_USE_MPI
432  communicator_.Bcast(&t,1,Detail::toMPIType(t),fromProcessor);
433 #endif
434  }
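A hedged usage sketch (hypothetical code, assuming MPI is initialised): only the root knows the value beforehand, and every processor must call broadcast().

#include "MpiContainer.h"

// Hypothetical sketch: distribute a value chosen on the root to all ranks.
void shareTimeStep()
{
    MPIContainer& communicator = MPIContainer::Instance();
    double timeStep = 0.0;
    if (communicator.getProcessorID() == 0)
    {
        timeStep = 1e-5; // only the root knows this value beforehand
    }
    communicator.broadcast(timeStep); // fromProcessor defaults to 0
    // timeStep now equals 1e-5 on every processor
}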
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::broadcast ( T *  t,
int  size,
int  fromProcessor 
)
inline

Broadcasts an array of scalars from the root to all other processors.

Parameters
[in,out]	t	scalar data that is sent by the root and received by the other processors

Definition at line 442 of file MpiContainer.h.

443  {
444 #ifdef MERCURY_USE_MPI
445  communicator_.Bcast((void *)t,size, Detail::toMPIType(t[0]),fromProcessor);
446 #endif
447  }
template<typename T >
void MPIContainer::broadcast ( T *  t,
MercuryMPIType  type,
int  fromProcessor = 0 
)
inline

Broadcasts a MercuryMPIType to all other processors.

Parameters
[in,out]	t	MercuryMPIType data that is sent by the root and received by the other processors

Definition at line 454 of file MpiContainer.h.

455  {
456 #ifdef MERCURY_USE_MPI
457  communicator_.Bcast((void *)t,1,dataTypes_[type],fromProcessor);
458 #endif
459  }
template<typename T >
void MPIContainer::createMercuryMPIType ( T  t,
MercuryMPIType  type 
)
inline

Creates an MPI data type for communicating objects of type T.

Creates the MPI type that tells the MPI interface what a data object of type T looks like.

NOTE: The current manner of creating MPI data types might not be compatible when computing on different computers with different architectures and compilers. The "padding" that different compilers and computers add to the class might differ, which has a huge effect on the MPI data type because it needs to be consistent across all computers. To resolve this, one could explicitly create a consistent data type for all required types. I wish good luck to the person that needs it, because it is a lot of typing work and for now I am not doing it. /MX

Definition at line 551 of file MpiContainer.h.

Referenced by Interaction< NormalForceInteraction, FrictionForceInteraction, AdhesiveForceInteraction >::createMPIType(), and initialiseMercuryMPITypes().

552  {
553 #ifdef MERCURY_USE_MPI
554  MPI_Datatype MPIType;
555  MPI_Type_contiguous(sizeof(T), MPI_BYTE, &MPIType);
556  MPI_Type_commit(&MPIType);
557  dataTypes_.push_back(MPIType);
558 #endif
559  }
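For illustration, the same byte-wise technique in plain MPI C calls (a standalone sketch; Demo and makeByteType are hypothetical names, not MercuryDPM API):

#include <mpi.h>

// A POD struct is described to MPI as sizeof(T) raw bytes. This only works
// when sender and receiver agree on the memory layout (same architecture
// and compiler), which is exactly the padding caveat noted above.
struct Demo
{
    int id;
    double value;
};

MPI_Datatype makeByteType()
{
    MPI_Datatype type;
    MPI_Type_contiguous(sizeof(Demo), MPI_BYTE, &type);
    MPI_Type_commit(&type);
    return type; // release with MPI_Type_free when no longer needed
}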
void MPIContainer::deleteMercuryMPITypes ( )
inline

Deletes the MercuryMPITypes.

Definition at line 565 of file MpiContainer.h.

Referenced by initialiseMPI().

566  {
567 #ifdef MERCURY_USE_MPI
568  for(MPI_Datatype type : dataTypes_)
569  {
570  MPI_Type_free(&type);
571  }
572 #endif
573  }
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::directReceive ( T &  t,
int  count,
int  from,
int  tag 
)
inline

Synchronously receive a list of scalars from another processor; if the matching send command has not been issued, this function will stall the program.

Parameters
[in,out]	t	the data, a list of scalars
[in]	count	the number of scalars to be received
[in]	from	the processor that sends the information
[in]	tag	a unique identifier that corresponds with a send command by the sending processor

Definition at line 369 of file MpiContainer.h.

References processorID_.

370  {
371 #if MERCURY_ASSERTS
372  if (from == processorID_)
373  {
374  std::cout << "[MPI FATAL]: Receiving data from self!" << std::endl;
375  std::exit(-1);
376  }
377 
378  if (count == 0)
379  {
380  std::cout << "[MPI ERROR]: Receiving zero data" << std::endl;
381  }
382 #endif
383 #ifdef MERCURY_USE_MPI
384  communicator_.Recv(&t, count, Detail::toMPIType(t), from, tag);
385 #endif
386  }
template<typename T >
void MPIContainer::directReceive ( T *  t,
MercuryMPIType  type,
int  count,
int  from,
int  tag 
)
inline

Definition at line 389 of file MpiContainer.h.

References processorID_.

390  {
391 #if MERCURY_ASSERTS
392  if (from == processorID_)
393  {
394  std::cout << "[MPI FATAL]: Receiving data from self!" << std::endl;
395  std::exit(-1);
396  }
397 
398  if (count == 0)
399  {
400  std::cout << "[MPI ERROR]: Receiving zero data" << std::endl;
401  }
402 #endif
403 #ifdef MERCURY_USE_MPI
404  communicator_.Recv(t, count, dataTypes_[type], from, tag);
405 #endif
406  }
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::directSend ( T &  t,
int  count,
int  to,
int  tag 
)
inline

Synchronously send a list of scalars to another processor; the data must be received directly or the program will stall.

Parameters
[in,out]	t	the data, a list of scalars
[in]	count	the number of scalars to be sent
[in]	to	the processor that the data is being sent to
[in]	tag	a unique identifier that corresponds with a receive command by the receiving processor

Definition at line 319 of file MpiContainer.h.

References processorID_.

320  {
321 #if MERCURY_ASSERTS
322  if (to == processorID_)
323  {
324  std::cout << "[MPI FATAL]: Sending data to self!" << std::endl;
325  std::exit(-1);
326  }
327 
328  if (count == 0)
329  {
330  std::cout << "[MPI ERROR]: Sending zero data" << std::endl;
331  }
332 #endif
333 #ifdef MERCURY_USE_MPI
334  communicator_.Ssend(&t, count, Detail::toMPIType(t), to, tag);
335 #endif
336  }
template<typename T >
void MPIContainer::directSend ( T *  t,
MercuryMPIType  type,
int  count,
int  to,
int  tag 
)
inline

Definition at line 340 of file MpiContainer.h.

References processorID_.

341  {
342 #if MERCURY_ASSERTS
343  if (to == processorID_)
344  {
345  std::cout << "[MPI FATAL]: Sending data to self!" << std::endl;
346  std::exit(-1);
347  }
348 
349  if (count == 0)
350  {
351  std::cout << "[MPI ERROR]: Sending zero data" << std::endl;
352  }
353 #endif
354 #ifdef MERCURY_USE_MPI
355  communicator_.Ssend(t, count, dataTypes_[type], to, tag);
356 #endif
357  }
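A hedged usage sketch of the blocking pair (hypothetical code for a run with at least two processors): because directSend() uses a synchronous send, the matching directReceive() must be issued on the other processor or both will stall.

#include "MpiContainer.h"

// Hypothetical sketch: rank 0 sends three doubles to rank 1, blocking.
void exchangeBlocking()
{
    MPIContainer& communicator = MPIContainer::Instance();
    double buffer[3] = {1.0, 2.0, 3.0};
    if (communicator.getProcessorID() == 0)
    {
        communicator.directSend(buffer[0], 3, 1, 42); // to rank 1, tag 42
    }
    else if (communicator.getProcessorID() == 1)
    {
        communicator.directReceive(buffer[0], 3, 0, 42); // from rank 0, tag 42
    }
}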
template<typename T >
void MPIContainer::gather ( T &  send_t,
T *  receive_t 
)
inline

Gathers a scalar from all processors into a vector of scalars on the root.

When a single processor needs to know a certain value on all processors, this function gathers them together at the root processor in an array of the size of the communicator.

Parameters
[in]	send_t	the data that is being sent, a scalar value
[out]	receive_t	the data that is received by the root, an array of scalars

Definition at line 416 of file MpiContainer.h.

Referenced by DPMBase::checkParticleForInteraction(), and DPMBase::mpiIsInCommunicationZone().

417  {
418 #ifdef MERCURY_USE_MPI
419  communicator_.Gather(&send_t, 1, Detail::toMPIType(send_t), receive_t, 1, Detail::toMPIType(send_t), 0);
420 #endif
421  }
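A hedged usage sketch (hypothetical code): each processor contributes one value; after the call only the root's array is filled.

#include <vector>
#include "MpiContainer.h"

// Hypothetical sketch: gather one scalar per rank into an array on the root.
void gatherCounts(unsigned localCount)
{
    MPIContainer& communicator = MPIContainer::Instance();
    std::vector<unsigned> allCounts(communicator.getNumberOfProcessors());
    communicator.gather(localCount, allCounts.data());
    if (communicator.getProcessorID() == 0)
    {
        // allCounts[i] now holds the value contributed by processor i
    }
}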
std::size_t MPIContainer::getNumberOfProcessors ( ) const

Get the total number of processors participating in this simulation.

std::size_t MPIContainer::getProcessorID ( )

Obtain the current processor ID.

reduce: Reduces a scalar on all processors to one scalar on a target processor.

A scalar defined on all processors is reduced to one number by an operation; examples of operations are MPI::MAX, MPI::MIN and MPI::SUM, giving the maximum, minimum or the summation of the given scalars. The resulting reduced scalar is then sent to the target processor ID; generally this is the root processor 0.

Parameters
[in,out]	t	scalar that needs to be collected and reduced to one scalar
[in]	operation	operation that is performed on the collected scalars
[in]	id	optional input, the receiving processor of the reduced scalar

allReduce: Reduces a scalar on all processors by a given MPI operation.

A local scalar on all processors is reduced to one scalar as output; the reduction follows the given operation rule.

Parameters
[in]	send_t	the scalar that is sent to be reduced
[out]	receive_t	the reduced scalar
[in]	operation	the operation that is performed on all local numbers to reduce them to a single number

allGather: takes a (different) scalar from each processor and returns a vector with all scalars.

Sometimes it is required to share a local scalar, such as the number of particles, with all other processors. With allGather, every processor ends up knowing the local scalar of all other processors.

Parameters
[in]	send_t	the scalar that is sent to all other processors
[in]	send_count	the number of scalars sent to other processors
[in,out]	receive_t	a vector of scalars that contains the scalars from all other processors
[in]	receive_count	the number of scalars that is received from each processor

Get the unique identifier associated with this processor.

Returns
Returns the processor ID in the communication group
Definition at line 109 of file MpiContainer.cc.

References processorID_.

Referenced by ParticleHandler::addGhostObject(), ParticleHandler::addObject(), CGHandler::evaluateDataFiles(), CGHandler::evaluateRestartFiles(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), printError(), printInfo(), printMessage(), DPMBase::printTime(), DPMBase::synchroniseParticle(), and PeriodicBoundaryHandler::updateParticleStatus().

110 {
111  return processorID_;
112 }
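Together with getNumberOfProcessors(), the ID is typically used to partition work; a minimal sketch (hypothetical code):

#include <cstddef>
#include "MpiContainer.h"

// Hypothetical sketch: give each processor a contiguous chunk of nTasks.
void processMyChunk(std::size_t nTasks)
{
    MPIContainer& communicator = MPIContainer::Instance();
    std::size_t rank = communicator.getProcessorID();
    std::size_t nProc = communicator.getNumberOfProcessors();
    std::size_t begin = rank * nTasks / nProc;
    std::size_t end = (rank + 1) * nTasks / nProc;
    for (std::size_t i = begin; i < end; ++i)
    {
        // handle task i locally
    }
}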
void MPIContainer::initialiseMercuryMPITypes ( const SpeciesHandler &  speciesHandler )

Creates the MPI types required for communication of Mercury data through the MPI interface.

Definition at line 70 of file MpiContainer.cc.

References createMercuryMPIType(), BaseInteraction::createMPIType(), BaseSpecies::deleteEmptyInteraction(), FORCE, BaseSpecies::getEmptyInteraction(), BaseHandler< T >::getObject(), PARTICLE, POSITION, and VELOCITY.

Referenced by DPMBase::decompose().

71 {
72 #ifdef MERCURY_USE_MPI
73  //Note: Important that the MPI type creation is done in the order given by the enum
74  MPIParticle dummyParticle;
75  MPIParticlePosition dummyPosition;
76  MPIParticleVelocity dummyVelocity;
77  MPIParticleForce dummyForce;
78  createMercuryMPIType(dummyParticle, MercuryMPIType::PARTICLE);
79  createMercuryMPIType(dummyPosition, MercuryMPIType::POSITION);
80  createMercuryMPIType(dummyVelocity, MercuryMPIType::VELOCITY);
81  createMercuryMPIType(dummyForce, MercuryMPIType::FORCE);
82 
83  //Obtain the correct history force class
84  //NOTE: only works for one type of interaction
85  if (dataTypes_.size() == 4)
86  {
87  //Create a dummy interaction to get a grip on the size of the MPI interaction class
88  const BaseSpecies* species = speciesHandler.getObject(0);
89  BaseInteraction* emptyInteraction = species->getEmptyInteraction();
90  emptyInteraction->createMPIType();
91  species->deleteEmptyInteraction(emptyInteraction);
92  }
93 #endif
94 }
static MPIContainer& MPIContainer::Instance ( )
inlinestatic

Fetch the instance to be used for communication.

Returns
The only instance of this class

Definition at line 130 of file MpiContainer.h.

Referenced by ParticleHandler::addGhostObject(), Domain::addNewParticles(), ParticleHandler::addObject(), Domain::addParticle(), RandomClusterInsertionBoundary::checkBoundaryBeforeTimeStep(), InsertionBoundary::checkBoundaryBeforeTimeStep(), DPMBase::checkParticleForInteraction(), CircularPeriodicBoundary::CircularPeriodicBoundary(), PeriodicBoundaryHandler::communicateNumberOfNewParticlesAndInteractions(), PeriodicBoundaryHandler::communicateTargetDomains(), AngledPeriodicBoundary::copy(), SubcriticalMaserBoundaryTEST::copyExtraParticles(), Interaction< NormalForceInteraction, FrictionForceInteraction, AdhesiveForceInteraction >::createMPIType(), DPMBase::decompose(), CGHandler::evaluateDataFiles(), CGHandler::evaluateRestartFiles(), SubcriticalMaserBoundaryTEST::extendBottom(), ParticleHandler::getKineticEnergy(), ParticleHandler::getLargestInteractionRadius(), ParticleHandler::getMass(), ParticleHandler::getMassTimesPosition(), ParticleHandler::getMeanRadius(), getMPISum(), ParticleHandler::getNumberOfFixedObjects(), ParticleHandler::getNumberOfObjects(), ParticleHandler::getNumberOfRealObjects(), ParticleHandler::getNumberOfRealObjectsLocal(), ParticleHandler::getParticleAttribute(), ParticleHandler::getRotationalEnergy(), ParticleHandler::getSmallestInteractionRadius(), ParticleSpecies::getSmallestParticleMass(), ParticleHandler::getVolume(), MercuryBase::hGridInfo(), initialiseMPI(), LeesEdwardsBoundary::LeesEdwardsBoundary(), DPMBase::mpiIsInCommunicationZone(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::prepareNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), printError(), printInfo(), printMessage(), DPMBase::printTime(), RNG::randomise(), ParticleHandler::removeObject(), Domain::sendAndReceiveCount(), Domain::sendAndReceiveMPIData(), DPMBase::synchroniseParticle(), PeriodicBoundaryHandler::updateParticles(), PeriodicBoundaryHandler::updateParticleStatus(), Domain::updateStatus(), Domain::updateVelocity(), and InteractionHandler::write().

131  {
132  static MPIContainer theInstance;
133  return theInstance;
134  }
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::receive ( T &  t,
int  from,
int  tag 
)
inline

Asynchronously receive a scalar from some other processor.

Parameters
[in,out]	t	the data
[in]	from	the processor that sends the information
[in]	tag	an identifier for this specific receive request. This must be unique among all receive requests between the previous synchronisation step and the next one. Exactly one send request must also provide this tag, and it must be issued on the process specified by the 'from' parameter.

Definition at line 213 of file MpiContainer.h.

References processorID_.

Referenced by ParticleHandler::addGhostObject(), PeriodicBoundaryHandler::communicateNumberOfNewParticlesAndInteractions(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), Domain::sendAndReceiveCount(), and Domain::sendAndReceiveMPIData().

214  {
215 #if MERCURY_ASSERTS
216  if (from == processorID_)
217  {
218  std::cout << "[MPI FATAL]: Receiving data from self!" << std::endl;
219  std::exit(-1);
220  }
221 #endif
222 #ifdef MERCURY_USE_MPI
223  pending_.push_back(communicator_.Irecv(&t, 1, Detail::toMPIType(t), from, tag ));
224 #endif
225  }
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::receive ( T *  t,
int  count,
int  from,
int  tag 
)
inline
Todo:
MX: type documentation. This overload is used to receive vectors of scalars across processors.

Definition at line 230 of file MpiContainer.h.

References processorID_.

231  {
232 #if MERCURY_ASSERTS
233  if (from == processorID_)
234  {
235  std::cout << "[MPI FATAL]: Receiving data from self!" << std::endl;
236  std::exit(-1);
237  }
238 
239  if (count == 0)
240  {
241  std::cout << "[MPI ERROR]: Receiving zero data" << std::endl;
242  }
243 #endif
244 #ifdef MERCURY_USE_MPI
245  pending_.push_back(communicator_.Irecv(t, count, Detail::toMPIType(*t), from, tag ));
246 #endif
247  }
template<typename T >
void MPIContainer::receive ( T *  t,
MercuryMPIType  type,
int  count,
int  from,
int  tag 
)
inline

Asynchronously receive a list of MercuryMPIType objects from some other processor.

Parameters
[in,out]	t	the data, a list of MercuryMPIType objects
[in]	type	the MPIType that is being received, for instance an MPIParticle
[in]	count	the number of objects that are being received
[in]	from	the processor that sends the information
[in]	tag	a unique identifier that corresponds with a send command by the sending processor

Definition at line 289 of file MpiContainer.h.

References processorID_.

290  {
291  //std::cout << "[Process: " << processorID_ << "]: QUEUING RECEIVE REQUEST with tag: " << tag << std::endl;
292 #if MERCURY_ASSERTS
293  if (from == processorID_)
294  {
295  std::cout << "[MPI FATAL]: Receiving data from self!" << std::endl;
296  std::exit(-1);
297  }
298 
299  if (count == 0)
300  {
301  std::cout << "[MPI ERROR]: Receiving zero data" << std::endl;
302  }
303 #endif
304 #ifdef MERCURY_USE_MPI
305  pending_.push_back(communicator_.Irecv(t, count, dataTypes_[type], from, tag ));
306 #endif
307  }
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::send ( T &  t,
int  to,
int  tag 
)
inline

Asynchronously send a scalar to some other processor.

Parameters
[in]	t	the data
[in]	to	the processor to receive the information
[in]	tag	an identifier for this specific send request. This must be unique among all send requests between the previous synchronisation step and the next one. Exactly one receive request must also provide this tag, and it must be issued on the process specified by the 'to' parameter.

Definition at line 167 of file MpiContainer.h.

References processorID_.

Referenced by ParticleHandler::addGhostObject(), PeriodicBoundaryHandler::communicateNumberOfNewParticlesAndInteractions(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), Domain::sendAndReceiveCount(), and Domain::sendAndReceiveMPIData().

168  {
169 #if MERCURY_ASSERTS
170  if (to == processorID_)
171  {
172  std::cout << "[MPI FATAL]: Sending data to self!" << std::endl;
173  std::exit(-1);
174  }
175 #endif
176 #ifdef MERCURY_USE_MPI
177  pending_.push_back(communicator_.Isend(&t, 1, Detail::toMPIType(t), to, tag ));
178 #endif
179  }
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::send ( T *  t,
int  count,
int  to,
int  tag 
)
inline
Todo:
MX: type documentation. This overload is used to send vectors of scalars across processors (hence the *t).

Definition at line 184 of file MpiContainer.h.

References processorID_.

185  {
186 #if MERCURY_ASSERTS
187  if (to == processorID_)
188  {
189  std::cout << "[MPI FATAL]: Sending data to self!" << std::endl;
190  std::exit(-1);
191  }
192 
193  if (count == 0)
194  {
195  std::cout << "[MPI ERROR]: Sending zero data" << std::endl;
196  }
197 #endif
198 #ifdef MERCURY_USE_MPI
199  pending_.push_back(communicator_.Isend(t, count, Detail::toMPIType(*t), to, tag ));
200 #endif
201  }
template<typename T >
void MPIContainer::send ( T *  t,
MercuryMPIType  type,
int  count,
int  to,
int  tag 
)
inline

Asynchronously send a list of MercuryMPIType objects to some other processor.

Parameters
[in,out]	t	the data, a list of MercuryMPIType objects
[in]	type	the MPIType that is being sent, for instance an MPIParticle
[in]	count	the number of objects that are being sent
[in]	to	the processor that the data is sent to
[in]	tag	a unique identifier that corresponds with a receive command by the receiving processor

Definition at line 259 of file MpiContainer.h.

References processorID_.

260  {
261  //std::cout << "[Process: " << processorID_ << "]: QUEUING SEND REQUEST with tag: " << tag << std::endl;
262 #if MERCURY_ASSERTS
263  if (to == processorID_)
264  {
265  std::cout << "[MPI FATAL]: Sending data to self!" << std::endl;
266  std::exit(-1);
267  }
268 
269  if (count == 0)
270  {
271  std::cout << "[MPI ERROR]: Sending zero data" << std::endl;
272  }
273 #endif
274 #ifdef MERCURY_USE_MPI
275  pending_.push_back(communicator_.Isend(t, count, dataTypes_[type], to, tag ));
276 
277 #endif
278  }
void MPIContainer::sync ( )
inline

Process all pending asynchronous communication requests before continuing.

Each processor can commit a send and receive request at any time when it is finished with computing. At some point these requests have to be resolved in order to keep order in the simulation. After all requests are resolved, a barrier ensures that all processors wait until all requests are resolved.

Definition at line 148 of file MpiContainer.h.

Referenced by ParticleHandler::addGhostObject(), Domain::addNewParticles(), Domain::addParticle(), DPMBase::decompose(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::prepareNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), Domain::updateStatus(), and Domain::updateVelocity().

149  {
150 #ifdef MERCURY_USE_MPI
151  MPI::Request::Waitall(pending_.size(),pending_.data());
152  pending_.clear();
153  communicator_.Barrier();
154 #endif
155  }
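A hedged sketch of the asynchronous pattern (hypothetical code for a two-processor run): queue matching send and receive requests, then resolve them with sync().

#include "MpiContainer.h"

// Hypothetical sketch: ranks 0 and 1 exchange a double asynchronously.
void exchangeAsync()
{
    MPIContainer& communicator = MPIContainer::Instance();
    int rank = static_cast<int>(communicator.getProcessorID());
    int other = 1 - rank; // the neighbouring rank in a 2-rank run
    double outgoing = static_cast<double>(rank);
    double incoming = 0.0;
    communicator.send(outgoing, other, 7);    // queue the send, tag 7
    communicator.receive(incoming, other, 7); // queue the receive, tag 7
    communicator.sync(); // resolve both requests, then barrier
}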

Member Data Documentation

std::size_t MPIContainer::numberOfProcessors_
private

The total number of processors in the communicator.

Definition at line 595 of file MpiContainer.h.

Referenced by getNumberOfProcessors(), and MPIContainer().

std::size_t MPIContainer::processorID_
private

The ID of the processor this class is running on.

Definition at line 590 of file MpiContainer.h.

Referenced by directReceive(), directSend(), getProcessorID(), MPIContainer(), receive(), and send().


The documentation for this class was generated from the following files:

MpiContainer.h
MpiContainer.cc