MercuryDPM Trunk
MPIContainer Class Reference (final)

This class contains all information and functions required for communication between processors. More...

#include <MpiContainer.h>

Public Member Functions

void initialiseMercuryMPITypes (const SpeciesHandler &speciesHandler)
 Creates the MPI types required for communication of Mercury data through the MPI interface. More...
 
void sync ()
 Process all pending asynchronous communication requests before continuing. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type send (T &t, int to, int tag)
 Asynchronously send a scalar to some other processor. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type send (T *t, int count, int to, int tag)
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type receive (T &t, int from, int tag)
 Asynchronously receive a scalar from some other processor. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type receive (T *t, int count, int from, int tag)
 
template<typename T >
void send (T *t, MercuryMPIType type, int count, int to, int tag)
 Asynchronously send a list of MercuryMPIType objects to some other processor. More...
 
template<typename T >
void receive (T *t, MercuryMPIType type, int count, int from, int tag)
 Asynchronously receive a list of MercuryMPIType objects from some other processor. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type directSend (T &t, int count, int to, int tag)
 Synchronously send a list of scalars to another processor. The data must be received directly or the program will stall. More...
 
template<typename T >
void directSend (T *t, MercuryMPIType type, int count, int to, int tag)
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type directReceive (T &t, int count, int from, int tag)
 Synchronously receive a list of scalars from another processor. If the matching send has not been issued, this function stalls the program. More...
 
template<typename T >
void directReceive (T *t, MercuryMPIType type, int count, int from, int tag)
 
template<typename T >
void gather (T &send_t, T *receive_t)
 Gathers a scalar from all processors into a vector of scalars on the root. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type broadcast (T &t, int fromProcessor=0)
 Broadcasts a scalar from the root to all other processors. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type broadcast (T *t, int size, int fromProcessor)
 Broadcasts an array of scalars from the root to all other processors. More...
 
template<typename T >
void broadcast (T *t, MercuryMPIType type, int fromProcessor=0)
 Broadcasts a MercuryMPIType to all other processors. More...
 
std::size_t getProcessorID ()
 Get the unique identifier associated with this processor. More...
 
std::size_t getNumberOfProcessors () const
 Get the total number of processors participating in this simulation. More...
 
template<typename T >
void createMercuryMPIType (T t, MercuryMPIType type)
 Creates an MPI type that tells the MPI interface what the data object looks like. More...
 
void deleteMercuryMPITypes ()
 Deletes the MercuryMPITypes. More...
 
 MPIContainer (const MPIContainer &orig)=delete
 Copy constructor is disabled, to enforce a singleton pattern. More...
 

Static Public Member Functions

static MPIContainer & Instance ()
 Fetch the instance to be used for communication. More...
 

Private Member Functions

 MPIContainer ()
 Constructor. More...
 

Private Attributes

int processorID_
 The ID of the processor this class is running on. More...
 
int numberOfProcessors_
 The total number of processors in the communicator. More...
 

Detailed Description

This class contains all information and functions required for communication between processors.

This class contains a newly created MPI communicator that is used to communicate between processors. The reason that MPI_COMM_WORLD is not used directly is that other libraries might also be communicating in parallel, and these communications should not be interfered with. The class follows a singleton pattern, restricting it to one instance. It furthermore keeps track of the number of processors, the processor IDs and the asynchronous send/receive requests.
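
A minimal usage sketch (not part of the MercuryDPM sources; it assumes MPI has already been initialised, for instance through initialiseMPI(), which is referenced elsewhere on this page):

    #include <iostream>
    #include "MpiContainer.h"

    void reportRank()
    {
        //The singleton is the only way to obtain an MPIContainer
        MPIContainer& communicator = MPIContainer::Instance();
        std::cout << "Processor " << communicator.getProcessorID()
                  << " of " << communicator.getNumberOfProcessors() << std::endl;
    }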

Definition at line 129 of file MpiContainer.h.

Constructor & Destructor Documentation

MPIContainer::MPIContainer ( const MPIContainer & orig)
delete

Copy constructor is disabled, to enforce a singleton pattern.

MPIContainer::MPIContainer ( )
private

Constructor.

Initialise the communicator after MPI has been initialised.

Definition at line 43 of file MpiContainer.cc.

References FATAL, logger, numberOfProcessors_, and processorID_.

{
#ifdef MERCURY_USE_MPI
    int Mpi_init_flag = 0;
    MPI_Initialized(&Mpi_init_flag);
    if (!Mpi_init_flag)
    {
        logger(FATAL, "MPI should be initialised before calling the MPIContainer constructor");
    }
    //A personal communicator will be created to ensure we don't meddle with communicators of other libraries
    MPI_Group groupID;
    MPI_Comm_group(MPI_COMM_WORLD, &groupID);
    MPI_Comm_create(MPI_COMM_WORLD, groupID, &communicator_);
    MPI_Comm_rank(MPI_COMM_WORLD, &processorID_);
    MPI_Comm_size(MPI_COMM_WORLD, &numberOfProcessors_);
#else
    numberOfProcessors_ = 1; //reconstructed: this serial-build line was elided in the original listing
    processorID_ = 0;
#endif
}

Member Function Documentation

template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::broadcast ( T &  t,
int  fromProcessor = 0 
)
inline

Broadcasts a scalar from the root to all other processors.

Parameters
[in,out] t    scalar data that is sent by the root and received by the other processors

Definition at line 441 of file MpiContainer.h.

Referenced by ParticleHandler::addObject(), RandomClusterInsertionBoundary::checkBoundaryBeforeTimeStep(), InsertionBoundary::checkBoundaryBeforeTimeStep(), DPMBase::checkParticleForInteraction(), PeriodicBoundaryHandler::communicateTargetDomains(), SubcriticalMaserBoundaryTEST::copyExtraParticles(), SubcriticalMaserBoundaryTEST::extendBottom(), ParticleHandler::getNumberOfRealObjects(), DPMBase::mpiIsInCommunicationZone(), RNG::randomise(), DPMBase::synchroniseParticle(), and PeriodicBoundaryHandler::updateParticleStatus().

{
#ifdef MERCURY_USE_MPI
    MPI_Bcast(&t, 1, Detail::toMPIType(t), fromProcessor, communicator_);
#endif
}
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::broadcast ( T *  t,
int  size,
int  fromProcessor 
)
inline

Broadcasts an array of scalars from the root to all other processors.

Parameters
[in,out] t    pointer to the scalar data that is sent by the root and received by the other processors
[in] size    the number of scalars in the array

Definition at line 454 of file MpiContainer.h.

{
#ifdef MERCURY_USE_MPI
    MPI_Bcast((void *) t, size, Detail::toMPIType(t[0]), fromProcessor, communicator_);
#endif
}
template<typename T >
void MPIContainer::broadcast ( T *  t,
MercuryMPIType  type,
int  fromProcessor = 0 
)
inline

Broadcasts an MercuryMPIType to all other processors.

Parameters
[in,out] t    MercuryMPIType data that is sent by the root and received by the other processors

Definition at line 467 of file MpiContainer.h.

{
#ifdef MERCURY_USE_MPI
    MPI_Bcast((void *) t, 1, dataTypes_[type], fromProcessor, communicator_);
#endif
}
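
A sketch of typical use of the overloads above; the variable names are illustrative and an initialised MPIContainer is assumed:

    MPIContainer& communicator = MPIContainer::Instance();
    double timeStep = 0.0;
    if (communicator.getProcessorID() == 0)
    {
        timeStep = 1e-5; //e.g. a value only the root has read from an input file
    }
    communicator.broadcast(timeStep);     //scalar overload, fromProcessor defaults to 0

    int counts[3] = {0, 0, 0};
    communicator.broadcast(counts, 3, 0); //array overload: 3 scalars from the root

After the call every processor holds the root's value, so any decision based on timeStep stays identical across ranks.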
template<typename T >
void MPIContainer::createMercuryMPIType ( T  t,
MercuryMPIType  type 
)
inline

Creates an MPI type that tells the MPI interface what the data object looks like.

The MPI type is created by treating the object as a contiguous block of sizeof(T) bytes.

NOTE: The current manner of creating MPI data types might not be compatible when computing on different computers with different architectures and compilers. The "padding" that different compilers and computers add to the class might differ, which has a huge effect on the MPI data type, because it needs to be consistent across all computers. To resolve this, one can explicitly create a consistent data type for every required type. I wish good luck to the person that needs it, because it is a lot of typing work, and for now I am not doing it. /MX

Definition at line 564 of file MpiContainer.h.

Referenced by Interaction< NormalForceInteraction, FrictionForceInteraction, AdhesiveForceInteraction >::createMPIType(), and initialiseMercuryMPITypes().

{
#ifdef MERCURY_USE_MPI
    MPI_Datatype MPIType;
    MPI_Type_contiguous(sizeof(T), MPI_BYTE, &MPIType);
    MPI_Type_commit(&MPIType);
    dataTypes_.push_back(MPIType);
#endif
}
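
The padding issue mentioned in the note can be made concrete: because the type is registered as sizeof(T) raw bytes, any compiler-inserted padding travels along and must be identical on all participating machines. A small illustration with a hypothetical struct:

    struct Mixed //hypothetical example, not a MercuryDPM type
    {
        char flag;    //1 byte
        double value; //8 bytes, typically aligned to an 8-byte boundary
    };
    //Most compilers insert 7 padding bytes after 'flag', so sizeof(Mixed) is
    //usually 16 rather than 9, and the byte-wise MPI type mirrors that layout.
    static_assert(sizeof(Mixed) >= sizeof(char) + sizeof(double),
                  "the members must at least fit; padding may add more");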
void MPIContainer::deleteMercuryMPITypes ( )
inline

Deletes the MercuryMPITypes.

Definition at line 578 of file MpiContainer.h.

Referenced by initialiseMPI().

{
#ifdef MERCURY_USE_MPI
    for (MPI_Datatype type : dataTypes_)
    {
        MPI_Type_free(&type);
    }
#endif
}
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::directReceive ( T &  t,
int  count,
int  from,
int  tag 
)
inline

Synchronously receive a list of scalars from another processor. If the matching send has not been issued, this function stalls the program.

Parameters
[in,out] t    the data, a list of scalars
[in] count    the number of scalars to be received
[in] from    the processor that sends the information
[in] tag    a unique identifier that corresponds with a send command by the sending processor

Definition at line 383 of file MpiContainer.h.

References FATAL, logger, processorID_, and WARN.

{
#if MERCURY_ASSERTS
    if (from == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Receiving data from self!");
    }

    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Receiving zero data");
    }
#endif
#ifdef MERCURY_USE_MPI
    MPI_Recv(&t, count, Detail::toMPIType(t), from, tag, communicator_, MPI_STATUS_IGNORE);
#endif
}
template<typename T >
void MPIContainer::directReceive ( T *  t,
MercuryMPIType  type,
int  count,
int  from,
int  tag 
)
inline

Definition at line 402 of file MpiContainer.h.

References FATAL, logger, processorID_, and WARN.

{
#if MERCURY_ASSERTS
    if (from == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Receiving data from self!");
    }

    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Receiving zero data");
    }
#endif
#ifdef MERCURY_USE_MPI
    MPI_Recv(t, count, dataTypes_[type], from, tag, communicator_, MPI_STATUS_IGNORE);
#endif
}
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::directSend ( T &  t,
int  count,
int  to,
int  tag 
)
inline

Synchronously send a list of scalars to another processor. The data must be received directly or the program will stall.

Parameters
[in,out] t    the data, a list of scalars
[in] count    the number of scalars to be sent
[in] to    the processor that the data is sent to
[in] tag    a unique identifier that corresponds with a receive command by the receiving processor

Definition at line 335 of file MpiContainer.h.

References FATAL, logger, processorID_, and WARN.

{
#if MERCURY_ASSERTS
    if (to == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Sending data to self!");
    }

    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Sending zero data");
    }
#endif
#ifdef MERCURY_USE_MPI
    MPI_Ssend(&t, count, Detail::toMPIType(t), to, tag, communicator_);
#endif
}
template<typename T >
void MPIContainer::directSend ( T *  t,
MercuryMPIType  type,
int  count,
int  to,
int  tag 
)
inline

Definition at line 355 of file MpiContainer.h.

References FATAL, logger, processorID_, and WARN.

{
#if MERCURY_ASSERTS
    if (to == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Sending data to self!");
    }

    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Sending zero data");
    }
#endif
#ifdef MERCURY_USE_MPI
    MPI_Ssend(t, count, dataTypes_[type], to, tag, communicator_);
#endif
}
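
A sketch of how directSend() and directReceive() pair up between two ranks; as warned above, both calls must be issued with matching tags or the program stalls:

    MPIContainer& communicator = MPIContainer::Instance();
    double payload = 3.14; //illustrative data
    const int tag = 42;    //must match on both sides
    if (communicator.getProcessorID() == 0)
    {
        communicator.directSend(payload, 1, /*to*/ 1, tag);      //count = 1
    }
    else if (communicator.getProcessorID() == 1)
    {
        communicator.directReceive(payload, 1, /*from*/ 0, tag); //count = 1
    }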
template<typename T >
void MPIContainer::gather ( T &  send_t,
T *  receive_t 
)
inline

Gathers a scalar from all processors into a vector of scalars on the root.

When a single processor needs to know a certain value on all other processors, this function gathers those values at the root processor in an array whose size equals the communicator size.

Parameters
[in,out] send_t    the data that is being sent, a scalar value
[in,out] receive_t    the data that is received by the root, an array of scalars

Definition at line 428 of file MpiContainer.h.

Referenced by DPMBase::checkParticleForInteraction(), and DPMBase::mpiIsInCommunicationZone().

{
#ifdef MERCURY_USE_MPI
    MPI_Gather(&send_t, 1, Detail::toMPIType(send_t), receive_t, 1, Detail::toMPIType(send_t), 0, communicator_);
#endif
}
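
A sketch of gathering one value per processor at the root; assumes <vector> is included and the MPIContainer is initialised:

    MPIContainer& communicator = MPIContainer::Instance();
    unsigned localCount = 0; //e.g. the number of particles on this domain (illustrative)
    std::vector<unsigned> allCounts(communicator.getNumberOfProcessors());
    communicator.gather(localCount, allCounts.data());
    if (communicator.getProcessorID() == 0)
    {
        //only the root can rely on the buffer: allCounts[i] holds processor i's value
    }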
std::size_t MPIContainer::getNumberOfProcessors ( ) const

Get the total number of processors participating in this simulation.

Obtain the number of processors.

Returns
Returns the number of processors

Definition at line 104 of file MpiContainer.cc.

References numberOfProcessors_.

Referenced by CircularPeriodicBoundary::CircularPeriodicBoundary(), AngledPeriodicBoundary::copy(), CGHandler::evaluateDataFiles(), CGHandler::evaluateRestartFiles(), ParticleHandler::getNumberOfRealObjectsLocal(), MercuryBase::hGridInfo(), LeesEdwardsBoundary::LeesEdwardsBoundary(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), ParticleHandler::removeObject(), and PeriodicBoundaryHandler::updateParticleStatus().

{
    return numberOfProcessors_;
}
std::size_t MPIContainer::getProcessorID ( )

Get the unique identifier associated with this processor.

Obtain the current processor ID.

Returns
Returns the processor ID in the communication group

Definition at line 113 of file MpiContainer.cc.

References processorID_.

Referenced by ParticleHandler::addGhostObject(), ParticleHandler::addObject(), CGHandler::evaluateDataFiles(), CGHandler::evaluateRestartFiles(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), printError(), printFatalError(), printInfo(), printMessage(), DPMBase::printTime(), DPMBase::synchroniseParticle(), and PeriodicBoundaryHandler::updateParticleStatus().

{
    return processorID_;
}
void MPIContainer::initialiseMercuryMPITypes ( const SpeciesHandler & speciesHandler)

Creates the MPI types required for communication of Mercury data through the MPI interface.

Definition at line 74 of file MpiContainer.cc.

References createMercuryMPIType(), BaseInteraction::createMPIType(), BaseSpecies::deleteEmptyInteraction(), FORCE, BaseSpecies::getEmptyInteraction(), BaseHandler< T >::getObject(), PARTICLE, POSITION, and VELOCITY.

Referenced by DPMBase::decompose().

{
#ifdef MERCURY_USE_MPI
    //Note: Important that the MPI type creation is done in the order given by the enum
    MPIParticle dummyParticle;
    MPIParticlePosition dummyPosition;
    MPIParticleVelocity dummyVelocity;
    MPIParticleForce dummyForce;
    //The four registrations below were elided in the original listing; they are
    //reconstructed here from the references above (createMercuryMPIType, PARTICLE,
    //POSITION, VELOCITY, FORCE) and the size check that follows.
    createMercuryMPIType(dummyParticle, MercuryMPIType::PARTICLE);
    createMercuryMPIType(dummyPosition, MercuryMPIType::POSITION);
    createMercuryMPIType(dummyVelocity, MercuryMPIType::VELOCITY);
    createMercuryMPIType(dummyForce, MercuryMPIType::FORCE);

    //Obtain the correct history force class
    //NOTE: only works for one type of interaction
    if (dataTypes_.size() == 4)
    {
        //Create a dummy interaction to get a grip on the size of the MPI interaction class
        const BaseSpecies* species = speciesHandler.getObject(0);
        BaseInteraction* emptyInteraction = species->getEmptyInteraction();
        emptyInteraction->createMPIType();
        species->deleteEmptyInteraction(emptyInteraction);
    }
#endif
}
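
A sketch of the set-up order implied by this page (DPMBase::decompose() is the listed caller; initialiseMPI() is referenced by deleteMercuryMPITypes() and Instance(), and its exact signature is an assumption here):

    initialiseMPI(); //MPI itself must be up before the MPIContainer is first used
    MPIContainer& communicator = MPIContainer::Instance();
    communicator.initialiseMercuryMPITypes(speciesHandler); //speciesHandler assumed in scope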
static MPIContainer& MPIContainer::Instance ( )
inline static

fetch the instance to be used for communication

Returns
The only instance of this class

Definition at line 134 of file MpiContainer.h.

Referenced by ParticleHandler::addGhostObject(), Domain::addNewParticles(), ParticleHandler::addObject(), Domain::addParticle(), RandomClusterInsertionBoundary::checkBoundaryBeforeTimeStep(), InsertionBoundary::checkBoundaryBeforeTimeStep(), DPMBase::checkParticleForInteraction(), CircularPeriodicBoundary::CircularPeriodicBoundary(), PeriodicBoundaryHandler::communicateNumberOfNewParticlesAndInteractions(), PeriodicBoundaryHandler::communicateTargetDomains(), AngledPeriodicBoundary::copy(), SubcriticalMaserBoundaryTEST::copyExtraParticles(), Interaction< NormalForceInteraction, FrictionForceInteraction, AdhesiveForceInteraction >::createMPIType(), DPMBase::decompose(), CGHandler::evaluateDataFiles(), CGHandler::evaluateRestartFiles(), SubcriticalMaserBoundaryTEST::extendBottom(), ParticleHandler::getKineticEnergy(), ParticleHandler::getLargestInteractionRadius(), ParticleHandler::getMass(), ParticleHandler::getMassTimesPosition(), ParticleHandler::getMeanRadius(), getMPISum(), ParticleHandler::getNumberOfFixedObjects(), ParticleHandler::getNumberOfRealObjects(), ParticleHandler::getNumberOfRealObjectsLocal(), ParticleHandler::getParticleAttribute(), ParticleHandler::getRotationalEnergy(), ParticleHandler::getSmallestInteractionRadius(), ParticleSpecies::getSmallestParticleMass(), ParticleHandler::getVolume(), MercuryBase::hGridInfo(), initialiseMPI(), LeesEdwardsBoundary::LeesEdwardsBoundary(), DPMBase::mpiIsInCommunicationZone(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::prepareNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), printError(), printFatalError(), printInfo(), printMessage(), DPMBase::printTime(), RNG::randomise(), ParticleHandler::removeObject(), Domain::sendAndReceiveCount(), Domain::sendAndReceiveMPIData(), DPMBase::synchroniseParticle(), PeriodicBoundaryHandler::updateParticles(), PeriodicBoundaryHandler::updateParticleStatus(), Domain::updateStatus(), Domain::updateVelocity(), and InteractionHandler::write().

{
    static MPIContainer theInstance;
    return theInstance;
}
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::receive ( T &  t,
int  from,
int  tag 
)
inline

Asynchronously receive a scalar from some other processor.

Parameters
[in,out] t    the data
[in] from    the processor that sends the information
[in] tag    an identifier for this specific receive request. This must be unique among all receive requests between the previous synchronisation step and the next one. Exactly one send request must also provide this tag, and it must be issued on the process specified by the 'from' parameter.

Definition at line 221 of file MpiContainer.h.

References FATAL, logger, and processorID_.

Referenced by ParticleHandler::addGhostObject(), PeriodicBoundaryHandler::communicateNumberOfNewParticlesAndInteractions(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), Domain::sendAndReceiveCount(), and Domain::sendAndReceiveMPIData().

{
#if MERCURY_ASSERTS
    if (from == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Receiving data from self!");
    }
#endif
#ifdef MERCURY_USE_MPI
    MPI_Request request;
    MPI_Irecv(&t, 1, Detail::toMPIType(t), from, tag, communicator_, &request);
    pending_.push_back(request);
#endif
}
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::receive ( T *  t,
int  count,
int  from,
int  tag 
)
inline
Todo:
MX: type documentation. This overload is used to receive vectors of scalars across processors.

Definition at line 240 of file MpiContainer.h.

References FATAL, logger, processorID_, and WARN.

{
#if MERCURY_ASSERTS
    if (from == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Receiving data from self!");
    }

    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Receiving zero data");
    }
#endif
#ifdef MERCURY_USE_MPI
    MPI_Request request;
    MPI_Irecv(t, count, Detail::toMPIType(*t), from, tag, communicator_, &request); //receive into the buffer t points to
    pending_.push_back(request);
#endif
}
template<typename T >
void MPIContainer::receive ( T *  t,
MercuryMPIType  type,
int  count,
int  from,
int  tag 
)
inline

Asynchronously receive a list of MercuryMPIType objects from some other processor.

Parameters
[in,out] t    the data, a list of MercuryMPIType objects
[in] type    the MPIType that is being received, for instance an MPIParticle
[in] count    the number of objects that are being received
[in] from    the processor that sends the information
[in] tag    a unique identifier that corresponds with a send command by the sending processor

Definition at line 303 of file MpiContainer.h.

References FATAL, logger, processorID_, and WARN.

{
    //std::cout << "[Process: " << processorID_ << "]: QUEUING RECEIVE REQUEST with tag: " << tag << std::endl;
#if MERCURY_ASSERTS
    if (from == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Receiving data from self!");
    }

    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Receiving zero data");
    }
#endif
#ifdef MERCURY_USE_MPI
    MPI_Request request;
    MPI_Irecv(t, count, dataTypes_[type], from, tag, communicator_, &request);
    pending_.push_back(request);
#endif
}
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::send ( T &  t,
int  to,
int  tag 
)
inline

Asynchronously send a scalar to some other processor.

Parameters
[in] t    the data
[in] to    the processor that receives the information
[in] tag    an identifier for this specific send request. This must be unique among all send requests between the previous synchronisation step and the next one. Exactly one receive request must also provide this tag, and it must be issued on the process specified by the 'to' parameter.

Definition at line 171 of file MpiContainer.h.

References FATAL, logger, and processorID_.

Referenced by ParticleHandler::addGhostObject(), PeriodicBoundaryHandler::communicateNumberOfNewParticlesAndInteractions(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), Domain::sendAndReceiveCount(), and Domain::sendAndReceiveMPIData().

{
#if MERCURY_ASSERTS
    if (to == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Sending data to self!");
    }
#endif
#ifdef MERCURY_USE_MPI
    MPI_Request request;
    MPI_Isend(&t, 1, Detail::toMPIType(t), to, tag, communicator_, &request);
    pending_.push_back(request);
#endif
}
template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::send ( T *  t,
int  count,
int  to,
int  tag 
)
inline
Todo:
MX: type documentation. This overload is used to send vectors of scalars across processors (hence the T*).

Definition at line 190 of file MpiContainer.h.

References FATAL, logger, processorID_, and WARN.

{
#if MERCURY_ASSERTS
    if (to == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Sending data to self!");
    }

    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Sending zero data");
    }
#endif
#ifdef MERCURY_USE_MPI
    MPI_Request request;
    MPI_Isend(t, count, Detail::toMPIType(*t), to, tag, communicator_, &request);
    pending_.push_back(request);
#endif
}
template<typename T >
void MPIContainer::send ( T *  t,
MercuryMPIType  type,
int  count,
int  to,
int  tag 
)
inline

Asynchronously send a list of MercuryMPIType objects to some other processor.

Parameters
[in,out] t    the data, a list of MPIType objects
[in] type    the MPIType that is being sent, for instance an MPIParticle
[in] count    the number of objects that are being sent
[in] to    the processor that the data is sent to
[in] tag    a unique identifier that corresponds with a receive command by the receiving processor

Definition at line 271 of file MpiContainer.h.

References FATAL, logger, processorID_, and WARN.

{
    //std::cout << "[Process: " << processorID_ << "]: QUEUING SEND REQUEST with tag: " << tag << std::endl;
#if MERCURY_ASSERTS
    if (to == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Sending data to self!");
    }

    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Sending zero data");
    }
#endif
#ifdef MERCURY_USE_MPI
    MPI_Request request;
    MPI_Isend(t, count, dataTypes_[type], to, tag, communicator_, &request);
    pending_.push_back(request);
#endif
}
void MPIContainer::sync ( )
inline

Process all pending asynchronous communication requests before continuing.

Each processor can commit send and receive requests at any time when it is finished with computing. At some point these requests have to be resolved to keep order in the simulation. After all requests are resolved, a barrier ensures that all processors wait until every request is resolved.

Definition at line 152 of file MpiContainer.h.

Referenced by ParticleHandler::addGhostObject(), Domain::addNewParticles(), Domain::addParticle(), DPMBase::decompose(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::prepareNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), Domain::updateStatus(), and Domain::updateVelocity().

{
#ifdef MERCURY_USE_MPI
    MPI_Waitall(pending_.size(), pending_.data(), MPI_STATUSES_IGNORE);
    pending_.clear();
    MPI_Barrier(communicator_);
#endif
}
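
A sketch of the asynchronous pattern that sync() completes: queue the transfers, then resolve them all at once. It assumes exactly two ranks; the tag obeys the uniqueness rule described for send() and receive():

    MPIContainer& communicator = MPIContainer::Instance();
    double outgoing = 1.0, incoming = 0.0;
    const int partner = (communicator.getProcessorID() == 0) ? 1 : 0;
    communicator.send(outgoing, partner, /*tag*/ 7);    //queued, returns immediately
    communicator.receive(incoming, partner, /*tag*/ 7); //queued, returns immediately
    communicator.sync();                                //MPI_Waitall + barrier: both transfers done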

Member Data Documentation

int MPIContainer::numberOfProcessors_
private

The total number of processors in the communicator.

Definition at line 609 of file MpiContainer.h.

Referenced by getNumberOfProcessors(), and MPIContainer().

int MPIContainer::processorID_
private

The ID of the processor this class is running on.

Definition at line 604 of file MpiContainer.h.

Referenced by directReceive(), directSend(), getProcessorID(), MPIContainer(), receive(), and send().


The documentation for this class was generated from the following files:

MpiContainer.h
MpiContainer.cc