This class contains all information and functions required for communication between processors.
#include <MpiContainer.h>
Public Member Functions

void initialiseMercuryMPITypes (const SpeciesHandler &speciesHandler)
    Creates the MPI types required for communication of Mercury data through the MPI interface.
void sync ()
    Process all pending asynchronous communication requests before continuing.
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type send (T &t, int to, int tag)
    Asynchronously send a scalar to some other processor.
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type send (T *t, int count, int to, int tag)
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type receive (T &t, int from, int tag)
    Asynchronously receive a scalar from some other processor.
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type receive (T *t, int count, int from, int tag)
template<typename T >
void send (T *t, MercuryMPIType type, int count, int to, int tag)
    Asynchronously send a list of MercuryMPIType objects to some other processor.
template<typename T >
void receive (T *t, MercuryMPIType type, int count, int from, int tag)
    Asynchronously receive a list of MercuryMPIType objects from some other processor.
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type directSend (T &t, int count, int to, int tag)
    Synchronously send a list of scalars to another processor. The data should be received directly or the program will stall.
template<typename T >
void directSend (T *t, MercuryMPIType type, int count, int to, int tag)
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type directReceive (T &t, int count, int from, int tag)
    Synchronously receive a list of scalars from another processor. If the send command has not been issued, this function will stall the program.
template<typename T >
void directReceive (T *t, MercuryMPIType type, int count, int from, int tag)
template<typename T >
void gather (T &send_t, T *receive_t)
    Gathers a scalar from all processors into a vector of scalars on the root.
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type broadcast (T &t, int fromProcessor=0)
    Broadcasts a scalar from the root to all other processors.
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type broadcast (T *t, int size, int fromProcessor)
    Broadcasts an array of scalars from the root to all other processors.
template<typename T >
void broadcast (T *t, MercuryMPIType type, int fromProcessor=0)
    Broadcasts a MercuryMPIType to all other processors.
std::size_t getProcessorID ()
    Obtain the current processor ID, the unique identifier associated with this processor.
std::size_t getNumberOfProcessors () const
    Get the total number of processors participating in this simulation.
template<typename T >
void createMercuryMPIType (T t, MercuryMPIType type)
    Creates an MPI data type that tells the MPI interface how the given data object is laid out.
void deleteMercuryMPITypes ()
    Deletes the MercuryMPITypes.
MPIContainer (const MPIContainer &orig)=delete
    Copy constructor is disabled, to enforce a singleton pattern.
Static Public Member Functions

static MPIContainer & Instance ()
    Fetch the instance to be used for communication.

Private Member Functions

MPIContainer ()
    Constructor.

Private Attributes

int processorID_
    The ID of the processor this class is running on.
int numberOfProcessors_
    The total number of processors in the communicator.
This class contains all information and functions required for communication between processors.
This class contains a newly created MPI communicator used to communicate between processors. The reason that MPI_COMM_WORLD is not used directly is that other libraries might also be communicating in parallel, and you do not want to interfere with those communications. The class follows a singleton pattern, restricting it to one instance. It furthermore keeps track of the number of processors, the processor IDs and the asynchronous send/receive requests.
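A minimal usage sketch, assuming MercuryDPM is built with MPI support and MPI has already been initialised (e.g. via initialiseMPI()); the function name is illustrative, while Instance(), getProcessorID() and getNumberOfProcessors() are the members documented below:

    #include <MpiContainer.h>
    #include <cstddef>
    #include <iostream>

    // Sketch: all parallel communication goes through the single MPIContainer instance.
    void reportCommunicatorSize()
    {
        MPIContainer& communicator = MPIContainer::Instance();        // singleton access
        const std::size_t processorID = communicator.getProcessorID();
        const std::size_t numberOfProcessors = communicator.getNumberOfProcessors();
        if (processorID == 0)  // let only the root processor write output
        {
            std::cout << "Simulation runs on " << numberOfProcessors << " processors\n";
        }
    }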
MPIContainer::MPIContainer (const MPIContainer & orig) [delete]

Copy constructor is disabled, to enforce a singleton pattern.
MPIContainer::MPIContainer () [private]

Constructor.

Initialise the communicator after MPI has been initialised.

References FATAL, logger, numberOfProcessors_, and processorID_.
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type MPIContainer::broadcast (T & t, int fromProcessor = 0) [inline]

Broadcasts a scalar from the root to all other processors.

Parameters
    [in,out]  t  scalar data that is sent by the root and received by all other processors
Referenced by ParticleHandler::addObject(), InsertionBoundary::checkBoundaryBeforeTimeStep(), RandomClusterInsertionBoundary::checkBoundaryBeforeTimeStep(), DPMBase::checkParticleForInteraction(), PeriodicBoundaryHandler::communicateTargetDomains(), SubcriticalMaserBoundaryTEST::copyExtraParticles(), SubcriticalMaserBoundaryTEST::extendBottom(), ParticleHandler::getNumberOfRealObjects(), DPMBase::mpiIsInCommunicationZone(), RNG::randomise(), DPMBase::synchroniseParticle(), and PeriodicBoundaryHandler::updateParticleStatus().
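A minimal sketch of this scalar broadcast, assuming MPI is initialised; the function and variable names are illustrative:

    #include <MpiContainer.h>

    // Sketch: the root processor determines a value and shares it, so that all
    // processors continue with identical data.
    void distributeTimeStep(double& timeStep)
    {
        // Before the call only processor 0 (the default root) needs to hold a valid
        // value; after the call every processor holds the root's value.
        MPIContainer::Instance().broadcast(timeStep);
    }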
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type MPIContainer::broadcast (T * t, int size, int fromProcessor) [inline]
template<typename T >
void MPIContainer::broadcast (T * t, MercuryMPIType type, int fromProcessor = 0) [inline]
template<typename T >
void MPIContainer::createMercuryMPIType (T t, MercuryMPIType type) [inline]

Creates an MPI data type that tells the MPI interface how the given data object is laid out in memory.

NOTE: The current manner of creating MPI data types might not be compatible when computing on different computers with different architectures and compilers. The padding that different compilers and computers add to the class might differ, which has a huge effect on the MPI data type because it needs to be consistent over all computers. To resolve this, one can explicitly create a consistent data type for each required type. I wish good luck to the person that needs it, because it is a lot of typing work and for now I am not doing it. /MX
Referenced by Interaction< NormalForceInteraction, FrictionForceInteraction, AdhesiveForceInteraction >::createMPIType(), and initialiseMercuryMPITypes().
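As a rough illustration of the "consistent data type" mentioned in the note above, the sketch below builds a member-by-member MPI data type for a hypothetical struct using the raw MPI C API; this is not MercuryDPM code, and the struct layout (MPIParticlePosition) is invented for the example:

    #include <mpi.h>
    #include <cstddef>

    // Hypothetical particle data; the layout is invented for this example only.
    struct MPIParticlePosition
    {
        double position[3];
        int id;
    };

    // Build an MPI data type that matches the struct member-by-member, so the type
    // no longer depends on compiler-specific padding of a raw byte copy.
    MPI_Datatype createPortableType()
    {
        int blockLengths[2] = {3, 1};
        MPI_Aint displacements[2] = {offsetof(MPIParticlePosition, position),
                                     offsetof(MPIParticlePosition, id)};
        MPI_Datatype memberTypes[2] = {MPI_DOUBLE, MPI_INT};
        MPI_Datatype structType;
        MPI_Type_create_struct(2, blockLengths, displacements, memberTypes, &structType);
        // Resize so the extent also covers any trailing padding of the struct.
        MPI_Datatype resizedType;
        MPI_Type_create_resized(structType, 0, sizeof(MPIParticlePosition), &resizedType);
        MPI_Type_commit(&resizedType);
        return resizedType;
    }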
void MPIContainer::deleteMercuryMPITypes () [inline]

Deletes the MercuryMPITypes.
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type MPIContainer::directReceive (T & t, int count, int from, int tag) [inline]

Synchronously receive a list of scalars from another processor. If the send command has not been issued, this function will stall the program.

Parameters
    [in,out]  t      the data, a list of scalars
    [in]      count  the number of scalars to be received
    [in]      from   the processor that sends the information
    [in]      tag    a unique identifier that corresponds with a send command by the sending processor
References FATAL, logger, processorID_, and WARN.
template<typename T >
void MPIContainer::directReceive (T * t, MercuryMPIType type, int count, int from, int tag) [inline]
References FATAL, logger, processorID_, and WARN.
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type MPIContainer::directSend (T & t, int count, int to, int tag) [inline]

Synchronously send a list of scalars to another processor. The data should be received directly or the program will stall.

Parameters
    [in,out]  t      the data, a list of scalars
    [in]      count  the number of scalars to be sent
    [in]      to     the processor that the data is being sent to
    [in]      tag    a unique identifier that corresponds with a receive command by the receiving processor
References FATAL, logger, processorID_, and WARN.
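A minimal sketch of a matching directSend/directReceive pair, assuming at least two processors; the ranks, tag and variable names are illustrative. If either side omits its call, the program stalls as described above:

    #include <MpiContainer.h>

    // Sketch of a blocking exchange between processors 0 and 1.
    void sendCountToNeighbour(int& localCount, int& neighbourCount)
    {
        MPIContainer& communicator = MPIContainer::Instance();
        const int tag = 42;  // must be the same on sender and receiver
        if (communicator.getProcessorID() == 0)
        {
            communicator.directSend(localCount, 1, /*to*/ 1, tag);
        }
        else if (communicator.getProcessorID() == 1)
        {
            communicator.directReceive(neighbourCount, 1, /*from*/ 0, tag);
        }
    }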
template<typename T >
void MPIContainer::directSend (T * t, MercuryMPIType type, int count, int to, int tag) [inline]
References FATAL, logger, processorID_, and WARN.
template<typename T >
void MPIContainer::gather (T & send_t, T * receive_t) [inline]

Gathers a scalar from all processors into a vector of scalars on the root.

When a single processor needs to know a certain value on all processors, this function gathers them together at the root processor, in an array with the size of the communicator.

Parameters
    [in,out]  send_t     the data that is being sent, a scalar value
    [in,out]  receive_t  the data that is received by the root, an array of scalars
Referenced by DPMBase::checkParticleForInteraction(), and DPMBase::mpiIsInCommunicationZone().
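A minimal sketch of the gather, assuming MPI is initialised; the variable names are illustrative:

    #include <MpiContainer.h>
    #include <vector>

    // Sketch: every processor contributes its local particle count and the root
    // receives them all.
    void collectParticleCounts(unsigned localNumberOfParticles)
    {
        MPIContainer& communicator = MPIContainer::Instance();
        // The receive buffer must provide one entry per processor; only the root
        // actually fills and uses it.
        std::vector<unsigned> countsPerProcessor(communicator.getNumberOfProcessors());
        communicator.gather(localNumberOfParticles, countsPerProcessor.data());
    }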
std::size_t MPIContainer::getNumberOfProcessors () const
Get the total number of processors participating in this simulation.
Obtain the number of processors.
References numberOfProcessors_.
Referenced by CircularPeriodicBoundary::CircularPeriodicBoundary(), AngledPeriodicBoundary::copy(), CGHandler::evaluateDataFiles(), CGHandler::evaluateRestartFiles(), ParticleHandler::getNumberOfRealObjectsLocal(), MercuryBase::hGridInfo(), LeesEdwardsBoundary::LeesEdwardsBoundary(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), ParticleHandler::removeObject(), TimeDependentPeriodicBoundary::TimeDependentPeriodicBoundary(), and PeriodicBoundaryHandler::updateParticleStatus().
std::size_t MPIContainer::getProcessorID ()

Obtain the current processor ID, the unique identifier associated with this processor.

This class also provides reduce, allReduce and allGather operations:

Reduce: a scalar defined on all processors is reduced to one number by an operation. Examples of operations are MPI::MAX, MPI::MIN and MPI::SUM, giving the maximum, minimum or the summation of the given scalars. The resulting reduced scalar is then sent to the target processor ID; generally this is the root processor 0.

Parameters
    [in,out]  t          scalar that needs to be collected and reduced to one scalar
    [in]      operation  operation that is performed on the collected scalars
    [in]      id         optional input, the receiving processor of the reduced scalar

AllReduce: reduces a scalar on all processors by a given MPI operation. A local scalar on all processors is reduced to one scalar as output; the reduction follows the given operation rule.

Parameters
    [in]   send_t     the scalar that is sent to be reduced
    [out]  receive_t  the reduced scalar
    [in]   operation  the operation that is performed on all local numbers to reduce them to a single number

AllGather: takes a (different) scalar from all processors and returns a vector with all scalars. Sometimes it is required to share a local scalar, such as the number of particles, with all other processors. With allGather all processors then know the local scalar of all other processors.

Parameters
    [in]      send_t         the scalar that is sent to all other processors
    [in]      send_count     the number of scalars sent to other processors
    [in,out]  receive_t      a vector of scalars that contains the scalars from all other processors
    [in]      receive_count  the number of scalars that is received from each processor

References processorID_.
Referenced by ParticleHandler::addGhostObject(), ParticleHandler::addObject(), CGHandler::evaluateDataFiles(), CGHandler::evaluateRestartFiles(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), printError(), printFatalError(), printInfo(), DPMBase::printTime(), printWarn(), and DPMBase::synchroniseParticle().
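Since the exact member signatures of the reduce operations are not listed on this page, the sketch below illustrates the described reduce behaviour with the raw MPI C API instead; MPIContainer itself communicates over its own duplicated communicator, and MPI_COMM_WORLD is used here only to keep the example self-contained:

    #include <mpi.h>

    // Conceptual sketch of the reduce operation described above.
    double sumOverAllProcessors(double localValue, int targetProcessor = 0)
    {
        double globalSum = 0.0;
        // MPI_SUM adds the local scalars; MPI_MAX or MPI_MIN would return the maximum
        // or minimum instead, matching the operations mentioned above.
        MPI_Reduce(&localValue, &globalSum, 1, MPI_DOUBLE, MPI_SUM,
                   targetProcessor, MPI_COMM_WORLD);
        return globalSum;  // only meaningful on the target processor
    }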
void MPIContainer::initialiseMercuryMPITypes (const SpeciesHandler & speciesHandler)
Creates the MPI types required for communication of Mercury data through the MPI interface.
References createMercuryMPIType(), BaseInteraction::createMPIType(), BaseSpecies::deleteEmptyInteraction(), FORCE, BaseSpecies::getEmptyInteraction(), BaseHandler< T >::getObject(), PARTICLE, POSITION, and VELOCITY.
Referenced by DPMBase::decompose().
MPIContainer & MPIContainer::Instance () [inline, static]

Fetch the instance to be used for communication.
Referenced by ParticleHandler::addGhostObject(), Domain::addNewParticles(), ParticleHandler::addObject(), Domain::addParticle(), InsertionBoundary::checkBoundaryBeforeTimeStep(), RandomClusterInsertionBoundary::checkBoundaryBeforeTimeStep(), DPMBase::checkParticleForInteraction(), MercuryBase::checkParticleForInteraction(), CircularPeriodicBoundary::CircularPeriodicBoundary(), PeriodicBoundaryHandler::communicateNumberOfNewParticlesAndInteractions(), PeriodicBoundaryHandler::communicateTargetDomains(), AngledPeriodicBoundary::copy(), SubcriticalMaserBoundaryTEST::copyExtraParticles(), Interaction< NormalForceInteraction, FrictionForceInteraction, AdhesiveForceInteraction >::createMPIType(), DPMBase::decompose(), CGHandler::evaluateDataFiles(), CGHandler::evaluateRestartFiles(), SubcriticalMaserBoundaryTEST::extendBottom(), ParticleHandler::getKineticEnergy(), ParticleHandler::getLargestInteractionRadius(), ParticleHandler::getMass(), ParticleHandler::getMassTimesPosition(), ParticleHandler::getMeanRadius(), getMPISum(), ParticleHandler::getNumberOfFixedObjects(), ParticleHandler::getNumberOfRealObjects(), ParticleHandler::getNumberOfRealObjectsLocal(), ParticleHandler::getParticleAttribute(), ParticleHandler::getRotationalEnergy(), ParticleHandler::getSmallestInteractionRadius(), ParticleSpecies::getSmallestParticleMass(), ParticleHandler::getVolume(), MercuryBase::hGridInfo(), initialiseMPI(), LeesEdwardsBoundary::LeesEdwardsBoundary(), DPMBase::mpiIsInCommunicationZone(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::prepareNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), printError(), printFatalError(), printInfo(), DPMBase::printTime(), printWarn(), RNG::randomise(), ParticleHandler::removeObject(), Domain::sendAndReceiveCount(), Domain::sendAndReceiveMPIData(), DPMBase::synchroniseParticle(), TimeDependentPeriodicBoundary::TimeDependentPeriodicBoundary(), PeriodicBoundaryHandler::updateParticleStatus(), Domain::updateStatus(), Domain::updateVelocity(), and InteractionHandler::write().
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type MPIContainer::receive (T & t, int from, int tag) [inline]

Asynchronously receive a scalar from some other processor.

Parameters
    [in,out]  t     the data
    [in]      from  the processor that sends the information
    [in]      tag   an identifier for this specific receive request. This must be unique among all receive requests between the previous synchronisation step and the next one. Exactly one send request must also provide this tag, and it must be issued on the process specified by the 'from' parameter.
References FATAL, logger, and processorID_.
Referenced by ParticleHandler::addGhostObject(), PeriodicBoundaryHandler::communicateNumberOfNewParticlesAndInteractions(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), Domain::sendAndReceiveCount(), and Domain::sendAndReceiveMPIData().
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type MPIContainer::receive (T * t, int count, int from, int tag) [inline]
References FATAL, logger, processorID_, and WARN.
template<typename T >
void MPIContainer::receive (T * t, MercuryMPIType type, int count, int from, int tag) [inline]

Asynchronously receive a list of MercuryMPIType objects from some other processor.

Parameters
    [in,out]  t      the data, a list of MercuryMPIType objects
    [in]      type   the MPIType that is being received, for instance an MPIParticle
    [in]      count  the number of objects that are being sent
    [in]      from   the processor that sends the information
    [in]      tag    a unique identifier that corresponds with a send command by the sending processor
References FATAL, logger, processorID_, and WARN.
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type MPIContainer::send (T & t, int to, int tag) [inline]

Asynchronously send a scalar to some other processor.

Parameters
    [in]  t    the data
    [in]  to   the processor to receive the information
    [in]  tag  an identifier for this specific send request. This must be unique among all send requests between the previous synchronisation step and the next one. Exactly one receive request must also provide this tag, and it must be issued on the process specified by the 'to' parameter.
References FATAL, logger, and processorID_.
Referenced by ParticleHandler::addGhostObject(), PeriodicBoundaryHandler::communicateNumberOfNewParticlesAndInteractions(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), Domain::sendAndReceiveCount(), and Domain::sendAndReceiveMPIData().
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type MPIContainer::send (T * t, int count, int to, int tag) [inline]
References FATAL, logger, processorID_, and WARN.
template<typename T >
void MPIContainer::send (T * t, MercuryMPIType type, int count, int to, int tag) [inline]

Asynchronously send a list of MercuryMPIType objects to some other processor.

Parameters
    [in,out]  t      the data, a list of MercuryMPIType objects
    [in]      type   the MPIType that is being sent, for instance an MPIParticle
    [in]      count  the number of objects that are being sent
    [in]      to     the processor that the data is sent to
    [in]      tag    a unique identifier that corresponds with a receive command by the receiving processor
References FATAL, logger, processorID_, and WARN.
void MPIContainer::sync () [inline]

Process all pending asynchronous communication requests before continuing.

Each processor can commit a send and receive request at any time when it is finished with computing. At some point these requests have to be resolved in order to keep order in the simulation. After all requests are resolved, a barrier ensures that all processors wait until every request is resolved.
Referenced by ParticleHandler::addGhostObject(), Domain::addNewParticles(), Domain::addParticle(), DPMBase::decompose(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::prepareNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), Domain::updateStatus(), and Domain::updateVelocity().
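A minimal sketch of the asynchronous pattern, assuming at least two processors; the ranks, tags and function name are illustrative. Requests are posted with send() and receive() and resolved with sync():

    #include <MpiContainer.h>

    // Sketch: post send and receive requests, then resolve them all with sync().
    // Each tag is used by exactly one send and one receive between two
    // synchronisation points.
    void swapValues(double& mine, double& theirs)
    {
        MPIContainer& communicator = MPIContainer::Instance();
        const int tagToOne = 1;   // processor 0 -> processor 1
        const int tagToZero = 2;  // processor 1 -> processor 0
        if (communicator.getProcessorID() == 0)
        {
            communicator.send(mine, /*to*/ 1, tagToOne);
            communicator.receive(theirs, /*from*/ 1, tagToZero);
        }
        else if (communicator.getProcessorID() == 1)
        {
            communicator.send(mine, /*to*/ 0, tagToZero);
            communicator.receive(theirs, /*from*/ 0, tagToOne);
        }
        // The receive buffers may only be read after all requests are resolved.
        communicator.sync();
    }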
int MPIContainer::numberOfProcessors_ [private]

The total number of processors in the communicator.
Referenced by getNumberOfProcessors(), and MPIContainer().
int MPIContainer::processorID_ [private]

The ID of the processor this class is running on.
Referenced by directReceive(), directSend(), getProcessorID(), MPIContainer(), receive(), and send().