Running simulations in parallel

MercuryDPM can run simulations in parallel using MPI. The algorithm uses a simple domain decomposition, splitting the domain into \(n_x \times n_y \times n_z\) subdomains and running each subdomain on a separate processor.

To run simulations in parallel, you need to define all particles, walls, and boundary conditions in setupInitialConditions(), and define the domain size in main(). Generally, you should structure your driver code as follows:

#include "Mercury3D.h"
class Demo : public Mercury3D {
void setupInitialConditions() override {
//define walls, boundary conditions, particle positions here
}
};
int main() {
Demo problem;
//define contact law, time step, final time, domain size here
problem.solve();
}
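To make this concrete, here is a minimal sketch of a complete driver. It is only a sketch: it assumes the LinearViscoelasticSpecies contact law, a unit-cube domain, and a single particle, and all names and parameter values (the class name Demo, the stiffness, the time step, and so on) are placeholders you should adapt to your own problem.

#include "Mercury3D.h"
#include "Species/LinearViscoelasticSpecies.h"

class Demo : public Mercury3D {
public:
    void setupInitialConditions() override {
        // Insert one particle; a real setup also defines walls and
        // boundary conditions here.
        SphericalParticle particle;
        particle.setSpecies(speciesHandler.getObject(0));
        particle.setRadius(0.05);
        particle.setPosition(Vec3D(0.5, 0.5, 0.5));
        particleHandler.copyAndAddObject(particle);
    }
};

int main() {
    Demo problem;
    problem.setName("Demo");
    // contact law
    LinearViscoelasticSpecies species;
    species.setDensity(2000);
    species.setStiffness(1e4);
    problem.speciesHandler.copyAndAddObject(species);
    // time step and final time
    problem.setTimeStep(1e-5);
    problem.setTimeMax(1.0);
    // domain size
    problem.setMin(Vec3D(0, 0, 0));
    problem.setMax(Vec3D(1, 1, 1));
    problem.solve();
    return 0;
}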

To run your simulation in parallel, you first need to compile the code with MPI enabled. Using CMake, turn the flag MercuryDPM_USE_MPI to ON, either in cmake-gui or on the command line:

cd MercuryBuild
cmake . -DMERCURYDPM_USE_MPI=ON
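After reconfiguring, recompile the code before running. For example, from the build directory (assuming make is your build tool and your driver's target carries its file name; MyDriverCode is a placeholder here):

make MyDriverCode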

You also need to tell your program which decomposition to use. To split your domain into \(n_x \times n_y \times n_z\) subdomains, call setNumberOfDomains in your main function, before solve():

//Set the number of domains for parallel decomposition
problem.setNumberOfDomains(Vec3D(nx,ny,nz));
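For example, a hypothetical main function that splits the domain into \(2 \times 2 \times 1 = 4\) subdomains would read:

int main() {
    Demo problem;
    // define contact law, time step, final time, domain size here
    // decompose the domain into 2 x 2 x 1 = 4 subdomains
    problem.setNumberOfDomains(Vec3D(2, 2, 1));
    problem.solve();
}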

Now compile your code and run it with mpirun. Make sure you use the number of processors required by the domain decomposition, \(n = n_x \cdot n_y \cdot n_z\):

mpirun -np n MyDriverCode
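For example, the \(2 \times 2 \times 1\) decomposition above requires \(n = 4\) processors:

mpirun -np 4 MyDriverCode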

For an example of an MPI-ready code, see Drivers/ParallelDrum/testDrum.cpp.