An Arbor simulation requires a recipe, a (hardware) context, and a domain decomposition. The recipe contains the neuroscientific model, the hardware context describes the computational resources the simulation will execute on, and the domain decomposition describes how Arbor will use that hardware. Since the context and the domain decomposition may seem closely related at first, it is instructive to see how a recipe is used by Arbor:
[Figure: a recipe (containing ion channels and connection sites) is partitioned into two cell groups; group-0 is simulated on the hardware "1 GPU", group-1 on the hardware "12 threads".]
A domain decomposition describes the distribution of the model over the available computational resources. The description partitions the cells in the model as follows:
- group the cells into cell groups of the same kind of cell;
- assign each cell group to either a CPU core or GPU on a specific MPI rank.
The number of cells in each cell group depends on several factors, including the type of the cell and whether the cell group will run on a CPU core or on the GPU. The domain decomposition is solely responsible for describing the distribution of cells across cell groups and domains.
The domain decomposition can be built manually by the modeler, or an automatic load balancer can be used.
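The two partitioning steps above can be illustrated with a small sketch. This is a toy in plain Python, not Arbor's actual load balancer: it groups cells by kind and then assigns each group round-robin to a domain (e.g. one domain per MPI rank).

```python
# Toy illustration of the partitioning steps (not Arbor's implementation):
# 1. group cells of the same kind into cell groups;
# 2. assign each cell group to a domain.

def toy_decomposition(cell_kinds, num_domains):
    """cell_kinds: list mapping gid -> kind; returns a list of domains,
    each a list of (kind, gids) cell groups."""
    # Step 1: group cells of the same kind into cell groups.
    groups = {}
    for gid, kind in enumerate(cell_kinds):
        groups.setdefault(kind, []).append(gid)
    # Step 2: assign each cell group to a domain (round-robin here;
    # a real load balancer would weigh cell type and hardware).
    domains = [[] for _ in range(num_domains)]
    for i, (kind, gids) in enumerate(sorted(groups.items())):
        domains[i % num_domains].append((kind, gids))
    return domains

domains = toy_decomposition(["cable", "spike", "cable", "cable"], 2)
```

Note that a cell group is never split: all cells of one group land on the same domain.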
We define some terms as used in the context of connectivity:

cell group
  A list of cells of the same kind that share some information. A cell group must not be split across domains.

domain
  Produced by a load balancer: a list of all cell groups located on the same hardware. A communicator deals with the full set of cells of one domain.

domain decomposition
  A list of domains, distributed across MPI processes.
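The relationship between these terms can be modelled with a small sketch. These are illustrative plain-Python dataclasses, not Arbor's actual types:

```python
from dataclasses import dataclass

# Toy models of the terms above; not Arbor's data structures.

@dataclass
class CellGroup:
    kind: str    # all cells in a group are of the same kind
    gids: list   # global ids of the member cells

@dataclass
class Domain:
    rank: int    # the MPI rank this domain lives on
    groups: list # cell groups placed on this rank's hardware

    def cells(self):
        # A communicator deals with the full set of cells of one domain.
        return [g for grp in self.groups for g in grp.gids]

# A domain decomposition is then simply a list of domains,
# distributed across MPI processes.
decomposition = [
    Domain(rank=0, groups=[CellGroup("cable", [0, 2, 3])]),
    Domain(rank=1, groups=[CellGroup("spike", [1])]),
]
```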
A load balancer generates the domain decomposition from the model recipe and a description of the available computational resources, given by an execution context. Currently Arbor provides one automatic load balancer and a method for manually listing the cells assigned to each MPI task; more approaches may be added over time.
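The manual alternative amounts to the modeler writing out, per MPI rank, which cells go where. A minimal sketch of that idea (plain Python, not Arbor's API; `manual_decomposition` and its validity check are hypothetical):

```python
# Toy sketch of a manually specified decomposition: the modeler lists
# the cell gids placed on each MPI rank, and we verify that the listing
# covers every cell exactly once.

def manual_decomposition(num_cells, gids_per_rank):
    placed = [g for rank_gids in gids_per_rank for g in rank_gids]
    if sorted(placed) != list(range(num_cells)):
        raise ValueError("every cell must be assigned to exactly one rank")
    return gids_per_rank

# Rank 0 gets cells 0-2, rank 1 gets cells 3-5.
decomp = manual_decomposition(6, [[0, 1, 2], [3, 4, 5]])
```

An automatic load balancer performs the same assignment itself, using the recipe and the execution context to decide where each cell group should run.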