Spike Timing-dependent Plasticity Curve#

This tutorial uses a single cell and follows this Brian2 example. We aim to reproduce a spike timing-dependent plasticity curve which arises from the stochastic calcium-based synapse dynamics described in Graupner and Brunel [1].

The synapse is modeled as synaptic efficacy variable, \(\rho\), which is a function of the calcium concentration, \(c(t)\). There are two stable states at \(\rho=0\) (DOWN) and \(\rho=1\) (UP), while \(\rho=\rho^\ast=0.5\) represents a third unstable state between the two stable states. The calcium concentration dynamics are represented by a simplified model which uses a linear sum of individual calcium transients elicited by trains of pre- and postsynaptic action potentials:

\[\begin{split}\begin{align*} c^\prime (t) &= - \frac{1}{\tau_{Ca}}c + C_{pre} \sum_i \delta(t-t_i-D) + C_{post} \sum_j \delta(t-t_j), \\ \rho^\prime(t) &= - \frac{1}{\tau}\left [ \rho (1 - \rho) (\rho^\ast - \rho) -\gamma_p (1-\rho) H\left(c - \theta_p \right) + \gamma_d \rho H\left(c - \theta_d \right) \right ] + N, \\ N &= \frac{\sigma}{\sqrt{\tau}} \sqrt{H\left( c - \theta_p \right) + H\left( c - \theta_d \right)} W. \end{align*}\end{split}\]

Here, the sums over \(i\) and \(j\) represent the contributions from all presynaptic and postsynaptic spikes, respectively, with \(C_{pre}\) and \(C_{post}\) denoting the corresponding jumps in calcium concentration. The jump caused by a presynaptic spike is delayed by \(D\). The calcium decay time is assumed to be much faster than the synaptic time scale, \(\tau_{Ca} \ll \tau\). The subscripts \(p\) and \(d\) denote potentiation (increase in synaptic efficacy) and depression (decrease in synaptic efficacy), respectively, with \(\gamma\) and \(\theta\) being the corresponding rates and thresholds. \(H(x)\) is the right-continuous Heaviside step function (\(H(0)=1\)).
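
To build some intuition for these dynamics before turning to Arbor, the following standalone sketch (not part of the tutorial code) integrates the two equations for a single pre/post spike pair with a simple Euler–Maruyama step; all parameter values are placeholders chosen purely for illustration.

import numpy as np

# Placeholder parameters for illustration only (not the values used by Arbor's mechanism)
tau_ca, tau = 20.0, 150e3                # calcium and efficacy time constants (ms)
C_pre, C_post, D_delay = 1.0, 2.0, 13.7  # calcium jumps and pre-spike delay (ms)
theta_p, theta_d = 1.3, 1.0              # potentiation and depression thresholds
gamma_p, gamma_d, sigma = 725.0, 330.0, 2.8
rho_star, dt = 0.5, 0.1                  # unstable fixed point and time step (ms)

rng = np.random.default_rng(0)
t_pre, t_post = 50.0, 60.0               # one spike pair, time lag +10 ms
c, rho = 0.0, 0.0

for step in range(int(500.0 / dt)):
    t = step * dt
    # calcium: exponential decay plus jumps at t_pre + D and t_post
    c += -c / tau_ca * dt
    c += C_pre * (abs(t - (t_pre + D_delay)) < dt / 2)
    c += C_post * (abs(t - t_post) < dt / 2)
    # efficacy: bistable drift gated by the calcium thresholds, plus noise
    drift = (-rho * (1 - rho) * (rho_star - rho)
             + gamma_p * (1 - rho) * (c >= theta_p)
             - gamma_d * rho * (c >= theta_d)) / tau
    noise = sigma / np.sqrt(tau) * np.sqrt(float(c >= theta_p) + float(c >= theta_d))
    rho += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()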

Because the mechanism is stochastic (\(W\) represents a white noise process), our simulation needs to

  • use a stochastic synapse mechanism,

  • accumulate statistics over a large enough ensemble of initial states.

Implementation of a Stochastic Mechanism#

A stochastic mechanism described by a stochastic differential equation (SDE) as above is straightforward to implement in Arbor’s NMODL dialect. Let’s examine the mechanism code in the Arbor repository.

The main difference compared to a deterministic (ODE) description is the additional WHITE_NOISE block,

WHITE_NOISE {
    W
}

which declares the white noise process \(W\), and the specification of the stochastic solver method,

BREAKPOINT {
    SOLVE state METHOD stochastic
}

This is sufficient to inform Arbor about the stochasticity of the mechanism. For more information about Arbor’s strategy to solve SDEs, please consult this overview, while details about the numerical solver can be found in the developers guide.
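
Schematically, the stochastic solver advances \(\rho\) with an Euler–Maruyama-type update, in which the white noise enters as a normally distributed increment per time step (the precise scheme is described in the developers guide):

\[\rho_{n+1} = \rho_n + f(\rho_n, c_n)\,\Delta t + \frac{\sigma}{\sqrt{\tau}} \sqrt{H\left(c_n - \theta_p\right) + H\left(c_n - \theta_d\right)}\, \sqrt{\Delta t}\,\xi_n, \qquad \xi_n \sim \mathcal{N}(0,1),\]

where \(f\) denotes the deterministic drift term of the \(\rho\) equation above.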

The Model#

In this tutorial, the neuron model itself is simple: it has only passive (leaky) membrane dynamics, and it receives regular synaptic current input in one arbitrarily chosen control volume (CV) to trigger regular spikes.

First we import some required modules:

import arbor as A
from arbor import units as U
import numpy as np  # You may have to pip install these.

Next we set the simulation parameters in order to reproduce the plasticity curve:

# Spike response delay
D = 13.7 * U.ms
# Spike frequency
f = 1.0 * U.Hz
# Number of spike pairs
num_spikes = 30
# time lag resolution
stdp_dt_step = 20.0
# Maximum time lag
stdp_max_dt = 100.0
# Ensemble size per initial value
ensemble_per_rho_0 = 100
# Simulation time step
dt = 0.1 * U.ms
# List of initial values for 2 states; we need a synapse for each sample path
rho_0 = [0] * ensemble_per_rho_0 + [1] * ensemble_per_rho_0
# Time lags between spike pairs (post-pre: < 0, pre-post: > 0)
stdp_dt = np.arange(-stdp_max_dt, stdp_max_dt + stdp_dt_step, stdp_dt_step) * U.ms
# Time between stimuli
T = 1000.0 / f.value * U.ms
# Simulation duration
tfinal = num_spikes * T

The time lag resolution, together with the maximum time lag, determines the number of cases we want to simulate. For each such case, however, we need to run many simulations in order to get a statistically meaningful result. The number of simulations per case is given by the ensemble size and the initial conditions. In our case, we have two initial states, \(\rho(0)=0\) and \(\rho(0)=1\), and for each initial state we want to run \(100\) simulations. We note that the stochastic synapse mechanism does not alter the state of the cell, but couples one-way only by reacting to spikes. Therefore, we can simply place \(100\) synapses per initial state onto the cell without worrying about interference. Moreover, this has the benefit of exposing parallelism that Arbor can take advantage of.
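
Continuing with the variables defined above, a quick sanity check (illustration only, not part of the example) shows the size of the resulting ensemble:

# 11 time lags from -100 ms to +100 ms in steps of 20 ms
num_cases = len(stdp_dt)
# 200 sample paths per time lag: 100 starting in the DOWN state, 100 in the UP state
num_synapses = len(rho_0)
print(num_cases, num_synapses, num_cases * num_synapses)  # 11 200 2200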

Thus, we create a simple cell and mark the midpoint at which we place our mechanisms:

# (2) Make the cell
# Create a morphology with a single (cylindrical) segment of length=diameter=6 μm
tree = A.segment_tree()
tree.append(A.mnpos, (-3, 0, 0, 3), (3, 0, 0, 3), tag=1)

# Define the soma and its midpoint
labels = A.label_dict({"soma": "(tag 1)", "midpoint": "(location 0 0.5)"})

# Create and set up a decor object
decor = (
    A.decor()
    .set_property(Vm=-40 * U.mV)
    .paint('"soma"', A.density("pas"))
    .place('"midpoint"', A.synapse("expsyn"), "driving_synapse")
    .place('"midpoint"', A.threshold_detector(-10 * U.mV), "detector")
)
for ix, rho in enumerate(rho_0):
    decor.place(
        '"midpoint"',
        A.synapse("calcium_based_synapse", rho_0=rho),
        f"calcium_synapse_{ix}",
    )

# Create cell
cell = A.cable_cell(tree, decor, labels)

Since our stochastic mechanism calcium_based_synapse is not within Arbor’s default set of mechanisms, we need to extend the mechanism catalogue within the cable cell properties. This happens in the constructor of our recipe, so the cell and the extended cell properties can then be used together to create a simple recipe:

# (3) Recipe
class stdp_recipe(A.recipe):
    def __init__(self, cell, time_lags):
        A.recipe.__init__(self)
        self.the_cell = cell
        # create extended catalogue including stochastic mechanisms
        self.the_props = A.neuron_cable_properties()
        self.the_props.catalogue.extend(A.stochastic_catalogue(), "")
        self.time_lags = time_lags
        self.num = len(time_lags)

    def num_cells(self):
        return self.num

    def cell_kind(self, _):
        return A.cell_kind.cable

    def cell_description(self, _):
        return self.the_cell

    def global_properties(self, _):
        return self.the_props

    def probes(self, _):
        return [A.cable_probe_point_state_cell("calcium_based_synapse", "rho", "rho")]

Note that the recipe takes the cell and a list of time lags as constructor arguments, which are used to create the cells of the ensemble. Furthermore, the recipe returns a list of probes which contains only one item: a query for our mechanism’s state variable \(\rho\). Since we placed a number of these mechanisms on our cell, we will receive a vector of values when probing.

Next, we set up the event generators, which complete the recipe. Each cell in the ensemble is assigned its own time lag:

    def event_generators(self, gid):
        # Time difference between post and pre spike including delay
        d = D - self.time_lags[gid]
        # Whichever spike train comes later is shifted by the (positive) difference
        t0_post = max(-d.value, 0) * U.ms
        t0_pre = max(d.value, 0) * U.ms
        sched_post = A.regular_schedule(t0_post, T, tfinal)
        sched_pre = A.regular_schedule(t0_pre, T, tfinal)

        # Create strong enough driving stimulus
        generators = [A.event_generator("driving_synapse", 1.0, sched_post)]

        # Stimulus for calcium synapses
        for ix, _ in enumerate(rho_0):
            # Zero weight -> just modify synaptic weight via stdp
            generators.append(
                A.event_generator(f"calcium_synapse_{ix}", 0.0, sched_pre)
            )
        return generators

The pre- and postsynaptic events are generated as regular schedules, where the presynaptic event is shifted in time by \(D - \text{time lag}\) with respect to the postsynaptic event, which in turn is generated regularly with the frequency \(f\). The postsynaptic spikes are driven by the deterministic synapse with weight 1.0, while the presynaptic events are delivered to the stochastic calcium synapses. The postsynaptic weight can be set arbitrarily, as long as it is large enough to trigger the spikes.
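
As a concrete illustration (not part of the example code), consider a time lag of \(+20\) ms, i.e. the presynaptic spike leads the postsynaptic one by 20 ms:

# Worked example for one ensemble member (illustration only)
lag = 20.0 * U.ms
d = D - lag                        # 13.7 ms - 20 ms = -6.3 ms
t0_post = max(-d.value, 0) * U.ms  # postsynaptic train starts at 6.3 ms
t0_pre = max(d.value, 0) * U.ms    # presynaptic train starts at 0 ms

The presynaptic schedule thus already incorporates the delay \(D\): for this lag, the presynaptic calcium transient precedes the postsynaptic stimulus by 6.3 ms.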


Thus, we have all the ingredients to create the recipe and the simulation. We pass a seed to the simulation for the random number generation of the stochastic mechanisms, register a probe to read out the plasticity curve, and run the simulation. Since we are interested in the long-term average value, we only query the probe at the end of the simulation:

# (4) run simulation for all lags
# Create recipe
rec = stdp_recipe(cell, stdp_dt)

# Create simulation
print(A.config())
sim = A.simulation(rec, seed=42)

# Register probe to read out stdp curve
handles = [
    sim.sample((gid, "rho"), A.explicit_schedule([tfinal - dt]))
    for gid in range(len(stdp_dt))
]

# Run simulation
sim.run(tfinal, dt)


After the simulation is finished, we process the sampled data for all configured time lags and calculate the change in synaptic strength by evaluating the transition probabilities from the initial DOWN state to the final UP state and vice versa. For comparison, we also add data from a reference simulation [1]; the reference array is shown truncated here, the full list of values is part of the complete example:

# (5) Process sampled data
# Add reference
ref = np.array(
    [
        [-95, 0.981715028725338],
        [-90, 0.9932274542583821],
        [-85, 0.982392230227282],
        [-80, 0.9620761851689686],
        [-75, 0.9688482001884063],
        [-70, 0.9512409611378684],
        [-65, 0.940405737106768],
        [-60, 0.9329565205853866],
        [-55, 0.9146720800329048],
        [-50, 0.8896156244609853],
        [-45, 0.9024824529979171],
        [-40, 0.8252814817763271],
        [-35, 0.8171550637530018],
        [-30, 0.7656877496052755],
        [-25, 0.7176064429672677],
        [-20, 0.7582385330838939],
        [-15, 0.7981934216985763],
        [-10, 0.8835208109434913],
        [-5, 0.9390513341028807],
        [10, 1.2255075694250952],
        # ... remaining reference values omitted here; see the full example
    ]
)
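
How the sampled \(\rho\) values can be reduced to such a curve is sketched below. This is an illustrative outline only, not the code of the full example: it assumes the handles list registered above, that the probe columns follow the placement order of the synapses, and it introduces an assumed ratio b of UP to DOWN synaptic strength (with equal ensembles, the fraction of synapses initially in the DOWN state is 0.5).

# Sketch: reduce the sampled rho values to a relative change in synaptic strength.
beta = 0.5   # fraction of synapses starting in the DOWN state (100 of 200)
b = 5.0      # assumed ratio of UP to DOWN synaptic strength (illustrative value)

change = []
for handle in handles:
    data, meta = sim.samples(handle)[0]    # one sample row: time followed by all rho values
    rho_end = data[0, 1:]
    down = rho_end[:ensemble_per_rho_0]    # paths initialized with rho_0 = 0
    up = rho_end[ensemble_per_rho_0:]      # paths initialized with rho_0 = 1
    p_up = np.mean(down > 0.5)             # transition probability DOWN -> UP
    p_down = np.mean(up < 0.5)             # transition probability UP -> DOWN
    # Ratio of mean synaptic strength after vs. before the protocol
    before = beta + (1 - beta) * b
    after = beta * (1 - p_up) + (1 - beta) * p_down + b * (beta * p_up + (1 - beta) * (1 - p_down))
    change.append(after / before)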

The collected results can then be plotted:
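
For instance, with matplotlib (a minimal sketch, assuming the change list from the processing sketch above; the output file name is hypothetical):

import matplotlib.pyplot as plt

# Time lags in ms, same values as stdp_dt but as plain floats
lags = np.arange(-stdp_max_dt, stdp_max_dt + stdp_dt_step, stdp_dt_step)

fig, ax = plt.subplots()
ax.axhline(1.0, color="grey", linestyle="--", linewidth=1)
ax.plot(ref[:, 0], ref[:, 1], label="reference")
ax.plot(lags, change, "o-", label="simulation")
ax.set_xlabel("time lag (ms)")
ax.set_ylabel("change in synaptic strength")
ax.legend()
fig.savefig("calcium_stdp_sketch.svg")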

[Figure: calcium_stdp.svg — spike timing-dependent plasticity curve]

Comparison of this simulation with the reference simulation [1], for a simulation duration of 60 spikes at 1 Hz, an ensemble size of 2000 per initial state and a time step dt=0.01 ms. The shaded region indicates the 95% confidence interval.

The full code#

You can find the full code of the example at python/example/calcium_stdp.py.

References#