
Stabilization of Memory States by Stochastic Facilitating Synapses

Abstract

Bistability within a small neural circuit can arise through an appropriate strength of excitatory recurrent feedback. The stability of a state of neural activity, measured by the mean dwelling time before a noise-induced transition to another state, depends on the neural firing-rate curves, the net strength of excitatory feedback, the statistics of spike times, and increases exponentially with the number of equivalent neurons in the circuit. Here, we show that such stability is greatly enhanced by synaptic facilitation and reduced by synaptic depression. We take into account the alteration in times of synaptic vesicle release, by calculating distributions of inter-release intervals of a synapse, which differ from the distribution of its incoming interspike intervals when the synapse is dynamic. In particular, release intervals produced by a Poisson spike train have a coefficient of variation greater than one when synapses are probabilistic and facilitating, whereas the coefficient of variation is less than one when synapses are depressing. However, in spite of the increased variability in postsynaptic input produced by facilitating synapses, their dominant effect is reduced synaptic efficacy at low input rates compared to high rates, which increases the curvature of neural input-output functions, leading to wider regions of bistability in parameter space and enhanced lifetimes of memory states. Our results are based on analytic methods with approximate formulae and bolstered by simulations of both Poisson processes and of circuits of noisy spiking model neurons.

1 Introduction

Circuits of reciprocally connected neurons have long been considered as a basis for the maintenance of persistent activity [1]. Such persistent neuronal firing that continues for many seconds after a transient input can represent a short-term memory of prior stimuli [2]. Indeed, Hebb’s famous postulate [3] that causally correlated firing of connected neurons could lead to a strengthening of the connection, was based on the suggestion that the correlated firing would be maintained in a recurrently connected cell assembly beyond the time of a transient stimulus [3]. Since then, analytic and computational models have demonstrated the ability of such recurrent networks to produce multiple discrete attractor states [4], as in Hopfield networks [5, 6], or to be capable of integration over time via a marginally stable network, often termed a line attractor [7, 8]. Much of the work on these systems has assumed either static synapses, or considered changes in synaptic strength via long-term plasticity occurring on a much slower timescale than the dynamics of neuronal responses. Here, we add some new results pertaining to the less well-studied effects of short-term plasticity—changes in synaptic strength that arise on a timescale of seconds, the same timescale as that of persistent activity—within recurrent discrete attractor networks.

The two forms of short-term synaptic plasticity—facilitation and depression—affect all synapses of the presynaptic cell according to its train of action potentials. Synaptic facilitation refers to a temporary enhancement of synaptic efficacy in the few hundreds of milliseconds following each spike, effectively strengthening connections to postsynaptic cells as presynaptic firing rate increases. Synaptic depression is the opposite effect—reduced synaptic efficacy in the few hundreds of milliseconds following a presynaptic spike, effectively weakening connection strengths as presynaptic firing rate increases. The dynamics of these processes (Table 1) also impacts the variability in postsynaptic conductance, in particular when synaptic transmission is treated as a stochastic event. The variability affects information processing via the signal-to-noise ratio [9–11] and also determines the stability, or robustness, of discrete memory states [12, 13].

Table 1 Stochastic synapse model and parameters. (S) for single-synapse model; (M) for memory model calculations, where different

When analyzing the stability of discrete states, we focus on the mean value of and fluctuations within the postsynaptic feedback conductance, since that is the variable with a slow enough time constant to maintain persistent activity in standard models of network-produced memory states [14, 15]. In our formalism, we rely on fluctuations in this NMDA receptor-mediated feedback conductance to be on a slower timescale (100 ms) than the membrane time constant, which is short (<10 ms), in part because each cell receives a barrage of balanced excitatory and inhibitory inputs. When synapses are dynamic, both the mean postsynaptic conductance and its fluctuations are altered from the case of static synapses.

Here, we show how a presynaptic Poisson spike train, which produces an exponential distribution of interspike intervals (ISIs), produces a distribution of inter-release intervals (IRIs) that is not exponential if synapses are either facilitating or depressing. We then consider how the nonexponential distribution of IRIs affects both the mean and standard deviation of the postsynaptic conductance differently from the exponential, Poisson, distribution of IRIs. These results affect the calculation of stability of memory states, yielding differences in the parameter ranges where bistability exists and producing large changes in the spontaneous transition times between states, which limit their stability.

A two-state memory system is limited by the lifetime of the less stable state [16]. For a given system, one can typically vary any parameter so as to enhance the lifetime of one state while reducing the lifetime of the other state. If we define the system’s stability as the lifetime of the less stable state, then the optimal stability of a system arises when the lifetimes of the two states are equal. In this paper, for a given system, defined by the neural firing-rate curve and type of synapse, we parametrically scale the total feedback connection strength to determine the system’s optimal stability. In so doing, we find that optimal stability of bistable neural circuits is enhanced by synaptic facilitation.

2 Statistics of Synaptic Transmission Through Probabilistic Dynamic Synapses

In the following, we assume that synaptic facilitation and depression operate by modifying the release probability of presynaptic vesicles. Following vesicle release, neurotransmitter binds to receptors in the postsynaptic terminal. The fraction of receptors bound at any one time determines the fraction of open channels, known as the gating variable, s, which is proportional to the conductance producing current flow into the postsynaptic cell. The dependence of s on presynaptic firing is affected by the dynamic properties of the intervening synapse. In particular, the distribution of intervals between vesicle release events is not identical to the interspike interval (ISI) distribution: facilitating synapses increase the likelihood of short inter-release intervals (IRIs) compared to long intervals, so increase the coefficient of variation (CV); whereas depressing synapses make short release intervals unlikely and produce a more regular sequence of release intervals, reducing the CV (Fig. 1). While the means of these distributions can be calculated by standard methods [17], it is valuable to know the full distribution, since changes in the CV of IRIs affect the variability of the postsynaptic conductance, and thus alter properties like signal-to-noise ratio and the stability of memory states to noise fluctuations.

Fig. 1

Dynamic synapses alter the distribution of inter-release intervals (IRIs) of one vesicle produced by a Poisson spike train. a1–a2 Histogram of IRIs for static synapses is exponential ($p_0 = 0.5$). b1–b2 Histogram of IRIs for a facilitating synapse is more sharply peaked than exponential, with a CV greater than one ($p_0 = 0.1$, $f_F = 0.5$, $\tau_F = 500$ ms). c1–c2 Histogram of IRIs for a probabilistic depressing synapse has a dip at low intervals, producing a CV less than one ($p_0 = 0.5$, $\tau_D = 250$ ms). a1, b1, c1 Presynaptic Poisson spike train of 2 Hz. a2, b2, c2 Presynaptic Poisson spike train of 50 Hz

2.1 Distribution of Release Times for a Poisson Spike Train Through Stochastic Depressing Synapses

The distribution of release times of a vesicle for depressing synapses with a single release site is simpler to calculate than that for facilitating synapses, because when considering synaptic depression alone, the probability of release from a single site simply depends on the time since last release of a vesicle from that site. Therefore, we will solve for depressing synapses before moving to the case of facilitating synapses, where the probability of release depends on the number of intervening spikes. The subsequent result for facilitating synapses will prove to be more biologically relevant, as synapses typically contain multiple releasable vesicles, so it is only in the case where the baseline release probability is low—in which case facilitation dominates—that failure of release is common enough to affect the distribution of release times. The case of probabilistic release in depressing synapses with multiple release sites is more complex, though the first two moments of the IRI probability distribution have been calculated by others [18].

Synaptic depression arises because of the time needed to recycle and replenish vesicles following release of neurotransmitter. Synaptic depression can be treated stochastically [19] by assuming vesicle recovery is a Poisson process, with the likelihood of a vesicle being release-ready, or “docked,” as $P(V{=}1) = P_D = 1 - e^{-T/\tau_D}$, where T is the time since the prior vesicle release. Thus, the distribution of inter-release intervals (IRIs) can be calculated by requiring that a vesicle be docked within the interval and then adding the time for a spike to appear after the vesicle is docked. We assume a docked vesicle has a release probability of $p_0$ and incoming spikes arrive as a Poisson process of rate r. Since the probability of docking between time $T_D$ and $T_D + \delta T_D$ is $\delta T_D\,(dP_D/dT)$ evaluated at $T_D$, or

$$P(T_D)\,\delta T_D = \delta T_D\,\frac{e^{-T_D/\tau_D}}{\tau_D}$$
(1)

and the probability density for the first spike after time $T_D$ to occur at time $T$ and cause release is $p_0 r\, e^{-p_0 r (T - T_D)}$, we have

$$P(T) = \int_0^T \frac{e^{-T_D/\tau_D}}{\tau_D}\, p_0 r\, e^{-p_0 r (T - T_D)}\, dT_D$$
(2)

so

$$P(T) = \frac{p_0 r}{p_0 r \tau_D - 1}\left(e^{-T/\tau_D} - e^{-p_0 r T}\right)$$
(3)

which leads to a mean IRI of

$$\langle T\rangle = \tau_D + \frac{1}{p_0 r}.$$
(4)

The reduction in probability of small IRIs is a simple example of the temporal filtering of information presented by others [18]. The addition of the extra probabilistic process of vesicle recovery, which underlies synaptic depression, causes IRIs to be more regular, as evidenced in Eqs. (2)–(3) by a reduced coefficient of variation (CV) of IRIs from the Poisson value of 1:

$$\mathrm{CV} = \frac{\sigma_T}{\langle T\rangle} = \frac{\sqrt{1 + (p_0 r \tau_D)^2}}{1 + p_0 r \tau_D},$$
(5)

which has a minimum value of $1/\sqrt{2}$ at $p_0 r \tau_D = 1$ and a maximum value of 1 as $r\tau_D \to 0$ or $r\tau_D \to \infty$. For example, in the curves shown in Figs. 1c1–1c2, CV = 0.82 at 2 Hz and 0.87 at 50 Hz.
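
As a cross-check of Eqs. (3)–(5), the depressing-synapse release process is simple to simulate directly. The sketch below (Python/NumPy; a minimal illustration rather than the code used for the paper's figures, with the parameter values quoted for Fig. 1c) draws Poisson spikes, tracks stochastic vesicle recovery, and compares the empirical CV of IRIs with Eq. (5).

```python
import numpy as np

rng = np.random.default_rng(0)

def depressing_iri_cv(rate, p0, tau_d, t_max=20000.0):
    """CV of inter-release intervals for a single-release-site depressing
    synapse: Poisson spikes, release probability p0 when docked, and
    exponentially distributed (Poisson-process) vesicle recovery."""
    t = 0.0
    t_dock = 0.0          # time at which the vesicle is docked and release-ready again
    releases = []
    while t < t_max:
        t += rng.exponential(1.0 / rate)            # next presynaptic Poisson spike
        if t >= t_dock and rng.random() < p0:       # docked vesicle releases with probability p0
            releases.append(t)
            t_dock = t + rng.exponential(tau_d)     # recovery time drawn with mean tau_d
    iris = np.diff(releases)
    return iris.std() / iris.mean()

# Parameters quoted for Fig. 1c (times in seconds); compare with Eq. (5)
p0, tau_d = 0.5, 0.25
for r in (2.0, 50.0):
    cv_theory = np.sqrt(1.0 + (p0 * r * tau_d) ** 2) / (1.0 + p0 * r * tau_d)
    print(r, depressing_iri_cv(r, p0, tau_d), cv_theory)   # ~0.82 at 2 Hz, ~0.87 at 50 Hz
```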

2.2 Distribution of Release Times for a Poisson Spike Train Through Stochastic Facilitating Synapses

For facilitating synapses, we take the following form for release probability, $P_{\mathrm{rel}}(t) = p_0 F(t)$, where between presynaptic spikes:

$$\tau_F \frac{dF}{dt} = (1 - F).$$
(6)

Each spike produces an increase in F from $F^-$ (which determines the release probability of that spike) to $F^+$ (which is the new release probability for an immediate, subsequent spike) such that

$$F^+ = F^- + f_F\left(\frac{1}{p_0} - F^-\right) = F^- + f_F\left(F_{\max} - F^-\right),$$
(7)

where $f_F$ is the facilitation factor, taking a value between 0 and 1, indicating the fractional increase from the pre-spike release probability toward a saturating release probability of $p_0 F = 1$ (i.e., $F_{\max} = 1/p_0$).

To calculate the distribution of inter-release intervals (IRIs) we need to calculate the probability of release as a function of time, following a prior release. Although presynaptic spikes arrive with constant probability per unit time in a Poisson process, vesicle release occurs more often when the facilitation variable is high. Thus, immediately after release, the likelihood of release is greater than on average, because the facilitation variable takes some time (on the order of $\tau_F$) to return to a baseline value. Furthermore, when calculating the IRI distribution, we must be aware that $F_R^\infty(r)$, which is the mean value approached by F conditioned on no intervening release event, will be lower than the mean value, $\langle F\rangle$, since long IRIs are more associated with time windows of fewer intervening presynaptic spikes than chance.

To proceed, we first calculate $F_R^+(r)$, the mean of the facilitation variable immediately after vesicle release. To arrive at this quantity, we use the mean value of the facilitation variable averaged across all presynaptic spikes [17]:

$$\langle F\rangle = \frac{1 + r\tau_F f_F/p_0}{1 + f_F r\tau_F}$$
(8)

and the variance of this quantity [9]:

$$\sigma_F^2 = \frac{r\tau_F f_F^2\,(1/p_0 - 1)^2}{(1 + r\tau_F f_F)^2\left[2 + r\tau_F f_F(2 - f_F)\right]}.$$
(9)

Together, these can be used to calculate $F_R^-(r)$, which is the mean value of the facilitation variable just prior to firing when averaged across only those spikes that actually cause release, since release probability is proportional to $F^-$. The latter averaging produces a higher value than $\langle F\rangle$, since higher instances of $F^-$ are more likely to result in release, so weight the average more than lower values:

$$P\!\left(F_R^-\right) = \frac{F^- P(F^-)}{\int_0^\infty F^- P(F^-)\,dF^-} = \frac{F^- P(F^-)}{\langle F\rangle},$$
(10)

where $P(F^-)$ is the probability that F takes the value $F^-$ immediately prior to an incoming spike and the denominator normalizes the distribution. Hence,

$$F_R^-(r) = \int_0^\infty \frac{(F^-)^2 P(F^-)}{\langle F\rangle}\,dF^- = \langle F\rangle + \frac{\sigma_F^2}{\langle F\rangle}.$$
(11)

From this, the mean value of the facilitation variable immediately following vesicle release can be calculated as

$$F_R^+(r) = F_R^-(r)\,(1 - f_F) + \frac{f_F}{p_0} = \left(\langle F\rangle + \frac{\sigma_F^2}{\langle F\rangle}\right)(1 - f_F) + \frac{f_F}{p_0}.$$
(12)

The above formula is exact and was matched by simulated data at all values of r simulated (data not shown).
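
For reference, Eqs. (8)–(12) are simple closed forms; the short sketch below (Python; the function name and the example parameter values are illustrative choices, not taken from Table 1) collects them in one place.

```python
def facilitation_moments(rate, p0, f_f, tau_f):
    """Mean facilitation across spikes (Eq. (8)), its variance (Eq. (9)),
    and the release-conditioned values F_R^- and F_R^+ (Eqs. (11)-(12))."""
    y = rate * tau_f * f_f
    mean_f = (1.0 + y / p0) / (1.0 + y)
    var_f = y * f_f * (1.0 / p0 - 1.0) ** 2 / ((1.0 + y) ** 2 * (2.0 + y * (2.0 - f_f)))
    f_r_minus = mean_f + var_f / mean_f          # release-weighted pre-spike mean, Eq. (11)
    f_r_plus = f_r_minus * (1.0 - f_f) + f_f / p0  # mean just after release, Eq. (12)
    return mean_f, var_f, f_r_minus, f_r_plus

print(facilitation_moments(50.0, 0.1, 0.5, 0.5))
```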

To estimate the steady-state value of F a long time from any prior release—a steady state that may never be reached if the product of firing rate and base release probability is much higher than $1/\tau_F$—we solve a self-consistency equation for this value, $F_R^\infty(r)$, and ignore fluctuations by assuming the release probability is $p_0 F_R^\infty(r)$ for each presynaptic spike. One can then calculate the probability of N spikes in a given interval, T, conditioned on the requirement that none of those spikes caused vesicle release, while the facilitation variable is at its mean steady-state value of $F_R^\infty(r)$. The result is:

$$P(N, T \mid \text{no release}) = \frac{P(N,T)\left(1 - p_0 F_R^\infty(r)\right)^N}{\sum_{N'=0}^{\infty} P(N',T)\left(1 - p_0 F_R^\infty(r)\right)^{N'}} = \frac{e^{-rT}(rT)^N\left(1 - p_0 F_R^\infty(r)\right)^N/N!}{\sum_{N'=0}^{\infty} e^{-rT}(rT)^{N'}\left(1 - p_0 F_R^\infty(r)\right)^{N'}/N'!} = e^{-rT\left(1 - p_0 F_R^\infty(r)\right)}\,\frac{\left[rT\left(1 - p_0 F_R^\infty(r)\right)\right]^N}{N!},$$
(13)

which is the result for a Poisson process of modified rate, $r\left(1 - p_0 F_R^\infty(r)\right)$. This allows us to self-consistently calculate $F_R^\infty(r)$ by using the result for the mean value of the facilitation variable given such a modified spike rate, such that

$$F_R^\infty(r) \approx \frac{1 + r\left(1 - p_0 F_R^\infty(r)\right)\tau_F f_F/p_0}{1 + r\left(1 - p_0 F_R^\infty(r)\right)\tau_F f_F},$$
(14)

which can be solved using the quadratic formula to give

$$F_R^\infty(r) \approx \frac{2r\tau_F f_F + 1 - \sqrt{\left(2r\tau_F f_F + 1\right)^2 - 4p_0 r\tau_F f_F\left(1 + r\tau_F f_F/p_0\right)}}{2p_0 r\tau_F f_F},$$
(15)

a value which is always below $\langle F\rangle$ and in close agreement with simulated data (not shown).
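
A quick numerical check of Eq. (15) (a Python sketch; the parameters are those of Fig. 1b) is to take the smaller root of the quadratic and confirm that it satisfies the self-consistency condition of Eq. (14):

```python
import numpy as np

def f_r_inf(rate, p0, f_f, tau_f):
    """Closed-form root of the self-consistency condition, Eq. (15); the
    smaller root is taken so that F -> 1 at low rates."""
    y = rate * tau_f * f_f
    disc = (2.0 * y + 1.0) ** 2 - 4.0 * p0 * y * (1.0 + y / p0)   # simplifies to 1 + 4*y*(1 - p0)
    return (2.0 * y + 1.0 - np.sqrt(disc)) / (2.0 * p0 * y)

# Verify that the closed form satisfies Eq. (14) for the Fig. 1b parameters
r, p0, f_f, tau_f = 50.0, 0.1, 0.5, 0.5
f = f_r_inf(r, p0, f_f, tau_f)
y_eff = r * (1.0 - p0 * f) * tau_f * f_f            # uses the no-release-modified spike rate
print(f, (1.0 + y_eff / p0) / (1.0 + y_eff))        # the two numbers should agree
```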

Finally, to fit the IRI distribution, we assumed exponential decay from $F_R^+(r)$ to $F_R^\infty(r)$ with a time constant such that the initial slope (when the probability of any intervening spikes is zero) matches that of an exponential decay to 1 with time constant $\tau_F$ (the initial rate of decrease of F in the absence of intervening spikes). That is, we take the release to follow an inhomogeneous Poisson process with a rate, which depends on the time, T, since the prior release event, given by

$$r_R(T) = p_0 r\left\{F_R^\infty(r) + \left[F_R^+(r) - F_R^\infty(r)\right]e^{-T/\tau_{\mathrm{eff}}(r)}\right\},$$
(16)

where

$$\tau_{\mathrm{eff}}(r) = \tau_F\,\frac{F_R^+(r) - F_R^\infty(r)}{F_R^+(r) - 1}.$$
(17)

The distribution of IRIs is then given by [17]

$$P(T) = r_R(T)\exp\left[-\int_0^T r_R(t)\,dt\right],$$
(18)

a function which is plotted in Figs. 1b1–1b2, where it is indistinguishable from the simulated data. The cumulative IRI distribution plotted in Figs. 2b1–2b2 is similarly indistinguishable from the simulated data, justifying the approximations that led to our results.

Fig. 2

Cumulative distribution of inter-release intervals (IRIs) for the same curves shown in Fig. 1, verifying the remarkable agreement of the approximation used in Fig. 1b with the simulated data. a1–a2 Static synapses, $p_0 = 0.5$. b1–b2 Facilitating synapses, $p_0 = 0.1$, $f_F = 0.5$, $\tau_F = 500$ ms. c1–c2 Depressing synapses, $p_0 = 0.5$, $\tau_D = 250$ ms. a1, b1, c1 Presynaptic Poisson spike train of 2 Hz. a2, b2, c2 Presynaptic Poisson spike train of 50 Hz

Finally, it should be noted that when synapses are facilitating, consecutive IRIs are correlated. For example, when the presynaptic rate is 2 Hz in the simulation used to produce Figs. 1b1 and 2b1, the correlation between one IRI and the subsequent one is 0.028, while with a presynaptic rate of 50 Hz the correlation is 0.015. Such a correlation, which cannot be obtained from the IRI distribution alone, further increases any variability in postsynaptic conductance, above and beyond the increase due to the altered shape of the IRI distribution.

In summary, the main difference produced by facilitation from the exponential distribution of interspike intervals (which is retrieved by setting either $f_F$ or $\tau_F$ to zero) is an enhancement of probability at low T and a corresponding reduction at high T. These changes produce a CV of IRIs greater than 1 (CV = 1.18 at 2 Hz and CV = 1.03 at 50 Hz in the examples shown in Figs. 1b1–1b2), enhancing the noise in any neural system.
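
The facilitating release process is also easy to simulate directly. The sketch below (Python; illustrative only, not the paper's simulation code) applies Eqs. (6)–(7) spike by spike; with the parameters of Fig. 1b it should yield a CV of IRIs greater than one, consistent with the values quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)

def facilitating_iri_cv(rate, p0, f_f, tau_f, t_max=20000.0):
    """CV of inter-release intervals for a probabilistic facilitating synapse:
    release probability p0*F, with F decaying toward 1 between spikes (Eq. (6))
    and jumping toward 1/p0 on every spike (Eq. (7))."""
    t, f = 0.0, 1.0
    releases = []
    while t < t_max:
        dt = rng.exponential(1.0 / rate)           # next Poisson spike
        t += dt
        f = 1.0 + (f - 1.0) * np.exp(-dt / tau_f)  # decay toward baseline F = 1
        if rng.random() < p0 * f:                  # stochastic release using F^- of this spike
            releases.append(t)
        f += f_f * (1.0 / p0 - f)                  # facilitation step after every spike
    iris = np.diff(releases)
    return iris.std() / iris.mean()

# Parameters of Fig. 1b (p0 = 0.1, f_F = 0.5, tau_F = 0.5 s)
print(facilitating_iri_cv(2.0, 0.1, 0.5, 0.5))     # CV above 1 expected at 2 Hz
print(facilitating_iri_cv(50.0, 0.1, 0.5, 0.5))    # CV closer to 1 at 50 Hz
```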

2.3 Mean Synaptic Transmission via Dynamic Synapses

We assume that at the time of vesicle release the postsynaptic conductance increases in a step-wise manner, with a fraction, $\tilde\alpha$, of previously closed channels becoming opened. This causes the synaptic gating variable, s, to increase from its prior value, $s^-$, to $s^+$ according to $s^+ = s^- + \tilde\alpha(1 - s^-)$. It then decays between release events with time constant, $\tau_s$, according to $\tau_s\frac{ds}{dt} = -s$.

If one assumes that successive inter-release intervals (IRIs) are uncorrelated, then one can calculate the mean, $\langle s\rangle$, and variance, $\sigma_s^2 = \langle s^2\rangle - \langle s\rangle^2$, in the postsynaptic gating variable (and hence the postsynaptic conductance, which is proportional to s) via:

$$\langle s^-\rangle = \langle s^+\rangle\left\langle e^{-T/\tau_s}\right\rangle,$$
(19a)
$$\langle s^+\rangle = \langle s^-\rangle(1 - \tilde\alpha) + \tilde\alpha,$$
(19b)
$$\langle s\rangle = \frac{\tau_s}{\langle T\rangle}\,\langle s^+\rangle\left(1 - \left\langle e^{-T/\tau_s}\right\rangle\right) = \frac{\tau_s}{\langle T\rangle}\left(\langle s^+\rangle - \langle s^-\rangle\right),$$
(19c)

where the averages of $e^{-T/\tau_s}$ and T are taken over the distribution of inter-release intervals, P(T), as given in the prior section (Fig. 1, Eqs. (3), (18)) and we have used the solution $s(t) = s^+ e^{-(t - t_i)/\tau_s}$ at time t following the ith spike at time $t_i$. Solution of the above equations leads to

$$\langle s\rangle = \frac{\tilde\alpha\tau_s}{\langle T\rangle}\,\frac{1 - \left\langle e^{-T/\tau_s}\right\rangle}{1 - (1 - \tilde\alpha)\left\langle e^{-T/\tau_s}\right\rangle},$$
(20)

which allows us to calculate the mean synaptic conductance through static and dynamic synapses (Fig. 3a).

Fig. 3

a Mean synaptic transmission, $\langle s\rangle$, and b variance of synaptic transmission, $\langle s^2\rangle - \langle s\rangle^2$, arising from presynaptic Poisson trains through probabilistic synapses. c Variance in synaptic transmission as a function of the mean transmission. Solid curves are analytic solutions (blue, middle curve for static: a = Eq. (23), b = Eq. (24), c = Eq. (25); green, upper curve for facilitating: a from Eqs. (20) and (31), b from Eqs. (22) and (31), c from a & b; and red, lower curve for depressing synapses: a = Eq. (26), b from Eqs. (4), (22), (25) and (27), c from a & b). Black dots are corresponding results from simulations produced by 30,000 s of Poisson input spike trains through saturating synapses

Similarly, combining

$$\left\langle (s^-)^2\right\rangle = \left\langle (s^+)^2\right\rangle\left\langle e^{-2T/\tau_s}\right\rangle,$$
(21a)
$$\left\langle (s^+)^2\right\rangle = \left\langle (s^-)^2\right\rangle(1 - \tilde\alpha)^2 + 2\tilde\alpha(1 - \tilde\alpha)\langle s^-\rangle + \tilde\alpha^2,$$
(21b)
$$\left\langle s^2\right\rangle = \frac{\tau_s}{2\langle T\rangle}\left\langle (s^+)^2\right\rangle\left(1 - \left\langle e^{-2T/\tau_s}\right\rangle\right),$$
(21c)

leads to

$$\left\langle s^2\right\rangle = \frac{\tau_s\tilde\alpha^2}{2\langle T\rangle}\,\frac{\left(1 - \left\langle e^{-2T/\tau_s}\right\rangle\right)\left[(1 - \tilde\alpha)\left\langle e^{-T/\tau_s}\right\rangle + 1\right]}{\left[1 - (1 - \tilde\alpha)^2\left\langle e^{-2T/\tau_s}\right\rangle\right]\left[1 - (1 - \tilde\alpha)\left\langle e^{-T/\tau_s}\right\rangle\right]},$$
(22)

which allows us to calculate the variance in postsynaptic conductance (Fig. 3b).

When synapses are static, release times are distributed as a Poisson process of rate $p_0 r$, where r is the presynaptic Poisson rate and $p_0$ is the static release probability. In this case, the mean value of the gating variable is calculated by standard methods [20] to give

$$\langle s\rangle_{\mathrm{Static}} = \frac{\tilde\alpha P_{\mathrm{rel}}\, r\tau_s}{1 + \tilde\alpha P_{\mathrm{rel}}\, r\tau_s} = \frac{\tilde\alpha p_0 r\tau_s}{1 + \tilde\alpha p_0 r\tau_s},$$
(23)

a function plotted in Fig. 3a (blue curve), where it exactly matches the simulated data (black asterisks). A similar calculation leads to the variance in synaptic transmission for static synapses [12, 20] as

$$\sigma_{s,\mathrm{Static}}^2 = \left\langle s^2\right\rangle_{\mathrm{Static}} - \langle s\rangle_{\mathrm{Static}}^2 = \frac{\tilde\alpha^2 p_0 r\tau_s\left[1 + (2 - \tilde\alpha)p_0 r\tau_s\right]}{\left(1 + \tilde\alpha p_0 r\tau_s\right)\left[2 + \tilde\alpha p_0 r\tau_s(2 - \tilde\alpha)\right]} - \langle s\rangle_{\mathrm{Static}}^2,$$
(24)

a function plotted in Fig. 3b (blue curve), where it exactly matches the simulated data (black asterisks). The variance can be written as a function of the mean synaptic transmission by substituting for r into Eq. (24) with $\langle s\rangle_{\mathrm{Static}}$ from Eq. (23) to produce the reduced formula:

$$\sigma_{s,\mathrm{Static}}^2 = \frac{\tilde\alpha\langle s\rangle_{\mathrm{Static}}\left(1 - \langle s\rangle_{\mathrm{Static}}\right)^2}{2 - \tilde\alpha\langle s\rangle_{\mathrm{Static}}},$$
(25)

which is plotted in Fig. 3c (blue curve).

For probabilistic depressing synapses with “all-or-none” release, the IRIs are independent as the synapse is always in the same state immediately post-release. The IRIs are distributed according to Eq. (3), which leads to

$$\left\langle e^{-T/\tau_s}\right\rangle = \frac{p_0 r\tau_s^2}{(\tau_s + \tau_D)(1 + p_0 r\tau_s)}$$
(26)

so that using Eq. (4) for the mean IRI, $\langle T\rangle$, we have

$$\langle s\rangle_{\mathrm{Depress}} = \frac{\tilde\alpha p_0 r\tau_s\left(\tau_s + \tau_D + p_0 r\tau_s\tau_D\right)}{\left(1 + p_0 r\tau_D\right)\left(\tau_s + \tau_D + p_0 r\tau_s\tau_D + \tilde\alpha p_0 r\tau_s^2\right)},$$
(27)

which, plotted as a red curve in Fig. 3a, precisely matches the simulated data (black circles). Similarly, making the substitution for probabilistic depressing synapses:

$$\left\langle e^{-2T/\tau_s}\right\rangle = \frac{p_0 r\tau_s^2}{(\tau_s + 2\tau_D)(2 + p_0 r\tau_s)}$$
(28)

into Eq. (22), allows us to evaluate $\left\langle s^2\right\rangle_{\mathrm{Depress}}$, and hence the variance plotted in Fig. 3b (red solid curve), where it precisely matches the simulated data (black points).
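
As an arithmetic check, the mean transmission for depressing synapses can be computed either from the general relation, Eq. (20), combined with Eqs. (4) and (26), or from the closed form of Eq. (27); the two agree. A sketch (Python; the synaptic values used below, including a 100-ms $\tau_s$, are illustrative rather than those of Table 1):

```python
def mean_s_depress(rate, p0, tau_d, tau_s, alpha):
    """Mean gating variable via Eq. (20), using <T> from Eq. (4) and
    <exp(-T/tau_s)> from Eq. (26), for a probabilistic depressing synapse."""
    mean_T = tau_d + 1.0 / (p0 * rate)
    E1 = p0 * rate * tau_s**2 / ((tau_s + tau_d) * (1.0 + p0 * rate * tau_s))
    return alpha * tau_s / mean_T * (1.0 - E1) / (1.0 - (1.0 - alpha) * E1)

def mean_s_depress_closed(rate, p0, tau_d, tau_s, alpha):
    """Same quantity from the closed form, Eq. (27)."""
    num = alpha * p0 * rate * tau_s * (tau_s + tau_d + p0 * rate * tau_s * tau_d)
    den = (1.0 + p0 * rate * tau_d) * (tau_s + tau_d + p0 * rate * tau_s * tau_d
                                       + alpha * p0 * rate * tau_s**2)
    return num / den

# Illustrative values (times in seconds); both routes give the same number
print(mean_s_depress(50.0, 0.5, 0.25, 0.1, 0.5),
      mean_s_depress_closed(50.0, 0.5, 0.25, 0.1, 0.5))
```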

For probabilistic facilitating synapses, we use an approximate formula for P(T) to evaluate the expected value of the exponential decay $e^{-T/\tau_s}$—essentially a Laplace transform—since the full formula is intractable for these purposes. We found after testing many formulas against simulated quantities that so long as we correctly included the facilitation factor immediately after release as $F_R^+(r)$ and the approximate release probability a long time after release as $F_R^\infty(r)$, the principal requirement was to use a probability density of IRIs with the correct value for the mean IRI. For facilitating synapses we know the mean IRI, $\langle T\rangle$:

$$\langle T\rangle = \frac{1}{p_0\langle F\rangle r}.$$
(29)

We fulfilled these three requirements by grossly simplifying the actual decay of the facilitation variable post-release, letting it switch from its immediate post-release value of $F_R^+(r)$ to its steady-state value, $F_R^\infty(r)$, at a time $T^*$ into the IRI, where $T^*$ is chosen to produce the correct value of $\langle T\rangle$. That is, we approximated the probability distribution of IRIs, P(T), as

P ( T T < T ) = r p 0 F R + ( r ) e r p 0 F R + ( r ) T , P ( T T T ) = r p 0 F R ( r ) e r p 0 F R ( r ) T ,
(30)

where

$$T^* = \frac{1}{r p_0 F_R^+(r)}\,\ln\!\left(\frac{1/F_R^\infty(r) - 1/F_R^+(r)}{1/\langle F\rangle - 1/F_R^+(r)}\right).$$
(31)

From such a distribution we can easily calculate the moments, $\left\langle e^{-T/\tau_s}\right\rangle$ and $\left\langle e^{-2T/\tau_s}\right\rangle$, of the postsynaptic conductance using the Laplace transforms, where

$$\left\langle e^{-T/\tau_s}\right\rangle = \frac{r p_0 F_R^+(r)\tau_s}{1 + r p_0 F_R^+(r)\tau_s} + e^{-T^*/\tau_s}\, e^{-r p_0 F_R^+(r) T^*}\left(\frac{r p_0 F_R^\infty(r)\tau_s}{1 + r p_0 F_R^\infty(r)\tau_s} - \frac{r p_0 F_R^+(r)\tau_s}{1 + r p_0 F_R^+(r)\tau_s}\right).$$
(32)

The corresponding mean postsynaptic conductance, using Eq. (20), plotted in Fig. 3a (green curve), is indistinguishable from the simulated data (black points). This form of the mean synaptic transmission through facilitating synapses will be used in the next section when we assess the stability and robustness of memory states produced by such synaptic feedback. The variance in synaptic transmission of the simulated data (Fig. 3b, black points) is no longer precisely fit by the approximate formula, obtained from Eq. (22), Eq. (31) and using Eq. (32) with $\tau_s$ replaced by $\tau_s/2$ to calculate $\left\langle e^{-2T/\tau_s}\right\rangle$ (Fig. 3b, green curve). However, since the approximate formula slightly overestimates the variance, it will tend to underestimate the stability of any memory state. Thus, a more precise fit would enhance stability (Fig. 4d). Figure 3c (green curve) indicates that for all values of mean synaptic transmission, the variance is greater when synapses are facilitating.
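
The simulated points in Fig. 3 can be reproduced in outline by direct Monte Carlo: drive the saturating gating variable with the stochastic facilitating release process and accumulate time averages of s and s². A minimal sketch (Python; parameter values are illustrative, with $\tau_s = 100$ ms assumed):

```python
import numpy as np

rng = np.random.default_rng(2)

def gating_stats(rate, p0, f_f, tau_f, tau_s, alpha, t_max=20000.0):
    """Monte Carlo estimate of the time-averaged mean and variance of the
    gating variable s driven by a probabilistic facilitating synapse."""
    t, f, s = 0.0, 1.0, 0.0
    sum_s, sum_s2, total_time = 0.0, 0.0, 0.0
    while t < t_max:
        dt = rng.exponential(1.0 / rate)
        decay = np.exp(-dt / tau_s)
        # exact time integrals of s and s^2 over the decay segment of length dt
        sum_s += s * tau_s * (1.0 - decay)
        sum_s2 += s**2 * (tau_s / 2.0) * (1.0 - decay**2)
        total_time += dt
        t += dt
        s *= decay
        f = 1.0 + (f - 1.0) * np.exp(-dt / tau_f)
        if rng.random() < p0 * f:                 # vesicle release on this spike
            s += alpha * (1.0 - s)                # saturating jump in the gating variable
        f += f_f * (1.0 / p0 - f)                 # facilitation after every spike
    mean_s = sum_s / total_time
    return mean_s, sum_s2 / total_time - mean_s**2

print(gating_stats(50.0, 0.1, 0.5, 0.5, 0.1, 0.5))
```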

Fig. 4

Synaptic facilitation enhances the stability of discrete memory states. a The firing rate curve (solid, blue) and synaptic feedback (dashed red) for a system with feedback strength optimized for bistability in a group of cells with static synapses. Firing rate curve follows Eq. (35) with $\beta_1 = 119.5$, $\beta_2 = 0.615$, $\beta_3 = 5.326$, which is the best fit to the leaky integrate-and-fire neuron used in the simulations and described in Table 2d. Feedback strength is optimized for bistability with W = 1.84. b Same firing rate curve (solid, blue) as in a but synaptic feedback (dashed green) via facilitating synapses with feedback strength optimized for bistability with W = 2.10. a–b Solid circles indicate stable fixed points separated by an unstable fixed point (open circle). c Difference between firing rate and feedback curves in a and b determines the basis for the gradient of an effective potential. Note the enhanced areas between fixed points (zero crossings), producing a larger potential barrier when synapses are facilitating (green) compared to static (red). d The lifetime of both the low activity state and the high activity state increases exponentially with system size, but a given level of stability is achieved with far fewer cells when the synapses are facilitating (solid curves, analytic results; filled and open circles, simulated results for the high and low activity states, respectively)

3 Stability of Discrete States Enhanced by Short-Term Synaptic Facilitation

Groups of cells with sufficient recurrent excitatory feedback can become bistable, capable of remaining, in the absence of input, in a quiescent state of low firing rate, or, after transient excitation, in a persistent state of high firing rate. Given the inherent stochastic noise in neural activity—spike trains are irregular, with the CV of ISIs often exceeding one—the activity states have an inherent average lifetime, which increases exponentially with the number of neurons in the cell group. In this section, we show analytically that addition of synaptic facilitation to all recurrent synapses can increase the stability of such discrete memory states by many orders of magnitude. We follow the methods presented in a prior paper for static synapses [12] and extend them to a circuit with probabilistic facilitating synapses. Calculations of stability are based on the mean of first-passage times between two stable states [21]. We assume that neurons spike with Poisson statistics, while the variability in the postsynaptic conductance, which possesses a long time constant (100 ms) typical of NMDA receptors [15], determines the instability of states. Since synaptic facilitation of probabilistic synapses affects both the mean and variance of the postsynaptic conductance (Figs. 3a–3b), both must be calculated and taken into account when determining the lifetime of memory states. We describe the method briefly below, leaving a reproduction of the full details to the following sections.

Bistability arises when the deterministic dynamics of the network produces multiple fixed points—firing rates at which $dr/dt = 0$—at least two of which are stable. The deterministic mean firing rate depends on the total synaptic input to a group of cells. The total synaptic input includes a feedback component via recurrent connections as well as an independent external component. At a fixed point, the feedback produced by a given firing rate is such that the total synaptic input exactly maintains that given firing rate (intersections in Figs. 4a, 4b). For a network to possess multiple fixed points, the curve representing synaptic transmission as a function of firing rate and the curve representing firing rate as a function of synaptic input must intersect at multiple points (Figs. 4a, 4b). Between any two stable fixed points is an unstable fixed point, where the curves cross back in the opposite direction. The stability of any individual fixed point is strongly dependent on the area enclosed between the two curves from that fixed point to the unstable fixed point. This enclosed area acts as the height of an effective potential (Fig. 4c), which, for a given level of noise in the system, determines the mean passage time from one stable fixed point to the basin of attraction of the other fixed point, i.e., the mean lifetime of the memory state. Importantly, the lifetime is approximately exponentially dependent on the effective barrier height, or the area between the two curves. Thus, changes in the curvature of synaptic feedback as a function of firing rate, which can have a strong impact on the area between the f-I curve and the feedback curve, can affect state lifetimes exponentially.

When we analyze the extent of this effect as wrought by synaptic facilitation, we find a greatly enhanced barrier in the effective potential (Fig. 4c), which demonstrates that the additional curvature in the neural feedback function outweighs any increase in noise in the system (which enters the denominator in the effective potential, Eq. (36)). Consequently, the lifetime of both persistent and spontaneous states in a discrete attractor system can be enhanced by several orders of magnitude when synapses are facilitating (Fig. 4d). Alternatively, one can obtain the same necessary stability with far fewer cells: for example, to produce a mean stable lifetime of over a minute for both the low and high activity states, with all-to-all connections, only eight cells are necessary in the example with facilitating synapses, whereas forty are necessary when synapses are static.

3.1 Analytic Calculation of Mean Transition Time Between Discrete Attractor States

To calculate transition times between discrete attractor states, and hence assess their stability to noise, we produce an effective potential for the postsynaptic conductance as the most slowly varying continuous variable of relevance. We use standard methods for transitions between stable states of Markov processes [21] but first must calculate the deterministic term, A(s), and diffusive term, D(s), for a group of cells with recurrent feedback. The calculations in the case of static synapses were produced and validated elsewhere [12] but we briefly reiterate them in the following paragraphs. When synapses are facilitating, the only alterations are the expression for mean synaptic conductance, $\langle s\rangle(r)$ (Fig. 3a), and its variance, $\sigma_s^2(r)$ (Fig. 3b), and a newly optimized strength of feedback connection to ensure both spontaneous and active states remain as stable as possible.

Our essential assumption is to treat the behavior of the postsynaptic variable, s, given a presynaptic Poisson spike train at rate r, as an Ornstein–Uhlenbeck process, which matches the mean and variance of s, while maintaining the same basic synaptic time constant for decay to zero in the absence of presynaptic input. Thus, we have

$$A(s) = -\frac{s}{\tau_s} + \frac{\tilde\alpha}{\langle T\rangle} - \frac{\tilde\alpha s}{\tau_s}\,\frac{\left\langle e^{-T/\tau_s}\right\rangle}{1 - \left\langle e^{-T/\tau_s}\right\rangle},$$
(33)

(by matching the mean of s) and

$$D_1(s) = -2\sigma_s^2\,\frac{dA(s)}{ds}$$
(34)

(by matching the variance of s) where the subscript “1” indicates the variance produced by a single presynaptic spike train. For a circuit with N presynaptic neurons producing feedback current, we scale down individual connection strengths so that the mean feedback current is independent of N, but the noise is reduced as $D_N(s) = D_1(s)/N$, since s is the fraction of maximal conductance ($0 \leq s \leq 1$).

We close the feedback loop by ensuring the presynaptic firing rate is equal to the postsynaptic firing rate, so use the firing rate function [22]:

$$r = f(S) = \frac{\beta_1(S - \beta_2)}{1 - \exp\left[-\beta_3(S - \beta_2)\right]},$$
(35)

with rate multiplier $\beta_1 = 115$, threshold $\beta_2 = 0.571$, and concavity $\beta_3 = 5.66$, all obtained by fitting to leaky integrate-and-fire simulations [12]. S is a scaled version of s, accounting for the total feedback conductance, $S = Ws$, where W is the sum of connection strengths of all cells and held fixed when N is varied.
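
To illustrate how the feedback loop is closed, the sketch below (Python; the synaptic parameters and W are illustrative placeholders, not the values of Tables 1–2) finds the firing-rate fixed points $r = f\bigl(W\langle s\rangle(r)\bigr)$ for static synapses by locating sign changes of the difference between the two curves.

```python
import numpy as np

def f_rate(S, b1=115.0, b2=0.571, b3=5.66):
    """Firing-rate curve of Eq. (35); the singularity at S = b2 is removable."""
    x = S - b2
    return b1 * x / (1.0 - np.exp(-b3 * x))

def s_static(r, p0=0.5, alpha=0.5, tau_s=0.1):
    """Mean feedback gating variable for static synapses, Eq. (23)
    (p0, alpha, tau_s here are illustrative, not the fitted values)."""
    x = alpha * p0 * r * tau_s
    return x / (1.0 + x)

W = 1.84                                     # total feedback strength (placeholder)
r = np.linspace(0.1, 120.0, 100000)
g = f_rate(W * s_static(r)) - r              # zero where r is a self-consistent fixed point
crossings = r[np.where(np.diff(np.sign(g)) != 0)[0]]
print(crossings)   # three crossings (low, unstable, high) would indicate bistability
```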

The effective potential, Φ(s), for a group with N feedback inputs per cell is

$$\Phi(s) = -2N\int_0^s \frac{A(s')}{D_1(s')}\,ds',$$
(36)

which leads to a probability density, P(s):

$$P(s) = \frac{2NC}{D_1(s)}\exp\left[-\Phi(s)\right],$$
(37)

where C is a normalization constant. The mean transition time from a stable state centered at $s_1$ to a state centered at $s_2 > s_1$ is [21]:

$$T_{\mathrm{trans}}(s_1, s_2) = \frac{C}{2}\int_{s_1}^{s_2} ds\,\exp\left[\Phi(s)\right]\int_0^s P(s')\,ds',$$
(38)

a function which is plotted for both static and facilitating synapses in Fig. 4d.
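
The double integral of Eq. (38) is straightforward to evaluate numerically once A(s) and $D_1(s)$ are in hand. The sketch below (Python; A and D1 must be supplied as callables, and the function name is an assumption for illustration) uses the standard reflecting-boundary first-passage expression in terms of $\Phi(s)$ and $D_N(s) = D_1(s)/N$, which is equivalent to Eqs. (36)–(38) up to the convention adopted for the normalization constant C.

```python
import numpy as np

def mean_transition_time(A, D1, N, s1, s2, s_max=1.0, n_grid=4000):
    """Crude quadrature for the mean transition time from the stable state
    near s1 to the basin of the state near s2 > s1, given drift A(s) and
    single-input diffusion D1(s); the noise for N inputs is D1(s)/N.
    For large N the exponentials may need rescaling to avoid overflow."""
    s = np.linspace(1e-6, s_max, n_grid)
    ds = s[1] - s[0]
    d_n = D1(s) / N
    phi = -2.0 * np.cumsum(A(s) / d_n) * ds        # effective potential, Eq. (36)
    inner = np.cumsum(np.exp(-phi) / d_n) * ds     # running integral of exp(-Phi)/D_N
    mask = (s >= s1) & (s <= s2)
    return 2.0 * np.sum(np.exp(phi[mask]) * inner[mask]) * ds

# Usage requires the circuit-specific A(s) and D1(s) built from Eqs. (33)-(35),
# e.g., mean_transition_time(A, D1, N=40, s1=0.05, s2=0.6).
```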

3.2 Simulation of Mean Transition Time Between Discrete Attractor States

We compared the results of our approximate analysis (Fig. 4d, curves) with those of computer simulations of noisy leaky integrate-and-fire neurons. To do this, we simulated small circuits of excitatory neurons connected in an all-to-all manner, using the parameters given in Table 2. Each neuron received independent background Poisson inputs, both excitatory and inhibitory, such that interspike intervals had a CV of 1 at low firing rates, decreasing gradually to 0.8 by a firing rate of 100 Hz. We simulated for either 200,000 seconds, or until 20,000 transitions between states were made, whichever was sooner. The mean transition times are plotted in Fig. 4d (open and closed circles), where they show good qualitative agreement with the analytic curves.

Table 2 Details of network simulations producing memory activity

3.3 Results for Multiple Circuits

In the example shown, bistability in the control system with static synapses required particular fine-tuning of parameters, so was not very robust. One could wonder whether, if a different system were chosen—in particular, a different f-I curve were used—the system with static synapses might not be improved by the addition of synaptic facilitation. That is, should synaptic facilitation always enhance the robustness of such bistable neural circuits? To address this point, we parametrically varied the properties of the f-I curve (Eq. (35)) and, for each set of parameters, $\{\beta_1, \beta_2, \beta_3\}$, we systematically varied the feedback connection strength, W, to test whether the system could be bistable.

As a result (Fig. 5), we found that the set of parameters $\{\beta_1, \beta_2, \beta_3\}$ able to produce bistability when synapses are static is a subset of the set found when synapses are facilitating. Thus, synaptic facilitation can produce bistability when it is not possible with static synapses, but the reverse is not true. As a corollary, the set of parameters $\{\beta_1, \beta_2, \beta_3\}$ able to produce bistability when synapses are depressing is a subset of the set found when synapses are static.

Fig. 5

Range of bistability is enhanced with facilitating synapses and reduced with depressing synapses. a Low threshold, with $\beta_2 = 0.301$. b High threshold, with $\beta_2 = 1.001$. a, b White region: all models (with static, facilitating, and depressing synapses) are bistable; yellow region: models with static or facilitating synapses are bistable; orange region: only models with facilitating synapses are bistable; black region: no models are bistable

For all parameter sets $\{\beta_1, \beta_2, \beta_3\}$ able to produce bistability, we assessed the optimal stability of the memory system. As the excitatory feedback connection strength, W, increases, the mean lifetime of the high-activity state increases, while the mean lifetime of the low-activity state decreases. We consider optimal stability of the memory state as the value of the lifetime when high-activity and low-activity states are equally durable. More specifically, we calculate the minimum of $T_{\mathrm{trans}}(s_1, s_2)$ and $T_{\mathrm{trans}}(s_2, s_1)$ as a measure of the stability of memory and parametrically vary W to find the maximum stability for a given set of $\{\beta_1, \beta_2, \beta_3\}$ and given type of synapses. In all cases where comparison was possible, stability is enhanced when synapses are facilitating and stability is reduced when synapses are depressing, compared to the case of static synapses (Fig. 6).

Fig. 6

Maximum stability of memory states, for a given neural firing-rate curve, is always greater when synapses are facilitating rather than static. a–b Low threshold, $\beta_2 = 0.451$. c–d Medium threshold, $\beta_2 = 0.701$. e–f High threshold, $\beta_2 = 0.951$. a, c, e Synapses are static. b, d, f Synapses are facilitating. All panels: steepness of single-neuron firing-rate curves increases with $\beta_1$ (y-axis) while maximum curvature increases with $\beta_3$ (x-axis). Stability of a bistable system is determined by the minimum lifetime of either of the two activity states. Maximum stability is calculated for each firing-rate curve as a function of connection strength and plotted after logarithmic scaling in color code. Dark blue: no bistability exists. Light blue = low stability; orange-red = high stability; cyan-green boundary = optimal lifetime of one hour

It is worth emphasizing that the two effects of synaptic facilitation on synaptic transmission have opposing consequences for attractor state stability. While the increased curvature in the curve of mean synaptic transmission increases stability of discrete attractors, the increased variance (Fig. 3c, green curve) decreases stability. While our results demonstrate that the deterministic effect dominates (i.e., the net effect of facilitation is to enhance stability), it is instructive to assess the contribution of each of the two effects alone. Thus, for a given mean synaptic transmission calculated for facilitating synapses, we used the variance in synaptic transmission corresponding to static synapses (Fig. 3c, blue curve) and recalculated the lifetimes of memory states. While changing the noise does not change significantly the parameter range for bistability (i.e., Fig. 5 is, to first order, unaffected by changes in noise), it does have a considerable impact on the lifetimes of states. In particular, by using the reduced noise of static synapses—a reduction of at most 20 %—the optimal lifetime was typically a factor of e higher in a circuit with 20 neurons and $e^2$ higher in a circuit with 40 neurons (using the parameters of Fig. 4d). Figure 7 demonstrates the enhanced lifetime in the hybrid model across networks—the ratio is always greater than one and extended to as high as 50 in the networks examined. Thus, the increased noise in the postsynaptic current produced by synaptic facilitation does produce considerable destabilization of state lifetimes—the hybrid model of synaptic facilitation without such enhanced noise produces the greatest possible stability of discrete memory states.

Fig. 7

A hybrid model demonstrates the reduction in lifetime attributable to the enhanced fluctuations in postsynaptic conductance produced by synaptic facilitation. a–b Low threshold, $\beta_2 = 0.451$. c–d Medium threshold, $\beta_2 = 0.701$. e–f High threshold, $\beta_2 = 0.951$. a, c, e Lifetime of states in the hybrid model with synaptic facilitation but with the noise due to static synapses. Dark blue: no bistability exists. Light blue = low stability; orange-red = high stability; cyan-green boundary = optimal lifetime of one hour. b, d, f Logarithm of the ratio of Figs. 7a, 7c, 7e to 6b, 6d, 6f, respectively, demonstrates the decrease in state lifetime attributable to enhanced noise when synapses are facilitating. Dark blue: ratio = 1. Cyan-yellow: ratio > e. Orange-light red: ratio > $e^2$. Dark red: ratio > $e^3$. All panels: steepness of single-neuron firing-rate curves increases with $\beta_1$ (y-axis) while maximum curvature increases with $\beta_3$ (x-axis). Stability of a bistable system is determined by the minimum lifetime of either of the two activity states. Maximum stability is calculated for each firing-rate curve as a function of connection strength and plotted after logarithmic scaling in color code

4 Discussion

Bistability relies upon positive feedback, which can arise from cell-intrinsic currents or from network feedback. Synaptic facilitation is a positive feedback mechanism in circuits of reciprocally connected excitatory cells, since the greater the mean firing rate, the greater the effective connection strength, further amplifying the excitatory input beyond that produced by the increased spike rate alone. This property of synaptic facilitation enhances the stability of memory states and renders them more robust to distractors [23]. Other forms of positive feedback, such as depolarization-induced suppression of inhibition (DSI), which depends on activity in the postsynaptic cell, can similarly produce robustness in recurrent memory networks [24].

When the bistability necessary for discrete memory is produced through synaptic feedback in a circuit of neurons, the relative stability to noise fluctuations of each of the two stable fixed points depends exponentially on the area between the mean neural response curve and the synaptic feedback curve (Figs. 4a–4b). While the synaptic feedback curve is monotonic in firing rate, for static synapses it is either linear (in the absence of postsynaptic saturation) or of negative curvature (decreasing gradient), with the effectiveness of additional spikes decreasing at high rates when receptors become saturated. However, when the synapse is facilitating, the synaptic response curve has positive curvature when firing rates are low—the effect of each additional spike is greater as firing rate increases. Here, we showed how such an effect could increase the area between intersections of synaptic feedback and neural response curves, enhancing stability dramatically (Figs. 4–6).

We note that the addition of positive curvature at low rates to the negative curvature at high, saturating rates in the curve of synaptic transmission as a function of presynaptic firing rates (Fig. 3a) inevitably increases the areas between three points of intersection with any firing rate curve without such an “S”-shape (Figs. 4a–4b). Since the “S”-shape is a hallmark of synaptic facilitation, not present for synaptic transmission through static synapses, facilitation can always enhance stability of such bistable systems. Less mathematically, a facilitating synapse with the same effective strength as a static synapse at intermediate firing rates is stronger at high firing rates, enhancing the stability of a high-activity state (where a drop in synaptic transmission is detrimental), while at the same time it is weaker at low firing rates, enhancing the stability of a low-activity state (where a rise of synaptic transmission is detrimental).

It is worth pointing out the converse—that short-term synaptic depression reduces the robustness of such discrete attractors. Indeed, in Fig. 5, we show that the range of parameters for which a bistable system exists is much narrower when synapses are depressing (D) versus static (S) or facilitating (F). Since synaptic depression contributes a negative curvature to the f-I curve, it tends to reduce the “S-shape” needed for bistability. Or, perhaps more intuitively, high synaptic strength is needed to maintain a high-firing-rate state if synapses are depressing, but such high synaptic strength is more likely to render the low-firing-rate spontaneous state unstable.

The changes in the shape of the distribution of inter-release intervals caused by dynamic synapses alter the fluctuations in postsynaptic conductance. In particular, facilitation enhances the variability and depression reduces the variability arising from a Poisson spike train. While the extra variability caused by facilitating synapses tends to destabilize a memory system, this effect was overwhelmed by the increase in stability due to the rate-dependent changes in mean synaptic transmission described above. However, the increase in conductance variability, in particular, being on a slower timescale than membrane potential fluctuations, can be a factor in explaining the high CV of neural spike trains.

Our calculations are based on a simplified formalism, in which the firing-rate curve (f-I curve) of a neuron is first assumed or fit (Eq. (35), [22]) under in vivo-like conditions, assuming a given level of noise in the membrane potential. Since the shape of the f-I curve depends on both the mean and variance of the input current [25, 26], it might appear invalid to discuss changes in the variability of input current due to dynamic synapses in the context of a fixed f-I curve. However, the time constants for short-term synaptic plasticity and the NMDA receptor-mediated currents are more than an order of magnitude greater than the time constant of the membrane potential under the conditions of strong, fluctuating balanced input that produce the irregularity of spike trains seen in vivo. Since the neuron’s membrane potential can sample its probability distribution—which determines the likelihood of a spike per unit time—more rapidly than the timescale for changes in that probability distribution, our analytic methods provide a reasonable description of the circuit’s behavior (Fig. 4d).

In summary, we have demonstrated the ability of short-term synaptic facilitation to stabilize discrete attractor states of neural activity to noise. We have shown this by simulations and through analytic methods, which include a consideration of how stochastic dynamic synapses mold the distribution of inter-release intervals (IRIs) into a form that differs from the exponential distribution of incoming interspike intervals (ISIs). The altered IRI distribution affects both mean synaptic transmission and the variability of transmission due to a presynaptic Poisson spike train—both of which have a strong impact on the stability of memory states. The increased variability of synaptic transmission due to facilitation is more than countered by the effect of facilitation on mean synaptic transmission, which enhances the robustness of bistability, leading to stable memory states with fewer neurons.

References

  1. Lorente de Nó R: Vestibulo-ocular reflex arc. Arch Neurol Psych 1933, 30: 245–291. 10.1001/archneurpsyc.1933.02240140009001

  2. Funahashi S, Bruce CJ, Goldman-Rakic PS: Neuronal activity related to saccadic eye movements in the monkey’s dorsolateral prefrontal cortex. J Neurophysiol 1991, 65: 1464–1483.

  3. Hebb DO: Organization of Behavior. Wiley, New York; 1949.

  4. Brunel N, Nadal JP: Modeling memory: what do we learn from attractor neural networks? C R Acad Sci, Sér 3 Sci Vie 1998, 321: 249–252.

  5. Hopfield JJ: Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 1982, 79: 2554–2558. 10.1073/pnas.79.8.2554

  6. Hopfield JJ: Neurons with graded response have collective computational properties like those of two-state neurons. Proc Natl Acad Sci USA 1984, 81: 3088–3092. 10.1073/pnas.81.10.3088

  7. Zhang K: Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensembles: a theory. J Neurosci 1996, 16: 2112–2126.

  8. Compte A, Brunel N, Goldman-Rakic PS, Wang XJ: Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb Cortex 2000, 10: 910–923. 10.1093/cercor/10.9.910

  9. Bourjaily MA, Miller P: Dynamic afferent synapses to decision-making networks improve performance in tasks requiring stimulus associations and discriminations. J Neurophysiol 2012, 108: 513–527. 10.1152/jn.00806.2011

  10. Lindner B, Gangloff D, Longtin A, Lewis JE: Broadband coding with dynamic synapses. J Neurosci 2009, 29: 2076–2088. 10.1523/JNEUROSCI.3702-08.2009

  11. Rotman Z, Deng PY, Klyachko VA: Short-term plasticity optimizes synaptic information transmission. J Neurosci 2011, 31: 14800–14809. 10.1523/JNEUROSCI.3231-11.2011

  12. Miller P, Wang XJ: Stability of discrete memory states to stochastic fluctuations in neuronal systems. Chaos 2006, 16: Article ID 026110.

  13. Koulakov AA: Properties of synaptic transmission and the global stability of delayed activity states. Network 2001, 12: 47–74.

  14. Wang XJ: Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory. J Neurosci 1999, 19: 9587–9603.

  15. Wang XJ: Synaptic reverberation underlying mnemonic persistent activity. Trends Neurosci 2001, 24: 455–463. 10.1016/S0166-2236(00)01868-3

  16. Miller P, Zhabotinsky AM, Lisman JE, Wang XJ: The stability of a stochastic CaMKII switch: dependence on the number of enzyme molecules and protein turnover. PLoS Biol 2005, 3: Article ID e107.

  17. Dayan P, Abbott LF: Theoretical Neuroscience. MIT Press, Cambridge; 2001.

  18. Rosenbaum R, Rubin J, Doiron B: Short term synaptic depression imposes a frequency dependent filter on synaptic information transfer. PLoS Comput Biol 2012, 8: Article ID e1002557.

  19. Kandaswamy U, Deng PY, Stevens CF, Klyachko VA: The role of presynaptic dynamics in processing of natural spike trains in hippocampal synapses. J Neurosci 2010, 30: 15904–15914. 10.1523/JNEUROSCI.4050-10.2010

  20. Brunel N, Wang XJ: Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. J Comput Neurosci 2001, 11: 63–85. 10.1023/A:1011204814320

  21. Gillespie DT: Markov Processes. Academic Press, San Diego; 1992.

  22. Abbott LF, Chance FS: Drivers and modulators from push-pull and balanced synaptic input. Prog Brain Res 2005, 149: 147–155.

  23. Itskov V, Hansel D, Tsodyks M: Short-term facilitation may stabilize parametric working memory trace. Front Comput Neurosci 2011, 5: Article ID 40.

  24. Carter E, Wang XJ: Cannabinoid-mediated disinhibition and working memory: dynamical interplay of multiple feedback mechanisms in a continuous attractor model of prefrontal cortex. Cereb Cortex 2007, 17(Suppl 1): i16–i26. 10.1093/cercor/bhm103

  25. Hansel D, van Vreeswijk C: How noise contributes to contrast invariance of orientation tuning in cat visual cortex. J Neurosci 2002, 22: 5118–5128.

  26. Murphy BK, Miller KD: Multiplicative gain changes are induced by excitation or inhibition alone. J Neurosci 2003, 23: 10040–10051.


Author information


Correspondence to Paul Miller.

Additional information

Competing Interests

The author declares that he has no competing interest.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Miller, P. Stabilization of Memory States by Stochastic Facilitating Synapses. J. Math. Neurosc. 3, 19 (2013). https://doi.org/10.1186/2190-8567-3-19
