Stabilization of Memory States by Stochastic Facilitating Synapses
The Journal of Mathematical Neuroscience volume 3, Article number: 19 (2013)
Bistability within a small neural circuit can arise through an appropriate strength of excitatory recurrent feedback. The stability of a state of neural activity, measured by the mean dwelling time before a noise-induced transition to another state, depends on the neural firing-rate curves, the net strength of excitatory feedback, and the statistics of spike times, and increases exponentially with the number of equivalent neurons in the circuit. Here, we show that such stability is greatly enhanced by synaptic facilitation and reduced by synaptic depression. We take into account the alteration in times of synaptic vesicle release by calculating distributions of inter-release intervals of a synapse, which differ from the distribution of its incoming interspike intervals when the synapse is dynamic. In particular, release intervals produced by a Poisson spike train have a coefficient of variation greater than one when synapses are probabilistic and facilitating, whereas the coefficient of variation is less than one when synapses are depressing. However, in spite of the increased variability in postsynaptic input produced by facilitating synapses, their dominant effect is reduced synaptic efficacy at low input rates compared to high rates, which increases the curvature of neural input-output functions, leading to wider regions of bistability in parameter space and enhanced lifetimes of memory states. Our results are based on analytic methods with approximate formulae and are bolstered by simulations both of Poisson processes and of circuits of noisy spiking model neurons.
Circuits of reciprocally connected neurons have long been considered as a basis for the maintenance of persistent activity. Such persistent neuronal firing that continues for many seconds after a transient input can represent a short-term memory of prior stimuli. Indeed, Hebb’s famous postulate, that causally correlated firing of connected neurons could lead to a strengthening of the connection, was based on the suggestion that the correlated firing would be maintained in a recurrently connected cell assembly beyond the time of a transient stimulus. Since then, analytic and computational models have demonstrated the ability of such recurrent networks to produce multiple discrete attractor states, as in Hopfield networks [5, 6], or to be capable of integration over time via a marginally stable network, often termed a line attractor [7, 8]. Much of the work on these systems has assumed either static synapses, or considered changes in synaptic strength via long-term plasticity occurring on a much slower timescale than the dynamics of neuronal responses. Here, we add some new results pertaining to the less well-studied effects of short-term plasticity—changes in synaptic strength that arise on a timescale of seconds, the same timescale as that of persistent activity—within recurrent discrete attractor networks.
The two forms of short-term synaptic plasticity—facilitation and depression—affect all synapses of the presynaptic cell according to its train of action potentials. Synaptic facilitation refers to a temporary enhancement of synaptic efficacy in the few hundreds of milliseconds following each spike, effectively strengthening connections to postsynaptic cells as presynaptic firing rate increases. Synaptic depression is the opposite effect—reduced synaptic efficacy in the few hundreds of milliseconds following a presynaptic spike, effectively weakening connection strengths as presynaptic firing rate increases. The dynamics of these processes (Table 1) also impacts the variability in postsynaptic conductance, in particular when synaptic transmission is treated as a stochastic event. The variability affects information processing via the signal-to-noise ratio [9–11] and also determines the stability, or robustness, of discrete memory states [12, 13].
When analyzing the stability of discrete states, we focus on the mean value of, and fluctuations within, the postsynaptic feedback conductance, since that is the variable with a slow enough time constant to maintain persistent activity in standard models of network-produced memory states [14, 15]. In our formalism, we rely on fluctuations in this NMDA receptor-mediated feedback conductance being on a slower timescale (100 ms) than the membrane time constant, which is short (<10 ms), in part because each cell receives a barrage of balanced excitatory and inhibitory inputs. When synapses are dynamic, both the mean postsynaptic conductance and its fluctuations are altered from the case of static synapses.
Here, we show how a presynaptic Poisson spike train, which produces an exponential distribution of interspike intervals (ISIs), produces a distribution of inter-release intervals (IRIs) that is not exponential if synapses are either facilitating or depressing. We then consider how the nonexponential distribution of IRIs affects both the mean and standard deviation of the postsynaptic conductance differently from the exponential, Poisson, distribution of IRIs. These results affect the calculation of stability of memory states, yielding differences in the parameter ranges where bistability exists and producing large changes in the spontaneous transition times between states, which limit their stability.
A two-state memory system is limited by the lifetime of the less stable state. For a given system, one can typically vary any parameter so as to enhance the lifetime of one state while reducing the lifetime of the other state. If we define the system’s stability as the lifetime of the less stable state, then the optimal stability of a system arises when the lifetimes of the two states are equal. In this paper, for a given system, defined by the neural firing-rate curve and type of synapse, we parametrically scale the total feedback connection strength to determine the system’s optimal stability. In so doing, we find that optimal stability of bistable neural circuits is enhanced by synaptic facilitation.
2 Statistics of Synaptic Transmission Through Probabilistic Dynamic Synapses
In the following, we assume that synaptic facilitation and depression operate by modifying the release probability of presynaptic vesicles. Following vesicle release, neurotransmitter binds to receptors in the postsynaptic terminal. The fraction of receptors bound at any one time determines the fraction of open channels, known as the gating variable, s, which is proportional to the conductance producing current flow into the postsynaptic cell. The dependence of s on presynaptic firing is affected by the dynamic properties of the intervening synapse. In particular, the distribution of intervals between vesicle release events is not identical to the interspike interval (ISI) distribution: facilitating synapses increase the likelihood of short inter-release intervals (IRIs) compared to long intervals, and so increase the coefficient of variation (CV); whereas depressing synapses make short release intervals unlikely and produce a more regular sequence of release intervals, reducing the CV (Fig. 1). While the means of these distributions can be calculated by standard methods, it is valuable to know the full distribution, since changes in the CV of IRIs affect the variability of the postsynaptic conductance, and thus alter properties like the signal-to-noise ratio and the stability of memory states to noise fluctuations.
2.1 Distribution of Release Times for a Poisson Spike Train Through Stochastic Depressing Synapses
The distribution of release times of a vesicle for depressing synapses with a single release site is simpler to calculate than that for facilitating synapses, because when considering synaptic depression alone, the probability of release from a single site simply depends on the time since the last release of a vesicle from that site. Therefore, we will solve for depressing synapses before moving to the case of facilitating synapses, where the probability of release depends on the number of intervening spikes. The subsequent result for facilitating synapses will prove to be more biologically relevant, as synapses typically contain multiple releasable vesicles, so it is only in the case where the baseline release probability is low—in which case facilitation dominates—that failure of release is common enough to affect the distribution of release times. The case of probabilistic release in depressing synapses with multiple release sites is more complex, though the first two moments of the IRI probability distribution have been calculated by others.
Synaptic depression arises because of the time needed to recycle and replenish vesicles following release of neurotransmitter. Synaptic depression can be treated stochastically by assuming vesicle recovery is a Poisson process, with the likelihood of a vesicle being release-ready, or “docked,” given by $1 - e^{-T/\tau_D}$, where $T$ is the time since the prior vesicle release and $\tau_D$ is the mean vesicle recovery time. Thus, the distribution of inter-release intervals (IRIs) can be calculated by requiring that a vesicle be docked within the interval and then adding the time for a spike to appear after the vesicle is docked. We assume a docked vesicle has a release probability of $p_0$ and incoming spikes arrive as a Poisson process of rate $r$. Since the probability of docking between times $t'$ and $t' + dt'$ is $(dt'/\tau_D)\,e^{-t'/\tau_D}$,

and the probability that the first release-causing spike after the docking time $t'$ arrives at time $T$ is $p_0 r\, e^{-p_0 r (T - t')}$, we have

$$P(T) = \int_0^T \frac{e^{-t'/\tau_D}}{\tau_D}\, p_0 r\, e^{-p_0 r (T - t')}\, dt' = \frac{p_0 r \left(e^{-T/\tau_D} - e^{-p_0 r T}\right)}{\tau_D \left(p_0 r - 1/\tau_D\right)},$$

which leads to a mean IRI of

$$\langle T \rangle = \tau_D + \frac{1}{p_0 r}.$$

The reduction in probability of small IRIs is a simple example of the temporal filtering of information presented by others. The addition of the extra probabilistic process of vesicle recovery, which underlies synaptic depression, causes IRIs to be more regular, as evidenced in Eqs. (2)–(3) by a coefficient of variation (CV) of IRIs reduced from the Poisson value of 1:

$$\mathrm{CV} = \frac{\sqrt{\tau_D^2 + 1/(p_0 r)^2}}{\tau_D + 1/(p_0 r)} < 1.$$
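These statistics can be checked with a direct event-driven simulation of a single-site depressing synapse. The sketch below uses illustrative parameter values (not taken from the paper): Poisson spikes at rate r, a docked-vesicle release probability p0, and an exponentially distributed vesicle recovery time of mean tau_d. The mean IRI should then approach tau_d + 1/(p0*r) = 0.3 s, with a CV below 1.

```python
import random
import statistics

def simulate_depressing_iris(r=20.0, p0=0.5, tau_d=0.2, n_releases=20000, seed=1):
    """Poisson spikes at rate r (Hz) drive a single-site depressing synapse.
    After each release the site is empty; the vesicle re-docks after an
    exponential delay of mean tau_d, and each spike releases a docked
    vesicle with probability p0.  Returns the inter-release intervals."""
    rng = random.Random(seed)
    t = 0.0
    dock_time = 0.0          # time at which the vesicle is docked again
    last_release = 0.0
    iris = []
    while len(iris) < n_releases:
        t += rng.expovariate(r)                  # next presynaptic spike
        if t >= dock_time and rng.random() < p0:
            iris.append(t - last_release)
            last_release = t
            dock_time = t + rng.expovariate(1.0 / tau_d)   # vesicle recovery
    return iris

iris = simulate_depressing_iris()
mean_iri = statistics.fmean(iris)
cv = statistics.stdev(iris) / mean_iri
print(f"mean IRI = {mean_iri:.3f} s (predicted 0.300), CV = {cv:.3f} (< 1)")
```

Since the IRI is the sum of two independent exponential waiting times (recovery, then a releasing spike), the simulated CV should settle near $\sqrt{\tau_D^2 + 1/(p_0 r)^2}/(\tau_D + 1/(p_0 r)) \approx 0.75$ for these values.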
2.2 Distribution of Release Times for a Poisson Spike Train Through Stochastic Facilitating Synapses
For facilitating synapses, we take the release probability of each presynaptic spike to be the current value of the facilitation variable, $F$, which relaxes toward its baseline value, $F_0$, between presynaptic spikes according to

$$\tau_F \frac{dF}{dt} = F_0 - F.$$

Each spike produces an increase in F from its value just before the spike, $F^-$ (which determines the release probability of that spike), to $F^+$ (which is the new release probability for an immediate, subsequent spike) such that

$$F^+ = F^- + f\left(F_{\max} - F^-\right),$$

where $f$ is the facilitation factor, taking a value between 0 and 1, indicating the fractional increase from the pre-spike release probability toward a saturating release probability of $F_{\max}$.
To calculate the distribution of inter-release intervals (IRIs), we need to calculate the probability of release as a function of time following a prior release. Although presynaptic spikes arrive with constant probability per unit time in a Poisson process, vesicle release occurs more often when the facilitation variable is high. Thus, immediately after release, the likelihood of release is greater than on average, because the facilitation variable takes some time (on the order of $\tau_F$) to return to a baseline value. Furthermore, when calculating the IRI distribution, we must be aware that $F_\infty$, the mean value approached by F conditioned on no intervening release event, will be lower than the unconditioned mean value, $\langle F \rangle$, since long IRIs are more likely than chance to coincide with time windows containing fewer intervening presynaptic spikes.
To proceed, we first calculate $F^+$, the mean of the facilitation variable immediately after vesicle release. To arrive at this quantity, we use the mean value of the facilitation variable averaged across all presynaptic spikes, $\langle F \rangle$, which satisfies the stationary condition

$$\langle F \rangle = F_0 + \left[(1-f)\langle F \rangle + f F_{\max} - F_0\right]\left\langle e^{-T/\tau_F}\right\rangle, \qquad \left\langle e^{-T/\tau_F}\right\rangle = \frac{r\tau_F}{1 + r\tau_F},$$

and the variance of this quantity, $\sigma_F^2 = \langle F^2 \rangle - \langle F \rangle^2$, where $\langle F^2 \rangle$ satisfies the analogous stationary condition with the squared decay factor, $\langle e^{-2T/\tau_F}\rangle = r\tau_F/(2 + r\tau_F)$.

Together, these can be used to calculate $\langle F \rangle_{\mathrm{rel}}$, which is the mean value of the facilitation variable just prior to firing when averaged across only those spikes that actually cause release, since release probability is proportional to $F$. The latter averaging produces a higher value than $\langle F \rangle$, since higher instances of $F$ are more likely to result in release, and so weight the average more than lower values:

$$\langle F \rangle_{\mathrm{rel}} = \frac{\int F^2\, P(F)\, dF}{\int F\, P(F)\, dF},$$

where $P(F)$ is the probability that F takes the value $F$ immediately prior to an incoming spike and the denominator normalizes the distribution. Hence,

$$\langle F \rangle_{\mathrm{rel}} = \langle F \rangle + \frac{\sigma_F^2}{\langle F \rangle}.$$

From this, the mean value of the facilitation variable immediately following vesicle release can be calculated as

$$F^+ = \langle F \rangle_{\mathrm{rel}} + f\left(F_{\max} - \langle F \rangle_{\mathrm{rel}}\right).$$
The above formula is exact and matched simulated data at all values of r tested (data not shown).
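The spike-averaged mean of the facilitation variable can be checked numerically. The sketch below (with illustrative parameter values, denoting the baseline, increment, and ceiling of F by f0, f, and fmax) solves the stationary condition for the pre-spike mean of F, using the mean per-interval decay factor r*tau_f/(1 + r*tau_f) for exponentially distributed interspike intervals, and compares it against a direct Monte Carlo average of F just before each spike.

```python
import math
import random

# Illustrative (not the paper's) parameters
r, tau_f = 10.0, 0.3          # presynaptic rate (Hz), facilitation time constant (s)
f0, f, fmax = 0.1, 0.3, 1.0   # baseline, facilitation increment, ceiling

# Stationary spike-averaged mean: <F> = f0 + [(1-f)<F> + f*fmax - f0] * k,
# with k = <exp(-ISI/tau_f)> = r*tau_f / (1 + r*tau_f) for Poisson input.
k = r * tau_f / (1.0 + r * tau_f)
F_mean_analytic = (f0 * (1.0 - k) + f * fmax * k) / (1.0 - (1.0 - f) * k)

# Monte Carlo: evolve F across 200,000 Poisson ISIs, averaging pre-spike values
rng = random.Random(5)
F, total, n = f0, 0.0, 200_000
for _ in range(n):
    F = f0 + (F - f0) * math.exp(-rng.expovariate(r) / tau_f)  # decay
    total += F                                                 # pre-spike value
    F += f * (fmax - F)                                        # facilitation step
F_mean_sim = total / n
print(f"analytic <F> = {F_mean_analytic:.4f}, simulated <F> = {F_mean_sim:.4f}")
```

The two values should agree to within sampling error, since the stationary condition follows exactly from averaging the linear per-spike update of F over independent exponential intervals.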
To estimate the steady-state value, $F_\infty$, approached by F a long time after any prior release—a steady state that may never be reached if the product of firing rate and base release probability is much higher than $1/\tau_F$—we solve a self-consistency equation for this value, and ignore fluctuations by assuming the release probability is $F_\infty$ for each presynaptic spike. One can then calculate the probability of N spikes in a given interval, T, conditioned on the requirement that none of those spikes caused vesicle release, while the facilitation variable is at its mean steady-state value of $F_\infty$. The result is

$$P(N \mid \mathrm{no\ release}) = \frac{\left[r(1-F_\infty)T\right]^N}{N!}\, e^{-r(1-F_\infty)T},$$

which is the result for a Poisson process of modified rate, $r(1-F_\infty)$. This allows us to self-consistently calculate $F_\infty$ by using the result for the mean value of the facilitation variable given such a modified spike rate, such that

$$F_\infty - F_0 = r\left(1-F_\infty\right)\tau_F\, f\left(F_{\max} - F_\infty\right),$$

which can be solved using the quadratic formula (taking the root below $F_{\max}$), giving a value that is always below $\langle F \rangle$ and in close agreement with simulated data (not shown).
Finally, to fit the IRI distribution, we assumed exponential decay of the release probability from $F^+$ to $F_\infty$ with a time constant, $\tau'$, chosen such that its initial slope matches the initial rate of decrease of F in the absence of intervening spikes (exponential decay toward baseline with time constant $\tau_F$). That is, we take release to follow an inhomogeneous Poisson process with a rate that depends on the time, T, since the prior release event, given by

$$R(T) = r\left[F_\infty + \left(F^+ - F_\infty\right)e^{-T/\tau'}\right].$$

The distribution of IRIs is then given by

$$P(T) = R(T)\,\exp\!\left(-\int_0^T R(T')\, dT'\right),$$
a function which is plotted in Figs. 1b1–1b2, where it is indistinguishable from the simulated data. Similarly indistinguishable is the cumulative IRI distribution plotted in Figs. 2b1–2b2, justifying the approximations that led to our results.
Finally, it should be noted that when synapses are facilitating, consecutive IRIs are correlated. For example, when the presynaptic rate is 2 Hz in the simulation used to produce Figs. 1b1 and 2b1, the correlation between one IRI and the subsequent one is 0.028, while with a presynaptic rate of 50 Hz the correlation is 0.015. Such a correlation, which cannot be obtained from the IRI distribution alone, further increases any variability in postsynaptic conductance, above and beyond the increase due to the altered shape of the IRI distribution.
In summary, the main difference produced by facilitation from the exponential distribution of interspike intervals (which is retrieved by setting either $f$ or $\tau_F$ to zero) is an enhancement of probability at low T and a corresponding reduction at high T. These changes produce a CV of IRIs greater than 1 (at both 5 Hz and 50 Hz in the examples shown in Figs. 1b1–1b2), enhancing the noise in any neural system.
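This overdispersion, along with the positive correlation between consecutive IRIs, can be reproduced by a direct event-driven simulation of a facilitating synapse. The sketch below uses illustrative parameter values, not those of the figures: the release probability F decays toward a baseline f0 between spikes, each spike releases with probability F, and every spike then facilitates F toward a ceiling fmax.

```python
import math
import random
import statistics

def simulate_facilitating_iris(r=10.0, f0=0.05, f=0.25, fmax=1.0,
                               tau_f=0.3, n_releases=20000, seed=2):
    """Poisson spikes at rate r (Hz) drive a facilitating synapse: the
    release probability F decays toward its baseline f0 with time constant
    tau_f between spikes; each spike releases with probability F and then
    facilitates, F -> F + f*(fmax - F).  Returns inter-release intervals."""
    rng = random.Random(seed)
    t, F = 0.0, f0
    last_release = 0.0
    iris = []
    while len(iris) < n_releases:
        dt = rng.expovariate(r)                      # next presynaptic ISI
        t += dt
        F = f0 + (F - f0) * math.exp(-dt / tau_f)    # decay toward baseline
        if rng.random() < F:                         # stochastic release
            iris.append(t - last_release)
            last_release = t
        F += f * (fmax - F)                          # facilitation step
    return iris

iris = simulate_facilitating_iris()
cv = statistics.stdev(iris) / statistics.fmean(iris)
# lag-1 correlation between consecutive IRIs (computed by hand for portability)
x, y = iris[:-1], iris[1:]
mx, my = statistics.fmean(x), statistics.fmean(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
lag1 = cov / (statistics.pstdev(x) * statistics.pstdev(y))
print(f"CV of IRIs = {cv:.3f}, lag-1 IRI correlation = {lag1:.3f}")
```

Because releases are most likely when F is high, short IRIs cluster together and long gaps follow decay of F to baseline, so the CV exceeds 1 and consecutive IRIs are positively correlated.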
2.3 Mean Synaptic Transmission via Dynamic Synapses
We assume that at the time of vesicle release the postsynaptic conductance increases in a step-wise manner, with a fraction, $\alpha$, of previously closed channels becoming opened. This causes the synaptic gating variable, s, to increase from its prior value, $s^-$, to $s^+$, according to $s^+ = s^- + \alpha(1 - s^-)$. It then decays between release events with time constant, $\tau_s$, according to $\tau_s\, ds/dt = -s$.
If one assumes that successive inter-release intervals (IRIs) are uncorrelated, then one can calculate the mean, $\langle s \rangle$, and variance, $\sigma_s^2$, of the postsynaptic gating variable (and hence of the postsynaptic conductance, which is proportional to s) via the recursion

$$s_{i+1}^+ = (1-\alpha)\, s_i^+\, e^{-T_i/\tau_s} + \alpha,$$

where the averages of $e^{-T/\tau_s}$ and $e^{-2T/\tau_s}$ are taken over the distribution of inter-release intervals, $P(T)$, as given in the prior section (Fig. 1, Eqs. (3), (18)), and we have used the solution $s(t) = s_i^+\, e^{-(t-t_i)/\tau_s}$ at time t following the i th release at time $t_i$. Solution of the above equations leads to

$$\langle s \rangle = \frac{\alpha \tau_s}{\langle T \rangle}\, \frac{1 - \left\langle e^{-T/\tau_s} \right\rangle}{1 - (1-\alpha)\left\langle e^{-T/\tau_s} \right\rangle},$$

which allows us to calculate the mean synaptic conductance through static and dynamic synapses (Fig. 3a), and to the corresponding expression for the variance in postsynaptic conductance (Fig. 3b).
When synapses are static, release times are distributed as a Poisson process of rate $p_0 r$, where r is the presynaptic Poisson rate and $p_0$ is the static release probability. In this case, the mean value of the gating variable is calculated by standard methods to give

$$\langle s \rangle = \frac{\alpha p_0 r \tau_s}{1 + \alpha p_0 r \tau_s},$$
a function plotted in Fig. 3a (blue curve), where it exactly matches the simulated data (black asterisks). A similar calculation leads to the variance in synaptic transmission for static synapses [12, 20], which is plotted in Fig. 3b (blue curve), where it also exactly matches the simulated data (black asterisks). The variance can be written as a function of the mean synaptic transmission by solving Eq. (23) for r and substituting into Eq. (24), to produce a reduced formula
which is plotted in Fig. 3c (blue curve).
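The static-synapse mean can be verified with an event-driven simulation. The sketch below (illustrative parameter values) accumulates the exact time integral of s over a long run and compares the time average with the closed-form mean $\alpha R \tau_s/(1 + \alpha R \tau_s)$, $R = p_0 r$, which follows from setting $d\langle s \rangle/dt = -\langle s \rangle/\tau_s + R\alpha(1 - \langle s \rangle)$ to zero (exact here, because the jump is linear in s).

```python
import math
import random

def mean_gating_static(r=40.0, p0=0.5, alpha=0.2, tau_s=0.1,
                       t_max=5000.0, seed=3):
    """Event-driven simulation of the gating variable s for a static
    synapse: releases form a Poisson process of rate R = p0*r, each
    release updates s -> s + alpha*(1 - s), and s decays exponentially
    with time constant tau_s in between.  Returns the time average of s."""
    rng = random.Random(seed)
    R = p0 * r
    t, s, integral = 0.0, 0.0, 0.0
    while t < t_max:
        dt = rng.expovariate(R)                      # next release
        # exact integral of s over the decay interval
        integral += s * tau_s * (1.0 - math.exp(-dt / tau_s))
        s *= math.exp(-dt / tau_s)                   # decay
        s += alpha * (1.0 - s)                       # release step
        t += dt
    return integral / t

r, p0, alpha, tau_s = 40.0, 0.5, 0.2, 0.1
analytic = alpha * p0 * r * tau_s / (1.0 + alpha * p0 * r * tau_s)
simulated = mean_gating_static(r, p0, alpha, tau_s)
print(f"analytic <s> = {analytic:.4f}, simulated <s> = {simulated:.4f}")
```

Integrating s in closed form between releases avoids any time-discretization error, so the two numbers should differ only by sampling noise.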
For probabilistic depressing synapses with “all-or-none” release, the IRIs are independent, as the synapse is always in the same state immediately post-release. The IRIs are distributed according to Eq. (3), which leads to

$$\left\langle e^{-T/\tau_s} \right\rangle = \frac{\tau_s}{\tau_s + \tau_D}\cdot\frac{p_0 r \tau_s}{1 + p_0 r \tau_s},$$

so that, using Eq. (4) for the mean IRI, $\langle T \rangle = \tau_D + 1/(p_0 r)$, we obtain the mean gating variable, which, plotted as a red curve in Fig. 3a, precisely matches the simulated data (black circles). Similarly, making the corresponding substitutions yields the variance in synaptic transmission for probabilistic depressing synapses.
For probabilistic facilitating synapses, we use an approximate formula for the IRI distribution to evaluate the expected value of the exponential decay, $\langle e^{-T/\tau_s} \rangle$—essentially a Laplace transform—since the full formula is intractable for these purposes. We found, after testing many formulas against simulated quantities, that so long as we correctly included the facilitation factor immediately after release, via $F^+$, and the approximate release probability a long time after release, $F_\infty$, the principal requirement was to use a probability density of IRIs with the correct value for the mean IRI. For facilitating synapses, we know the mean IRI:

$$\langle T \rangle = \frac{1}{r\langle F \rangle}.$$

We fulfilled these three requirements by grossly simplifying the actual decay of the facilitation variable post-release, letting it switch from its immediate post-release value, $F^+$, to its steady-state value, $F_\infty$, at a time, $T^*$, into the IRI, where $T^*$ is chosen to produce the correct value of $\langle T \rangle$. That is, we approximated the probability distribution of IRIs, $P(T)$, as

$$P(T) = \begin{cases} r F^+\, e^{-r F^+ T}, & T \le T^*,\\[4pt] r F_\infty\, e^{-r F^+ T^*}\, e^{-r F_\infty (T - T^*)}, & T > T^*. \end{cases}$$

From such a distribution, we can easily calculate the moments, $\langle e^{-T/\tau_s} \rangle$ and $\langle e^{-2T/\tau_s} \rangle$, of the postsynaptic conductance using Laplace transforms of $P(T)$.
The corresponding mean postsynaptic conductance, using Eq. (20), plotted in Fig. 3a (green curve), is indistinguishable from the simulated data (black points). This form of the mean synaptic transmission through facilitating synapses will be used in the next section when we assess the stability and robustness of memory states produced by such synaptic feedback. The variance in synaptic transmission of the simulated data (Fig. 3b, black points) is no longer precisely fit by the approximate formula, obtained from Eqs. (22) and (31), using Eq. (31) with $\tau_s$ replaced by $\tau_s/2$ to calculate $\langle e^{-2T/\tau_s} \rangle$ (Fig. 3b, green curve). However, since the approximate formula slightly overestimates the variance, it will tend to underestimate the stability of any memory state, so a more precise fit would only strengthen our conclusion of enhanced stability (Fig. 4d). Figure 3c (green curve) indicates that for all values of mean synaptic transmission, the variance is greater when synapses are facilitating.
3 Stability of Discrete States Enhanced by Short-Term Synaptic Facilitation
Groups of cells with sufficient recurrent excitatory feedback can become bistable, capable of remaining, in the absence of input, in a quiescent state of low firing rate, or, after transient excitation, in a persistent state of high firing rate. Given the inherent stochastic noise in neural activity—spike trains are irregular, with the CV of ISIs often exceeding one—the activity states have an inherent average lifetime, which increases exponentially with the number of neurons in the cell group. In this section, we show analytically that the addition of synaptic facilitation to all recurrent synapses can increase the stability of such discrete memory states by many orders of magnitude. We follow the methods presented in a prior paper for static synapses and extend them to a circuit with probabilistic facilitating synapses. Calculations of stability are based on the mean of first-passage times between the two stable states. We assume that neurons spike with Poisson statistics, while the variability in the postsynaptic conductance, which possesses a long time constant (100 ms) typical of NMDA receptors, determines the instability of states. Since synaptic facilitation of probabilistic synapses affects both the mean and variance of the postsynaptic conductance (Figs. 3a–3b), both must be calculated and taken into account when determining the lifetime of memory states. We describe the method briefly below, leaving the full details to the following sections.
Bistability arises when the deterministic dynamics of the network produces multiple fixed points—firing rates at which the rate of change of activity is zero—at least two of which are stable. The deterministic mean firing rate depends on the total synaptic input to a group of cells. The total synaptic input includes a feedback component via recurrent connections as well as an independent external component. At a fixed point, the feedback produced by a given firing rate is such that the total synaptic input exactly maintains that given firing rate (intersections in Figs. 4a, 4b). For a network to possess multiple fixed points, the curve representing synaptic transmission as a function of firing rate and the curve representing firing rate as a function of synaptic input must intersect at multiple points (Figs. 4a, 4b). Between any two stable fixed points is an unstable fixed point, where the curves cross back in the opposite direction. The stability of any individual fixed point is strongly dependent on the area enclosed between the two curves from that fixed point to the unstable fixed point. This enclosed area acts as the height of an effective potential (Fig. 4c), which, for a given level of noise in the system, determines the mean passage time from one stable fixed point to the basin of attraction of the other fixed point, i.e., the mean lifetime of the memory state. Importantly, the lifetime is approximately exponentially dependent on the effective barrier height, i.e., the area between the two curves. Thus, changes in the curvature of synaptic feedback as a function of firing rate, which can have a strong impact on the area between the f-I curve and the feedback curve, can affect state lifetimes exponentially.
When we analyze the extent of this effect as wrought by synaptic facilitation, we find a greatly enhanced barrier in the effective potential (Fig. 4c), which demonstrates that the additional curvature in the neural feedback function outweighs any increase in noise in the system (which enters the denominator in the effective potential, Eq. (35)). Consequently, the lifetime of both persistent and spontaneous states in a discrete attractor system can be enhanced by several orders of magnitude when synapses are facilitating (Fig. 4d). Alternatively, one can obtain the same necessary stability with far fewer cells: for example, to produce a mean stable lifetime of over a minute for both the low- and high-activity states, with all-to-all connections, only eight cells are necessary in the example with facilitating synapses, whereas forty are necessary when synapses are static.
3.1 Analytic Calculation of Mean Transition Time Between Discrete Attractor States
To calculate transition times between discrete attractor states, and hence assess their stability to noise, we produce an effective potential for the postsynaptic conductance, as the most slowly varying continuous variable of relevance. We use standard methods for transitions between stable states of Markov processes, but first must calculate the deterministic drift term, $A(s)$, and diffusion term, $D(s)$, for a group of cells with recurrent feedback. The calculations in the case of static synapses were produced and validated elsewhere, but we briefly reiterate them in the following paragraphs. When synapses are facilitating, the only alterations are the expressions for the mean synaptic conductance (Fig. 3a) and its variance (Fig. 3b), and a newly optimized strength of feedback connection to ensure both spontaneous and active states remain as stable as possible.
Our essential assumption is to treat the behavior of the postsynaptic variable, s, given a presynaptic Poisson spike train at rate r, as an Ornstein–Uhlenbeck process, which matches the mean and variance of s while maintaining the same basic synaptic time constant for decay to zero in the absence of presynaptic input. The drift term is set by matching the mean of s, and the noise amplitude by matching its variance, $\sigma_1^2$, where the subscript “1” indicates the variance produced by a single presynaptic spike train. For a circuit with N presynaptic neurons producing feedback current, we scale down individual connection strengths so that the mean feedback current is independent of N, but the noise variance is reduced as $1/N$, since s is the fraction of maximal conductance ($0 \le s \le 1$).
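An Ornstein–Uhlenbeck surrogate of this kind can be sketched with a short Euler–Maruyama simulation. The target mean, variance, and time constant below are arbitrary illustrative values, not the paper's fitted quantities; the point is only that the surrogate reproduces whatever mean and variance it is given.

```python
import math
import random

def simulate_ou(mean, var, tau, dt=1e-3, t_max=2000.0, seed=4):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
        ds/dt = (mean - s)/tau + sqrt(2*var/tau) * xi(t),
    whose stationary distribution has the requested mean and variance.
    Returns the sample mean and sample variance of the trajectory."""
    rng = random.Random(seed)
    s = mean
    n = int(t_max / dt)
    acc, acc2 = 0.0, 0.0
    noise_amp = math.sqrt(2.0 * var / tau * dt)
    for _ in range(n):
        s += (mean - s) / tau * dt + noise_amp * rng.gauss(0.0, 1.0)
        acc += s
        acc2 += s * s
    m = acc / n
    v = acc2 / n - m * m
    return m, v

m, v = simulate_ou(mean=0.3, var=0.004, tau=0.1)
print(f"sample mean = {m:.4f}, sample variance = {v:.5f}")
```

With 20,000 correlation times of data, the sample statistics should match the targets to well within one percent, up to a small Euler discretization bias of order dt/tau.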
We close the feedback loop by ensuring the presynaptic firing rate is equal to the postsynaptic firing rate, so we use a firing-rate function whose rate multiplier, threshold, and concavity were all obtained by fitting to leaky integrate-and-fire simulations. S is a scaled version of s accounting for the total feedback conductance, where W, the sum of the connection strengths of all cells, is held fixed when N is varied.
The effective potential, $U(S)$, for a group with N feedback inputs per cell is defined by

$$U(S) = -\int^{S} \frac{A(S')}{D(S')}\, dS',$$

where $A$ is the deterministic drift and $D$ the diffusion coefficient, whose magnitude scales as $1/N$. This leads to a probability density, $\rho(S)$,

$$\rho(S) = C\, e^{-U(S)},$$

where C is a normalization constant. The mean transition time from a stable state centered at $S_a$ to a state centered at $S_b$ is then

$$\langle T \rangle = \int_{S_a}^{S_b} \frac{e^{U(S)}}{D(S)} \left[\int_{-\infty}^{S} e^{-U(S')}\, dS'\right] dS,$$
a function which is plotted for both static and facilitating synapses in Fig. 4d.
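The standard double-integral expression for the mean first-passage time of a one-dimensional diffusion can be evaluated numerically. The sketch below applies it to a hypothetical cubic drift with stable states at s = 0.2 and 0.8 and an unstable point at 0.5 (not the paper's fitted system), with diffusion scaling as 1/N, to illustrate the roughly exponential growth of state lifetimes with circuit size N.

```python
import math

def mean_passage_time(drift, diff, s_start, s_end, lo=0.0, n=2000):
    """Mean first-passage time for ds/dt = drift(s) + sqrt(2*diff(s))*xi(t):
        T = int_{s_start}^{s_end} [e^{U(x)}/diff(x)] int_{lo}^{x} e^{-U(y)} dy dx,
    with effective potential U(s) = -int drift/diff ds, all integrals by the
    trapezoid rule.  A generic sketch of the method, not the paper's Eq. (35)."""
    xs = [lo + (s_end - lo) * i / n for i in range(n + 1)]
    h = xs[1] - xs[0]
    U = [0.0]                                  # potential on the grid
    for i in range(1, n + 1):
        f0 = -drift(xs[i - 1]) / diff(xs[i - 1])
        f1 = -drift(xs[i]) / diff(xs[i])
        U.append(U[-1] + 0.5 * (f0 + f1) * h)
    inner = [0.0]                              # cumulative integral of e^{-U}
    for i in range(1, n + 1):
        inner.append(inner[-1] + 0.5 * (math.exp(-U[i - 1]) + math.exp(-U[i])) * h)
    T = 0.0
    for i in range(1, n + 1):
        if xs[i] < s_start:
            continue
        g0 = math.exp(U[i - 1]) / diff(xs[i - 1]) * inner[i - 1]
        g1 = math.exp(U[i]) / diff(xs[i]) * inner[i]
        T += 0.5 * (g0 + g1) * h
    return T

# Hypothetical bistable drift; noise variance scales as 1/N with circuit size.
drift = lambda s: -(s - 0.2) * (s - 0.5) * (s - 0.8)
lifetimes = {}
for N in (10, 20, 40):
    diff = lambda s, N=N: 0.002 / N
    lifetimes[N] = mean_passage_time(drift, diff, s_start=0.2, s_end=0.5)
for N, T in lifetimes.items():
    print(f"N = {N:2d}: mean lifetime ~ {T:.3g} (arbitrary units)")
```

Doubling N doubles the effective barrier height, so each doubling multiplies the lifetime by a large, roughly constant exponential factor, mirroring the exponential dependence on neuron number described in the text.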
3.2 Simulation of Mean Transition Time Between Discrete Attractor States
We compared the results of our approximate analysis (Fig. 4d, curves) with those of computer simulations of noisy leaky integrate-and-fire neurons. To do this, we simulated small circuits of excitatory neurons connected in an all-to-all manner, using the parameters given in Table 2. Each neuron received independent background Poisson inputs, both excitatory and inhibitory, such that interspike intervals had a CV of 1 at low firing rates, decreasing gradually to 0.8 by a firing rate of 100 Hz. We simulated for either 200,000 seconds, or until 20,000 transitions between states were made, whichever was sooner. The mean transition times are plotted in Fig. 4d (open and closed circles), where they show good qualitative agreement with the analytic curves.
3.3 Results for Multiple Circuits
In the example shown, bistability in the control system with static synapses required particular fine-tuning of parameters, so it was not very robust. One might wonder whether, if a different system were chosen—in particular, a different f-I curve—the system with static synapses would still be improved by the addition of synaptic facilitation. That is, does synaptic facilitation always enhance the robustness of such bistable neural circuits? To address this point, we parametrically varied the properties of the f-I curve (Eq. (34)), and for each set of parameters we systematically varied the feedback connection strength, W, to test whether the system could be bistable.
As a result (Fig. 5), we found that the set of parameters able to produce bistability when synapses are static is a subset of the set found when synapses are facilitating. Thus, synaptic facilitation can produce bistability when it is not possible with static synapses, but the reverse is not true. As a corollary, the set of parameters able to produce bistability when synapses are depressing is a subset of the set found when synapses are static.
For all parameter sets able to produce bistability, we assessed the optimal stability of the memory system. As the excitatory feedback connection strength, W, increases, the mean lifetime of the high-activity state increases, while the mean lifetime of the low-activity state decreases. We consider the optimal stability of the memory system to be the value of the lifetime when high-activity and low-activity states are equally durable. More specifically, we calculate the minimum of the two states’ mean lifetimes as a measure of the stability of memory, and parametrically vary W to find the maximum stability for a given set of f-I curve parameters and a given type of synapse. In all cases where comparison was possible, stability is enhanced when synapses are facilitating and reduced when synapses are depressing, compared to the case of static synapses (Fig. 6).
It is worth emphasizing that the two effects of synaptic facilitation on synaptic transmission have opposing consequences for attractor-state stability. While the increased curvature in the curve of mean synaptic transmission increases the stability of discrete attractors, the increased variance (Fig. 3c, green curve) decreases stability. While our results demonstrate that the deterministic effect dominates (i.e., the net effect of facilitation is to enhance stability), it is instructive to assess the contribution of each of the two effects alone. Thus, for a given mean synaptic transmission calculated for facilitating synapses, we used the variance in synaptic transmission corresponding to static synapses (Fig. 3c, blue curve) and recalculated the lifetimes of memory states. While changing the noise does not significantly change the parameter range for bistability (i.e., Fig. 5 is, to first order, unaffected by changes in noise), it does have a considerable impact on the lifetimes of states. In particular, by using the reduced noise of static synapses—a reduction of at most 20 %—the optimal lifetime was typically a factor of e higher in a circuit with 20 neurons, and higher still in a circuit with 40 neurons (using the parameters of Fig. 4d). Figure 7 demonstrates the enhanced lifetime in the hybrid model across networks—the ratio is always greater than one and extends to as high as 50 in the networks examined. Thus, the increased noise in the postsynaptic current produced by synaptic facilitation does produce considerable destabilization of state lifetimes—the hybrid model of synaptic facilitation without such enhanced noise produces the greatest possible stability of discrete memory states.
Bistability relies upon positive feedback, which can arise from cell-intrinsic currents or from network feedback. Synaptic facilitation is a positive feedback mechanism in circuits of reciprocally connected excitatory cells, since the greater the mean firing rate, the greater the effective connection strength, further amplifying the excitatory input beyond that produced by the increased spike rate alone. This property of synaptic facilitation enhances the stability of memory states and renders them more robust to distractors. Other forms of positive feedback, such as depolarization-induced suppression of inhibition (DSI), which depends on activity in the postsynaptic cell, can similarly produce robustness in recurrent memory networks.
When the bistability necessary for discrete memory is produced through synaptic feedback in a circuit of neurons, the relative stability to noise fluctuations of each of the two stable fixed points depends exponentially on the area between the mean neural response curve and the synaptic feedback curve (Figs. 4a–4b). While the synaptic feedback curve is monotonic in firing rate, for static synapses it is either linear (in the absence of postsynaptic saturation) or of negative curvature (decreasing gradient), with the effectiveness of additional spikes decreasing at high rates when receptors become saturated. However, when the synapse is facilitating, the synaptic response curve has positive curvature when firing rates are low—the effect of each additional spike is greater as firing rate increases. Here, we showed how such an effect could increase the area between intersections of the synaptic feedback and neural response curves, enhancing stability dramatically (Figs. 4–6).
We note that the addition of positive curvature at low rates to the negative curvature at high, saturating rates in the curve of synaptic transmission as a function of presynaptic firing rate (Fig. 3a) inevitably increases the areas between three points of intersection with any firing-rate curve lacking such an “S”-shape (Figs. 4a–4b). Since the “S”-shape is a hallmark of synaptic facilitation, not present for synaptic transmission through static synapses, facilitation can always enhance the stability of such bistable systems. Less mathematically, a facilitating synapse with the same effective strength as a static synapse at intermediate firing rates is stronger at high firing rates, enhancing the stability of a high-activity state (where a drop in synaptic transmission is detrimental), while at the same time it is weaker at low firing rates, enhancing the stability of a low-activity state (where a rise in synaptic transmission is detrimental).
It is worth pointing out the converse—that short-term synaptic depression reduces the robustness of such discrete attractors. Indeed, in Fig. 5, we show that the range of parameters for which a bistable system exists is much narrower when synapses are depressing (D) versus static (S) or facilitating (F). Since synaptic depression contributes a negative curvature to the f-I curve, it tends to reduce the “S”-shape needed for bistability. Or, perhaps more intuitively, high synaptic strength is needed to maintain a high-firing-rate state if synapses are depressing, but such high synaptic strength is more likely to render the low-firing-rate spontaneous state unstable.
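The converse can be seen in the same mean-field picture. With a Tsodyks-Markram-style depression variable x (the fraction of available resources, depleted by Ux per spike and recovering with time constant τD), the steady state is x*(r) = 1/(1 + UτD r), so the mean transmission rate U r x*(r) = Ur/(1 + UτD r) saturates at 1/τD and has negative curvature at every rate; no “S”-shape is possible. The values below are illustrative only:

```python
U     = 0.2   # release probability per spike (illustrative)
TAU_D = 0.3   # depression recovery time constant, s (illustrative)

def transmission_dep(r):
    """Mean-field steady-state transmission rate of a depressing synapse:
    x*(r) = 1/(1 + U*TAU_D*r), so transmission = U*r*x*(r)."""
    return U * r / (1.0 + U * TAU_D * r)

def second_diff(f, r, h=0.5):
    """Finite-difference estimate of the curvature of f at rate r."""
    return (f(r + h) - 2.0 * f(r) + f(r - h)) / h**2

# Concave at every rate and bounded above by 1/TAU_D:
rates = [1.0, 5.0, 20.0, 80.0]
```

Because the curvature is negative everywhere, any bistable range must come entirely from the shape of the neural response curve, which is why the bistable region shrinks when synapses are depressing.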
The changes in the shape of the distribution of inter-release intervals caused by dynamic synapses alter the fluctuations in postsynaptic conductance. In particular, facilitation enhances the variability and depression reduces the variability arising from a Poisson spike train. While the extra variability caused by facilitating synapses tends to destabilize a memory system, this effect was overwhelmed by the increase in stability due to the rate-dependent changes in mean synaptic transmission described above. However, the increase in conductance variability, which occurs on a slower timescale than membrane potential fluctuations, can be a factor in explaining the high CV of neural spike trains.
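This effect on release variability can be checked with a small Monte Carlo sketch. The two synapse models below are simplified stand-ins, not the paper's fitted models: a release probability that facilitates spike by spike, and a single release site that refills after an exponential time following each release, all with illustrative parameters. Feeding both a Poisson spike train recovers a coefficient of variation of inter-release intervals above one for the facilitating case and below one for the depressing case:

```python
import math
import random

def poisson_spikes(rate, n, rng):
    """Generate n Poisson spike times (exponential ISIs) at the given rate (Hz)."""
    t, times = 0.0, []
    for _ in range(n):
        t += rng.expovariate(rate)
        times.append(t)
    return times

def release_cv(release_times):
    """Coefficient of variation of the inter-release intervals."""
    iri = [b - a for a, b in zip(release_times, release_times[1:])]
    mu = sum(iri) / len(iri)
    var = sum((x - mu) ** 2 for x in iri) / len(iri)
    return math.sqrt(var) / mu

def facilitating_releases(spikes, rng, p0=0.005, a=0.5, tau_f=0.1):
    """Stochastic facilitating synapse: release probability p jumps by
    a*(1-p) at every presynaptic spike and decays back to p0 with tau_f,
    so releases cluster during runs of closely spaced spikes."""
    p, t_prev, rel = p0, 0.0, []
    for t in spikes:
        p = p0 + (p - p0) * math.exp(-(t - t_prev) / tau_f)
        if rng.random() < p:
            rel.append(t)
        p += a * (1.0 - p)          # facilitation acts on every spike
        t_prev = t
    return rel

def depressing_releases(spikes, rng, p_rel=0.9, tau_d=0.2):
    """Single stochastic release site: after a release the site is empty
    and refills after an exponential time with mean tau_d, imposing a
    refractory-like gap between releases."""
    refill_at, rel = 0.0, []
    for t in spikes:
        if t >= refill_at and rng.random() < p_rel:
            rel.append(t)
            refill_at = t + rng.expovariate(1.0 / tau_d)
    return rel

rng = random.Random(1)
spikes = poisson_spikes(rate=5.0, n=200_000, rng=rng)
cv_fac = release_cv(facilitating_releases(spikes, rng))
cv_dep = release_cv(depressing_releases(spikes, rng))
```

A static probabilistic synapse would simply thin the Poisson train, leaving the release process Poisson with CV = 1; facilitation and depression push the CV to opposite sides of that baseline.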
Our calculations are based on a simplified formalism, in which the firing-rate curve (f-I curve) of a neuron is first assumed or fit (Eq. (34)) under in vivo-like conditions, assuming a given level of noise in the membrane potential. Since the shape of the f-I curve depends on both the mean and variance of the input current [25, 26], it might appear invalid to discuss changes in the variability of input current due to dynamic synapses in the context of a fixed f-I curve. However, the time constants for short-term synaptic plasticity and the NMDA receptor-mediated currents are more than an order of magnitude greater than the time constant of the membrane potential under the conditions of strong, fluctuating balanced input that produce the irregularity of spike trains seen in vivo. Since the neuron’s membrane potential can sample its probability distribution—which determines the likelihood of a spike per unit time—more rapidly than the timescale for changes in that probability distribution, our analytic methods provide a reasonable description of the circuit’s behavior (Fig. 4d).
In summary, we have demonstrated the ability of short-term synaptic facilitation to stabilize discrete attractor states of neural activity to noise. We have shown this by simulations and through analytic methods, which include a consideration of how stochastic dynamic synapses mold the distribution of inter-release intervals (IRIs) into a form that differs from the exponential distribution of incoming interspike intervals (ISIs). The altered IRI distribution affects both mean synaptic transmission and the variability of transmission due to a presynaptic Poisson spike train—both of which have a strong impact on the stability of memory states. The increased variability of synaptic transmission due to facilitation is more than countered by the effect of facilitation on mean synaptic transmission, which enhances the robustness of bistability, leading to stable memory states with fewer neurons.
Lorente de Nó R: Vestibulo-ocular reflex arc. Arch Neurol Psych 1933, 30: 245–291. 10.1001/archneurpsyc.1933.02240140009001
Funahashi S, Bruce CJ, Goldman-Rakic PS: Neuronal activity related to saccadic eye movements in the monkey’s dorsolateral prefrontal cortex. J Neurophysiol 1991, 65: 1464–1483.
Hebb DO: Organization of Behavior. Wiley, New York; 1949.
Brunel N, Nadal JP: Modeling memory: what do we learn from attractor neural networks? C R Acad Sci, Sér 3 Sci Vie 1998, 321: 249–252.
Hopfield JJ: Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 1982, 79: 2554–2558. 10.1073/pnas.79.8.2554
Hopfield JJ: Neurons with graded response have collective computational properties like those of two-state neurons. Proc Natl Acad Sci USA 1984, 81: 3088–3092. 10.1073/pnas.81.10.3088
Zhang K: Representation of spatial orientation by the intrinsic dynamics of the head-directioncell ensembles: a theory. J Neurosci 1996, 16: 2112–2126.
Compte A, Brunel N, Goldman-Rakic PS, Wang XJ: Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb Cortex 2000, 10: 910–923. 10.1093/cercor/10.9.910
Bourjaily MA, Miller P: Dynamic afferent synapses to decision-making networks improve performance in tasks requiring stimulus associations and discriminations. J Neurophysiol 2012, 108: 513–527. 10.1152/jn.00806.2011
Lindner B, Gangloff D, Longtin A, Lewis JE: Broadband coding with dynamic synapses. J Neurosci 2009, 29: 2076–2088. 10.1523/JNEUROSCI.3702-08.2009
Rotman Z, Deng PY, Klyachko VA: Short-term plasticity optimizes synaptic information transmission. J Neurosci 2011, 31: 14800–14809. 10.1523/JNEUROSCI.3231-11.2011
Miller P, Wang XJ: Stability of discrete memory states to stochastic fluctuations in neuronal systems. Chaos 2006, 16: Article ID 026110
Koulakov AA: Properties of synaptic transmission and the global stability of delayed activity states. Network 2001, 12: 47–74.
Wang XJ: Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory. J Neurosci 1999, 19: 9587–9603.
Wang XJ: Synaptic reverberation underlying mnemonic persistent activity. Trends Neurosci 2001, 24: 455–463. 10.1016/S0166-2236(00)01868-3
Miller P, Zhabotinsky AM, Lisman JE, Wang XJ: The stability of a stochastic CaMKII switch: dependence on the number of enzyme molecules and protein turnover. PLoS Biol 2005, 3: Article ID e107
Dayan P, Abbott LF: Theoretical Neuroscience. MIT Press, Cambridge; 2001.
Rosenbaum R, Rubin J, Doiron B: Short term synaptic depression imposes a frequency dependent filter on synaptic information transfer. PLoS Comput Biol 2012, 8: Article ID e1002557
Kandaswamy U, Deng PY, Stevens CF, Klyachko VA: The role of presynaptic dynamics in processing of natural spike trains in hippocampal synapses. J Neurosci 2010, 30: 15904–15914. 10.1523/JNEUROSCI.4050-10.2010
Brunel N, Wang XJ: Effects of neuromodulation in a cortical network model of object working memorydominated by recurrent inhibition. J Comput Neurosci 2001, 11: 63–85. 10.1023/A:1011204814320
Gillespie DT: Markov Processes. Academic Press, San Diego; 1992.
Abbott LF, Chance FS: Drivers and modulators from push-pull and balanced synaptic input. Prog Brain Res 2005, 149: 147–155.
Itskov V, Hansel D, Tsodyks M: Short-term facilitation may stabilize parametric working memory trace. Front Comput Neurosci 2011, 5: Article ID 40
Carter E, Wang XJ: Cannabinoid-mediated disinhibition and working memory: dynamical interplay of multiple feedback mechanisms in a continuous attractor model of prefrontal cortex. Cereb Cortex 2007, 17(Suppl 1):i16–i26. 10.1093/cercor/bhm103
Hansel D, van Vreeswijk C: How noise contributes to contrast invariance of orientation tuning in cat visual cortex. J Neurosci 2002, 22: 5118–5128.
Murphy BK, Miller KD: Multiplicative gain changes are induced by excitation or inhibition alone. J Neurosci 2003, 23: 10040–10051.
The author declares that he has no competing interest.
Keywords
- Dynamic synapses
- Stochastic processes
- Poisson process
- Persistent activity
- Short-term memory
- First-passage time