Approximate, not Perfect Synchrony Maximizes the Downstream Effectiveness of Excitatory Neuronal Ensembles
© C. Börgers et al.; licensee Springer 2014
Received: 26 November 2013
Accepted: 28 March 2014
Published: 25 April 2014
The most basic functional role commonly ascribed to synchrony in the brain is that of amplifying excitatory neuronal signals. The reasoning is straightforward: When positive charge is injected into a leaky target neuron over a time window of positive duration, some of it will have time to leak back out before an action potential is triggered in the target, and it will in that sense be wasted. If the goal is to elicit a firing response in the target using as little charge as possible, it seems best to deliver the charge all at once, i.e., in perfect synchrony. In this article, we show that this reasoning is correct only if one assumes that the input ceases when the target crosses the firing threshold, but before it actually fires. If the input ceases later—for instance, in response to a feedback signal triggered by the firing of the target—the “most economical” way of delivering input (the way that requires the least total amount of input) is no longer precisely synchronous, but merely approximately so. If the target is a heterogeneous network, as it always is in the brain, then ceasing the input “when the target crosses the firing threshold” is not an option, because there is no single moment when the firing threshold is crossed. In this sense, precise synchrony is never optimal in the brain.
Keywords: Function of synchrony · Leakiness · Coincidence detection
Synchronization of neuronal firing is widely thought to be important in brain function. Synchrony and rhythms have been hypothesized to play roles, for instance, in directing information flow [1–3], binding the activity of different neuronal assemblies [4], protecting signals from distractors [5], enhancing input sensitivity [6, 7], and enhancing the downstream effectiveness of neuronal signals [8–10].
The case is simplest and strongest for the last of these hypothesized functional roles of synchrony: By synchronizing, an ensemble of excitatory neurons can amplify its downstream effect. In fact, when positive charge is injected into a leaky target neuron over a time window of positive duration, some of it will have time to leak back out before an action potential is triggered in the target, and it will in that sense be wasted. If the goal is to elicit a firing response in the target using as little charge as possible, it seems best to deliver the charge all at once, i.e., in perfect synchrony. Leaky neurons are often said to be coincidence detectors for this reason. This reasoning is commonplace and widely accepted in neuroscience. However, we show that whether or not it is actually correct depends on how one makes it precise; with one formalization that seems particularly natural to us, it is incorrect.
We emphasize that this paper is not about rhythms; Fig. 1 is merely a motivating example. Here we focus on a single excitatory spike volley, and we study how it triggers a firing response in a target neuron. In the examples in Fig. 1, the target neurons are the I-cells. Specifically, we study the effect of tighter or looser synchrony within a single excitatory spike volley.
If the excitatory input is allowed to have the “foresight” of turning off as soon as the target crosses the firing threshold, i.e., as soon as firing becomes inevitable even without further input, then precise synchrony is indeed optimal, as the commonplace reasoning would suggest.
On the other hand, if the excitatory input lasts until the target actually fires (perhaps terminated by a feedback signal), then approximate, often quite imperfect synchrony is optimal.
Both principles can be made precise, proved, and computationally supported in numerous different ways. We will give examples of that in this article. However, intuitively the reasoning is very simple: When the input is made more synchronous, it becomes more effective at eliciting a firing response in the target, but more of it is wasted because it arrives between the time when the firing threshold is reached in the target and the time when the input turns off.
The central distinction that we draw in this paper is between maintaining the input until the target reaches its firing threshold, and maintaining the input until the target actually fires. Assuming that the input continues until the target reaches threshold, greater synchrony is more economical. However, assuming that the input continues until the target fires, or even longer, for instance until a feedback signal from the target arrives, there is an “optimally economical” degree of synchrony that is not perfect, and that can be quite far from perfect.
The E-to-I interaction in PING is an example of an excitatory signal terminated by a feedback signal from the target: The E-cells stop firing as soon as the I-cells respond, because the inhibitory feedback shuts the E-cells off. In PING, therefore, approximate synchrony of the E-cells is “optimal” in the sense that the rhythm is maintained with the smallest number of E-cells firing.
There is little evidence of perfect synchrony in the brain. If synchrony is really functionally important, this raises the question of why evolution did such a poor job perfecting it. Perhaps the arguments given in this article point towards an answer: Making our terms precise in one possible and, we think, very natural way, we find that imperfect synchrony is more “economical” than perfect synchrony.
In this section we introduce the model target neurons that we will use throughout the paper. For completeness, we also specify the details of the network of Fig. 1.
We frequently use linear integrate-and-fire neurons in this paper, since analysis is easiest for them. For greater biophysical realism, we also use simple (single-compartment) Hodgkin–Huxley-like model neurons, for which we report numerical results, but no analysis. The theta neuron is in between: It is still simple enough for the sort of analysis that we are interested in here, but it is more realistic than the linear integrate-and-fire neuron.
2.1 Linear Integrate-and-Fire Model
where v(t−) and v(t+) denote left- and right-sided limits, respectively, τ is the membrane time constant, and I is the normalized external drive. Although the normalized membrane potential v is non-dimensional, we find it convenient to think of t and τ as quantities with the physical dimension of time, measured in ms. As a result, I is a reciprocal time.
We interpret q as the (normalized) total charge injected into the neuron. We assume that is of significant size for , but not for . (For numerical illustrations, we will use , with .) Thus the “duration” of the input pulse I is on the order of 1 ms.
Note that the rescaled input is of significant size only for times on the order of ε. Thus the duration of the input pulse is on the order of ε (time measured in ms). For smaller ε, the same amount of charge is delivered in a briefer time period; this is why we think of smaller ε as modeling greater synchrony of inputs. As ε → 0, the input converges to qδ, where δ denotes the Dirac delta function at the pulse time. In this limit, the effect of the input pulse becomes an instantaneous increase in the membrane potential by q.
with the same weights. (Again, technically the right-hand side converges to the left-hand side in the distributional sense as ε → 0.) Thus the rescaled input is approximated by the same sequence of weak input pulses as I, but the time between subsequent input pulse arrivals is εΔ instead of Δ; that is, the input pulses arrive more synchronously when ε is smaller.
An action potential is elicited by the input if and only if .
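Since the pulse shape is left general in this subsection, a concrete special case may help. The following sketch (Python rather than the Matlab used for the paper's figures; the rectangular pulse shape and the parameter values are our illustrative choices, not taken from the paper) integrates the LIF equation dv/dt = −v/τ + I(t) by forward Euler, with a pulse of duration ε carrying total charge q:

```python
import numpy as np

def lif_response(q, eps, tau=2.0, dt=1e-4, t_end=20.0):
    """Forward-Euler integration of the LIF equation dv/dt = -v/tau + I(t),
    v(0) = 0, driven by a rectangular current pulse of duration eps that
    carries total charge q (so I = q/eps while the pulse is on).
    Returns (peak of v, first threshold-crossing time), with the crossing
    time np.inf if v never reaches the threshold 1."""
    v, v_max, t_cross = 0.0, 0.0, np.inf
    for i in range(int(t_end / dt)):
        t = i * dt
        I = q / eps if t < eps else 0.0
        v += dt * (-v / tau + I)
        v_max = max(v_max, v)
        if v >= 1.0 and t_cross == np.inf:
            t_cross = t
    return v_max, t_cross
```

With the illustrative choices τ = 2 ms and q = 1.2, the same charge elicits a spike when compressed into ε = 0.1 ms but not when spread over ε = 8 ms, which is the usual sense in which leaky neurons act as coincidence detectors.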
2.2 Theta Model
The fixed point is stable and is unstable. The two fixed points collide and annihilate each other in a saddle-node bifurcation as I rises above 0.
The fixed point is stable and is unstable. When we refer to the theta model, we mean (12) or, equivalently, (8) and (10). To fire means to reach , or equivalently, . Ermentrout and Kopell [13] used .
(Note that is the stable equilibrium of Eq. (8) when .) As in Sect. 2.1, we sometimes write to make the dependence on τ explicit, and we skip the subscript ε when . Also as in Sect. 2.1, we note that for all , , and .
An action potential is elicited by the input pulse if and only if . We note that is equivalent to , since v will reach ∞ in finite time as soon as it exceeds 1.
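A minimal integrator gives a concrete reference point. Since the paper's normalization of the theta model is not reproduced here, we use the canonical Ermentrout–Kopell form dθ/dt = 1 − cos θ + (1 + cos θ)I (an assumption; normalizations differ across papers), for which the period under constant drive I > 0 is exactly π/√I:

```python
import numpy as np

def theta_period(I, dt=1e-4):
    """Time for the canonical theta neuron, d(theta)/dt =
    1 - cos(theta) + (1 + cos(theta)) * I, to travel from theta = -pi to
    theta = +pi (one spike) under constant drive I > 0.  Substituting
    v = tan(theta/2) turns the equation into dv/dt = v**2 + I, so the
    exact period is pi / sqrt(I); for I < 0 the same substitution shows
    a stable/unstable fixed-point pair at v = -/+ sqrt(-I)."""
    theta, t = -np.pi, 0.0
    while theta < np.pi:
        theta += dt * (1.0 - np.cos(theta) + (1.0 + np.cos(theta)) * I)
        t += dt
    return t
```

The forward-Euler period agrees with π/√I to within a small fraction of a millisecond, and it shrinks as the drive grows.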
2.3 Wang–Buzsáki Model
The well-known Wang–Buzsáki (WB) neuron [14] is patterned after fast-firing interneurons in rat hippocampus. The ionic currents are those of the classical Hodgkin–Huxley neuron, i.e., spike-generating sodium, delayed rectifier potassium, and leak currents; we refer to [14] or [5, Appendix 1] for all details.
2.4 A Rapid Volley of Excitatory Synaptic Inputs into a Single Target Neuron
The decay time constant of 3 ms is chosen to mimic AMPA-receptor-mediated synapses .
2.5 Reduced Traub–Miles Model
In our network model, the inhibitory cells are WB neurons, and the excitatory ones reduced Traub–Miles (RTM) neurons. The RTM model is due to Ermentrout and Kopell [15], patterned after a more complicated, multi-compartment model of Traub and Miles [16], and it is used here in the form stated in detail in [5, Appendix 1]. It is a single-compartment model of a pyramidal (excitatory) cell in rat hippocampus. As for the WB neuron, the ionic currents are those of the classical Hodgkin–Huxley neuron, i.e., spike-generating sodium, delayed rectifier potassium, and leak currents.
2.6 Network Model
The drive to the j th E-cell (strictly speaking, drive density, measured in μA/cm2) is , . (The j th E-cell is labeled 50 + j in Fig. 1, because the 50 I-cells are labeled first.) The drives to the I-cells are zero. There is no stochastic drive here.
The total synaptic conductances (strictly speaking, conductance densities, measured in mS/cm2) are , , , and . The conductance associated with a single -synapse, for instance, is .
The reversal potentials (measured in mV) of the excitatory and inhibitory synapses are and .
The rise and decay time constants of synaptic inhibition (measured in ms) are , , , and .
In the right panel of the figure, the extra term is added to the right-hand side of the equation governing the membrane potential of the j th E-cell, , to model tonic inhibition affecting the E-cells.
2.7 Computer Codes
Each figure in this paper is generated by a single, stand-alone Matlab program. All of these programs can be obtained by e-mail from the first author.
3 If the Excitatory Signal Ceases when the Target Crosses the Firing Threshold, Synchrony Is Optimally Efficient
We will give several settings in which the above statement can be made rigorous. Here the target is always a single neuron, not a network. In Fig. 1, the target of the excitatory input volleys is a network, namely the ensemble of I-cells. However, they are synchronized so tightly that we might as well assume a single I-cell. For a comment on the case when the target is itself a network that is not perfectly synchronous, see the Discussion.
3.1 Constant Current Input Driving a LIF Neuron
We think of Δ as the time between the individual pulses of a rapid input volley, as in Sect. 2.4. In (18) we simplify by equating the frequency of input pulses within the volley, 1/Δ, with the strength of a constant input current. Smaller Δ, i.e., larger input, should be thought of as modeling more synchronous input from multiple sources.
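With this constant-current idealization, the LIF threshold-crossing time and the charge injected up to threshold have closed forms: v(t) = Iτ(1 − e^(−t/τ)) reaches 1 at T = τ ln(Iτ/(Iτ − 1)) when Iτ > 1, so the injected charge is Q = IT. The sketch below (Python, with an illustrative τ; in the notation above, I = 1/Δ) verifies that Q decreases toward 1 as the drive grows, i.e., as Δ shrinks:

```python
import numpy as np

def charge_to_threshold(I, tau=2.0):
    """Charge I*T injected into a LIF neuron dv/dt = -v/tau + I (v(0) = 0)
    before v first reaches the threshold 1.  Since v(t) =
    I*tau*(1 - exp(-t/tau)), the crossing time is
    T = tau*log(I*tau/(I*tau - 1)), provided I*tau > 1; otherwise v
    saturates below threshold and the required charge is infinite."""
    if I * tau <= 1.0:
        return np.inf
    return I * tau * np.log(I * tau / (I * tau - 1.0))
```

The charge is strictly decreasing in I with limit 1 (the instantaneous-kick value) as I → ∞: if the input ceases at threshold, more synchronous input is always more economical.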
3.2 Current Input Pulse of General Shape Driving a LIF Neuron
We turn to a second way of making precise the notion that excitatory current input becomes more effective when delivered more synchronously. Consider a linear integrate-and-fire neuron subject to a positive current pulse, as described in Sect. 2.1, where the notation used here was introduced. The issue of coincidence detection is linked to leakiness, and we therefore first think about how v depends on τ.
Lemma 1 (a) Let . Then for all , . Furthermore, if , then . (b) .
This implies part (b) of the lemma. □
Lemma 2 is a strictly decreasing function of with and .
by the definition of . This concludes the derivation of (21).
Part (a) of Lemma 1 now implies that is a strictly decreasing function of ε.
We pointed out in Sect. 2.1 that in the limit as , approaches . Thus in this limit, jumps from 0 to q at time , then decays. This implies as . Part (b) of Lemma 1, combined with (22), implies as . □
Theorem 1 If , there exists an such that elicits an action potential for , but not for . If , then does not elicit an action potential for any .
Proof This immediately follows from Lemma 2. □
The theorem shows that input becomes more effective when delivered more rapidly: If a given pulse succeeds at eliciting an action potential, then the same pulse, delivered faster, will succeed as well.
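For the special case of a rectangular pulse (the theorem treats general shapes), the monotonicity can be written down explicitly: the peak of v occurs at t = ε, where v = (q/ε)τ(1 − e^(−ε/τ)), so the minimal charge that elicits an action potential is q_min(ε) = ε/(τ(1 − e^(−ε/τ))). A short check (Python, illustrative τ):

```python
import numpy as np

def q_min(eps, tau=2.0):
    """Minimal total charge a rectangular current pulse of duration eps must
    carry to drive the LIF voltage to threshold.  The peak of v occurs at
    t = eps, where v = (q/eps)*tau*(1 - exp(-eps/tau)); setting the peak
    equal to the threshold 1 and solving for q gives the expression below,
    which is strictly increasing in eps and tends to 1 as eps -> 0."""
    return eps / (tau * (1.0 - np.exp(-eps / tau)))
```

The required charge grows with the pulse duration, in agreement with Theorem 1: briefer (more synchronous) delivery never requires more charge.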
3.3 Current Input Pulse of General Shape Driving a Theta Neuron
We repeat the analysis of the preceding section for a target modeled as a theta neuron. So we now consider a theta neuron, written in terms of v, subject to a positive current pulse; see Sect. 2.2. As in Sect. 3.2, we begin by analyzing the effect of leakiness on the membrane potential.
Lemma 3 (a) Let . Let be chosen so that for . Then for all , . Furthermore, if , then . (b) .
cannot exceed . This bound converges to 0 as , implying (b). □
Lemma 4 As long as is less than 1, it is a strictly decreasing function of , and as .
Proof Same as proof of Lemma 2. □
Theorem 2 If , there exists an such that for , , and for . If , then for all .
Proof This follows immediately from Lemma 4. □
Again we see that input becomes more effective when delivered more rapidly: If a given pulse succeeds at eliciting an action potential, then the same pulse, delivered faster, will succeed as well.
3.4 Sequence of Weak Instantaneous Positive Charge Injections Driving a LIF Neuron
and that the number of input pulses required to make v reach 1 decreases as Δ decreases. We omit the derivation of this unsurprising result.
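The omitted derivation is easy to check numerically: between kicks the voltage decays by the factor e^(−Δ/τ), so each arrival updates v by v ← v·e^(−Δ/τ) + w, and the kick count to threshold grows with Δ. A sketch (Python; w, Δ, and τ values are illustrative):

```python
import numpy as np

def kicks_to_fire(w, delta, tau=2.0, max_kicks=10_000):
    """Count the instantaneous charge injections of size w, arriving every
    delta ms, needed to drive a LIF voltage from 0 to threshold 1, given
    pure exponential decay (time constant tau) between kicks.  Returns
    np.inf if the voltage saturates below threshold, which happens when
    the geometric-series limit w / (1 - exp(-delta/tau)) is below 1."""
    decay = np.exp(-delta / tau)
    v = 0.0
    for n in range(1, max_kicks + 1):
        v = v * decay + w  # decay since the previous kick, then the kick
        if v >= 1.0:
            return n
    return np.inf
```

With w = 0.3 and τ = 2, four kicks suffice at Δ = 0.1 ms but six are needed at Δ = 0.5 ms, and weaker kicks can fail entirely for large Δ.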
3.5 Sequence of Weak Excitatory Synaptic Pulses Driving a WB Neuron
We now give our final and most realistic illustration of the principle that synchrony makes excitatory input into a target neuron optimally efficient, provided that the input is allowed to cease when the target crosses the firing threshold.
This result is in agreement with the standard reasoning about synchronization and leakiness. For a target neuron with voltage-activated currents, such as the WB neuron, it certainly is not a priori clear that this reasoning leads to a correct conclusion. However, Fig. 6 suggests that it probably does, at least for the WB neuron.
4 If the Excitatory Signal Continues Until the Target Fires, Approximate Synchrony Is Optimally Efficient
Again we present several settings in which the statement in the title can be made rigorous. However, first we discuss some results concerning the firing time of a target neuron driven by a current pulse (as in Sect. 2.1). This is useful in later subsections, and in particular it clarifies the essential source of the non-monotonicity found there.
4.1 The Time It Takes to Elicit an Action Potential with a Current Pulse
For , we denote by the time at which the action potential occurs in response to the input pulse (as in Sect. 2.1). This definition requires several clarifications. If elicits several action potentials, we let be the time of the earliest one. If elicits no action potential at all, we let . By “time at which the action potential occurs”, we mean the time when v reaches 1 for the LIF neuron, the time when v reaches ∞ (i.e., θ reaches ) for the theta neuron, or the time when v rises above 0 for the WB neuron.
which measures how long it takes to elicit an action potential in comparison with input duration. We note, in particular, that implies that the input pulse is essentially over long before the target fires.
and is strictly increasing for .
This implies that is a strictly increasing function of , by Lemma 1. In the limit as , becomes . An input pulse of the form , with , makes v jump above threshold instantaneously; so as . The limit of as is the finite time at which reaches 1. □
to rise from to ∞. This proves . Because (see discussion at the end of Sect. 3.3, and in particular Fig. 5), follows from the continuous dependence of on ε. Finally, (25) follows immediately from (24). □
4.2 If Input Current Ceases when the Target Fires, How Much Charge Is Injected?
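The effect named in the section title can be illustrated with a caricature of our own choosing (an assumption, not the paper's exact setting): a quadratic integrate-and-fire neuron dv/dt = v² + I0 + r, the v-coordinate form of the theta neuron, excitable for baseline drive I0 < 0, receiving a constant input current r that stays on until the neuron actually spikes (v → ∞). The total injected charge rT(r) blows up both as r ↓ −I0 and as r → ∞, so it is minimized at an intermediate r: neither very weak nor nearly instantaneous ("perfectly synchronous") input is most economical.

```python
import numpy as np

def charge_until_spike(r, I0=-1.0):
    """Total charge r*T injected into a quadratic integrate-and-fire neuron
    dv/dt = v**2 + I0 + r, started at rest v0 = -sqrt(-I0), when the
    constant input current r > -I0 stays ON until the neuron spikes
    (v -> infinity).  Integrating dv/(v**2 + c) with c = I0 + r gives the
    closed-form spike time T = (pi/2 - arctan(v0/sqrt(c))) / sqrt(c)."""
    c = I0 + r
    v0 = -np.sqrt(-I0)
    T = (np.pi / 2.0 - np.arctan(v0 / np.sqrt(c))) / np.sqrt(c)
    return r * T
```

With I0 = −1, the injected charge is smaller at the intermediate drive r = 3 than at either r = 1.5 or r = 100: once the input must last until the spike itself, the charge is a non-monotone function of input intensity.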
4.3 If Synaptic Input Pulses Cease when the Target Fires, How Many Pulses Are Needed?
4.4 Linearly Rising Current Input Driving a LIF Neuron
i.e., , and . It is easy to argue that this approximate calculation rigorously describes the asymptotic behavior of , calculated from the model problem (27), as .
in the limit as .
In fact, the blow-up in the limit as in Fig. 12 can be verified numerically to be proportional to as well, in all three cases shown in the figure.
We return to Fig. 1. The figure shows that with greater tonic inhibition of the E-cells (right panel), the degree of synchrony among the E-cells is reduced, yet the number of participating E-cells is reduced as well. In the notation that we have used throughout this article, for larger Δ (right panel), the number of input pulses to the I-cells required to elicit firing is smaller. Section 4 explains how this comes about.
In general, in PING, the E-cells synchronize approximately, but not perfectly, when different E-cells receive different drives, or there is heterogeneity in synaptic strengths. The I-cells are therefore driven to firing by a sequence of nearly, but not perfectly synchronous input pulses. Our results show that there is an optimal level of looseness in the synchronization of the E-cells, that is, a level of looseness that allows operating the PING rhythm with the minimal number of E-cell action potentials.
In reality, when a feedback signal terminates the input, that feedback signal would not likely come at the moment when the target fires. A delay in the feedback signal amplifies our point: During the delay time, input is “wasted”, and the more synchronous the input stream, the more input is wasted.
We have concluded that perfect synchrony is optimal if the input stream is allowed to have the “foresight” of ceasing when the firing threshold is reached in the target. Note, however, that there is no well-defined “time at which the firing threshold is reached” when the target is not a single neuron, but a heterogeneous network. We therefore hypothesize that of the two principles stated in the Introduction, the second is the more relevant from the point of view of biology.
where dW denotes normalized Gaussian white noise, so that v, without the extra input pulse, would be an Ornstein–Uhlenbeck process. Assume that v(0) has a Gaussian distribution with mean 0 and the equilibrium variance of the Ornstein–Uhlenbeck process. Define F to be the probability of a firing response within a time window of moderate size. If F is a strictly decreasing function of ε, then synchrony is, in this sense, “optimal”, whereas it isn’t if F has a local maximum at a positive value of ε. More realistic variations on this formalization are of course possible, using noise-driven Hodgkin–Huxley-like neurons with conductance-based inputs. We would not be surprised if F turned out to be strictly decreasing, i.e., perfect synchrony turned out to be “optimal” in this sense, but will leave the study of this issue to future work.
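This formalization can be prototyped by Monte Carlo simulation. The following sketch (Python rather than Matlab; all parameter values, including the noise strength, time window, and pulse size, are our guesses rather than values from the paper) estimates a firing probability F for a noise-driven LIF neuron receiving a rectangular pulse of total charge q and duration ε:

```python
import numpy as np

def firing_prob(q, eps, tau=2.0, sigma=0.2, t_end=10.0, dt=0.01,
                n_trials=2000, seed=0):
    """Monte Carlo (Euler-Maruyama) estimate of the probability that a
    noise-driven LIF voltage -- an Ornstein-Uhlenbeck process
    dv = (-v/tau) dt + sigma dW plus a rectangular pulse of total charge q
    and duration eps -- reaches the threshold 1 within t_end ms.
    v(0) is drawn from the OU equilibrium distribution,
    N(0, sigma**2 * tau / 2)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, sigma * np.sqrt(tau / 2.0), size=n_trials)
    fired = np.zeros(n_trials, dtype=bool)
    for i in range(int(t_end / dt)):
        t = i * dt
        I = q / eps if t < eps else 0.0
        v += dt * (-v / tau + I) + sigma * np.sqrt(dt) * rng.normal(size=n_trials)
        fired |= v >= 1.0
    return fired.mean()
```

Because the LIF equation is linear, runs with the same seed differ only by the deterministic pulse response, so with a fixed seed the estimate is non-decreasing in q trial by trial; studying how it varies with ε at fixed q is exactly the experiment proposed above.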
We summarize our surprising conclusion: The commonplace and widely accepted argument suggesting that synchrony makes excitatory inputs more effective is, at least in one very natural formalization (namely, that of Sect. 4), wrong. It is not just “slightly wrong”, but significantly so; see for instance the right-most panel in Fig. 10, which shows that, for the WB neuron, the “optimal” (in the sense of Sect. 4) duration of an input spike volley is on the order of 10 ms.
The authors were supported in part by the Collaborative Research in Computational Neuroscience (CRCNS) program through NIH grant 1R01 NS067199.
1. Salinas E, Sejnowski TJ: Correlated neuronal activity and the flow of neural information. Nat Rev Neurosci 2001, 2:539–544. doi:10.1038/35086012
2. Fries P: A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends Cogn Sci 2005, 9:474–480. doi:10.1016/j.tics.2005.08.011
3. Fries P: Neuronal gamma-band synchronization as a fundamental process in cortical computation. Annu Rev Neurosci 2009, 32:209–224. doi:10.1146/annurev.neuro.051508.135603
4. Gray CM: The temporal correlation hypothesis of visual feature integration: still alive and well. Neuron 1999, 24(1):31–47. doi:10.1016/S0896-6273(00)80820-X
5. Börgers C, Kopell N: Gamma oscillations and stimulus selection. Neural Comput 2008, 20(2):383–414. doi:10.1162/neco.2007.07-06-289
6. Börgers C, Epstein S, Kopell N: Background gamma rhythmicity and attention in cortical local circuits: a computational study. Proc Natl Acad Sci USA 2005, 102(19):7002–7007. doi:10.1073/pnas.0502366102
7. Chen Y, Zhang H, Wang H, Yu L, Chen Y: The role of coincidence-detector neurons in the reliability and precision of subthreshold signal detection in noise. PLoS ONE 2013, 8(2): Article ID 56822.
8. König P, Engel AK, Singer W: Integrator or coincidence detector? The role of the cortical neuron revisited. Trends Neurosci 1996, 19(4):130–137. doi:10.1016/S0166-2236(96)80019-1
9. Roy SA, Alloway KD: Coincidence detection or temporal integration? What the neurons in somatosensory cortex are doing. J Neurosci 2001, 21(7):2462–2473.
10. Azouz R, Gray CM: Adaptive coincidence detection and dynamic gain control in visual cortical neurons in vivo. Neuron 2003, 37(3):513–523. doi:10.1016/S0896-6273(02)01186-8
11. Kopell N, Börgers C, Pervouchine D, Malerba P, Tort ABL: Gamma and theta rhythms in biophysical models of hippocampal circuits. In Hippocampal Microcircuits: A Computational Modeler’s Resource Book. Edited by: Cutsuridis V, Graham B, Cobb S, Vida I. Springer, New York; 2010. http://math.bu.edu/people/nk/papers.html
12. Whittington MA, Traub RD, Kopell N, Ermentrout B, Buhl EH: Inhibition-based rhythms: experimental and mathematical observations on network dynamics. Int J Psychophysiol 2000, 38:315–336. doi:10.1016/S0167-8760(00)00173-2
13. Ermentrout GB, Kopell N: Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J Appl Math 1986, 46:233–253. doi:10.1137/0146017
14. Wang X-J, Buzsáki G: Gamma oscillation by synaptic inhibition in a hippocampal interneuronal network model. J Neurosci 1996, 16:6402–6413.
15. Ermentrout GB, Kopell N: Fine structure of neural spiking and synchronization in the presence of conduction delay. Proc Natl Acad Sci USA 1998, 95:1259–1264. doi:10.1073/pnas.95.3.1259
16. Traub RD, Miles R: Neuronal Networks of the Hippocampus. Cambridge University Press, Cambridge; 1991.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.