Managing heterogeneity in the study of neural oscillator dynamics
© Laing et al.; licensee Springer 2012
Received: 21 October 2011
Accepted: 28 February 2012
Published: 14 March 2012
We consider a coupled, heterogeneous population of relaxation oscillators used to model rhythmic oscillations in the pre-Bötzinger complex. By choosing specific values of the parameter used to describe the heterogeneity, sampled from the probability distribution of the values of that parameter, we show how the effects of heterogeneity can be studied in a computationally efficient manner. When more than one parameter is heterogeneous, full or sparse tensor product grids are used to select appropriate parameter values. The method allows us to effectively reduce the dimensionality of the model, and it provides a means for systematically investigating the effects of heterogeneity in coupled systems, linking ideas from uncertainty quantification to those for the study of network dynamics.
Networks of coupled oscillators have been studied for a number of years [1–7]. One motivation for these studies is that many neurons, when isolated (and possibly injected with a constant current), either periodically fire action potentials [8, 9] or periodically move between quiescence and repetitive firing (the alternation being referred to as bursting [10, 11]). In either case, the isolated neuron can be thought of as an oscillator. Neurons are typically coupled with many others via either gap junctions [12] or chemical synapses [13–15]; hence, a group of neurons can be thought of as a network of coupled oscillators.
As an idealisation, one might consider identical oscillators, in which case the symmetry of the network will often determine its possible dynamics [16, 17]. However, natural systems are never ideal, and it is thus more realistic to consider heterogeneous networks. There is also evidence, in a number of contexts, that heterogeneity within a population of neurons can be beneficial. Examples include calcium wave propagation [18], the synchronisation of coupled excitable units to an external drive [19, 20], and the example we study here: respiratory rhythm generation [13, 21].
One simple way to incorporate heterogeneity in a network of coupled oscillators is to select one parameter which affects the individual dynamics of each oscillator and assign a different value to this parameter for each oscillator [3, 15, 22, 23]. Doing this raises natural questions: from which distribution should these parameter values be chosen, and what effect does this heterogeneity have on the dynamics of the network?
Furthermore, if we want to answer these questions in the most computationally efficient way, we need a procedure for selecting a (somehow) optimal representative set of parameter values from this distribution. In this paper, we will address some of these issues.
In particular, we will show how, given the distribution(s) of the parameter(s) describing the heterogeneity, the representative set of parameter values can be chosen so as to accurately incorporate the effects of the heterogeneity without having to fully simulate the entire large network of oscillators.
We investigate one particular network of coupled relaxation oscillators, derived from a model of the pre-Bötzinger complex [13, 14, 24], and show how the heterogeneity in one parameter affects its dynamics. We also show how heterogeneity in more than one parameter can be incorporated using either full or sparse tensor product grids in parameter space.
Our approach thus creates a bridge between computational techniques developed in the field of uncertainty quantification [25, 26] involving collocation and sparse grids on the one hand, and network dynamics on the other. It also helps us build accurate, reduced computational models of large coupled neuron populations.
One restriction of our method is that it applies only to states where all oscillators are synchronised (in the sense of having the same period) or at a fixed point. Synchronisation of this form typically occurs when the strength of coupling between oscillators is strong enough to overcome the tendency of non-identical oscillators to desynchronise due to their disparate frequencies [2, 3, 27] and is often the behaviour of interest [6, 13, 14, 23].
We present the model in Section 2 and show how to efficiently include parameter heterogeneity in Section 3. In Section 4, we explore how varying the heterogeneity modifies bifurcations and the period of the collective oscillation. Sections 5 and 6 show how to deal with two and with more than two heterogeneous parameters, respectively. We conclude in Section 7.
2 The model
The functions and are a standard part of the Hodgkin-Huxley formalism, and synaptic communication is assumed to act instantaneously through the function . The parameter values we use initially are and .
Note that the synaptic coupling is excitatory. These parameters are the same as those used in the work of Rubin and Terman [14], except that they used and , and their function had a more rapid transition from approximately 0 to 1 as V was increased. These changes in parameter values were made to speed up the numerical integration of Equations 1 and 2; the methods presented here do not depend on the particular values of these parameters.
3 Managing heterogeneity
3.1 The continuum limit
The results for the continuum limit should provide a good approximation to the behaviour seen when N is large but finite, which is the realistic (although difficult to simulate) case. The continuum limit presented in this section was first introduced by Rubin and Terman [14], but their contribution was largely analytical, whereas ours will be largely numerical.
3.2 Stochastic Galerkin
where is the ith Legendre polynomial; this is known as a ‘polynomial chaos’ expansion. Substituting Equation 12 into Equation 9, multiplying both sides by and integrating over μ between −1 and 1, the orthogonality properties of Legendre polynomials with uniform weight allow one to obtain the ODE satisfied by . Similarly, one can use Equation 10 to obtain the ODEs governing the dynamics of . Having solved (a truncated set of) these ODEs, one could reconstruct and using Equation 12. This is referred to as the stochastic Galerkin method. However, the integrals just mentioned cannot be performed analytically; they must be calculated numerically at each time step of the integration of the ODEs for and , which is computationally intensive. Note that the optimal choice of orthogonal polynomials is determined by the distribution of the heterogeneous parameter: for a uniform distribution, we use Legendre polynomials; for other distributions, other families of orthogonal polynomials are used [25, 26].
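The projection onto Legendre polynomials can be sketched numerically. The snippet below is an illustrative stand-in, not the authors' implementation: it computes the Legendre polynomial chaos coefficients of a smooth function of a parameter uniform on [−1, 1], using the same orthogonality relation described above; the test function exp(μ) is an arbitrary choice.

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre 'polynomial chaos' projection for mu ~ Uniform(-1, 1):
#   f(mu) ~ sum_i c_i P_i(mu),  c_i = (2i+1)/2 * integral of f(mu) P_i(mu) dmu,
# with the integral evaluated by Gauss-Legendre quadrature.
def pc_coefficients(f, degree, n_quad=50):
    nodes, weights = legendre.leggauss(n_quad)   # Gauss-Legendre rule on [-1, 1]
    coeffs = []
    for i in range(degree + 1):
        P_i = legendre.Legendre.basis(i)         # the i-th Legendre polynomial
        c_i = (2 * i + 1) / 2 * np.sum(weights * f(nodes) * P_i(nodes))
        coeffs.append(c_i)
    return np.array(coeffs)

# Example: a degree-8 expansion of exp(mu) is already very accurate.
coeffs = pc_coefficients(np.exp, degree=8)
approx = legendre.Legendre(coeffs)
mu = np.linspace(-1.0, 1.0, 201)
max_err = np.max(np.abs(approx(mu) - np.exp(mu)))
```

The zeroth coefficient is simply the mean of f over the uniform distribution, here sinh(1).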
3.3 Stochastic collocation
An alternative, motivated by the stochastic collocation method, is simply to discretise in the μ direction, obtaining N different values, and then to solve Equations 9 and 10 at each of these values, using the corresponding values of to approximate the integral in Equation 11.
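The collocation idea can be sketched as follows; the smooth profile used here is a hypothetical stand-in for the actual state of the network, chosen so that the exact population average is known in closed form.

```python
import numpy as np

# Collocation sketch: the population average coupling the neurons,
#   <s> = (1/2) * integral of s(mu) over [-1, 1]   (mu uniform on [-1, 1]),
# is approximated by placing N "neurons" at the Gauss-Legendre nodes mu_j
# and weighting their contributions by the quadrature weights w_j.
def population_average(s, N):
    mu_j, w_j = np.polynomial.legendre.leggauss(N)
    return 0.5 * np.sum(w_j * s(mu_j))

# Hypothetical smooth state-versus-parameter profile; its exact average is
# (1/2) * ln(2), so the error of the N-point rule is easy to check.
s = lambda mu: 1.0 / (3.0 + mu)
err = abs(population_average(s, 10) - 0.5 * np.log(2.0))
```

With only 10 collocation points the error is already at machine precision, which is the sense in which a few well-chosen "neurons" can represent a large population.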
It is important to realise that the number (N) of neurons simulated in this approach may well be much smaller than the number of neurons in the ‘true’ system, considered to be in the thousands. Notice also that these neurons are ‘mathematically’ coupled to one another via the discretisation of the integral (Equation 11), which approximates the continuum limit.
Using the values of to approximate the integral in Equation 11, we are in fact including the influence of all other neurons (an infinite number of them in the continuum limit), not just those that we have retained in our reduced approximation. We now examine how different discretisation schemes affect several different calculations.
3.3.1 Period calculation
Convergence of the error in the period with N is shown in Figure 3 (blue circles), where we see the very rapid convergence expected from a spectral method. Beyond small N, the error in the period calculated using this method is dominated by errors in the numerical integration of Equations 9 and 10 in time, rather than by the approximate evaluation of the integral in Equation 11. (The true period was calculated using Gauss-Legendre quadrature with N significantly larger than 10^4 and is approximately 8.040104851819.) The rapid convergence of Gauss-Legendre quadrature is a consequence of the fact that the integrand is a sufficiently smooth function of μ (see Figure 2). This smoothness will arise only when the oscillators become fully synchronised.
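The contrast between spectral and algebraic convergence can be demonstrated in a few lines; the integrand below is an arbitrary smooth example, not the model's coupling term.

```python
import numpy as np

# For a smooth integrand on [-1, 1], the Gauss-Legendre error falls
# geometrically with N, while the composite midpoint rule decays only
# like N**-2. Integrand and exact value are illustrative choices.
f = lambda x: np.exp(x)
exact = np.e - 1.0 / np.e          # integral of exp(x) over [-1, 1]

def gauss_err(N):
    x, w = np.polynomial.legendre.leggauss(N)
    return abs(np.sum(w * f(x)) - exact)

def midpoint_err(N):
    x = -1.0 + (2.0 * np.arange(N) + 1.0) / N   # midpoints of N equal cells
    return abs((2.0 / N) * np.sum(f(x)) - exact)
```

With only 8 nodes, Gauss-Legendre is already near machine precision, whereas the midpoint rule still carries a noticeable error; this mirrors the rapid convergence seen in Figure 3.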
3.3.2 Hopf bifurcations
In this section, we have shown that a judicious choice of the values of the heterogeneous parameter, combined with a Gaussian quadrature scheme, allows us to calculate quantities of interest (such as the period of oscillation and the parameter value at which a Hopf bifurcation occurs) much more parsimoniously than a naive implementation using uniformly spaced values for a uniform distribution. Effectively, we have simulated the behaviour of a large network of oscillators by actually simulating a much smaller one, carefully choosing which oscillators to simulate (and how to couple them, so as to also capture the effect of the omitted ones).
Having demonstrated this, we now fix and use the quadrature rule given in Equation 15. Note that our discretisation in μ can be thought of in two different ways. Firstly, we can consider the continuum limit (N → ∞) as the true system, whose dynamics will be close to those of the real system, which consists of a large number of neurons. Our scheme is then an efficient way of simulating this true system. The other interpretation is that the true system consists of a large, finite number of neurons with randomly distributed parameter(s), and our scheme is a method for simulating such a system using far fewer oscillators.
In the next section, we investigate the effects of varying , and . In a later section, we consider more than one heterogeneous parameter and show how full and sparse tensor product grids can be used to accurately calculate the effects of further independently distributed heterogeneities.
4 The effects of heterogeneity
4.1 A single neuron
4.2 A coupled population of neurons
5 Two heterogeneous parameters
Now, consider the case where both and for each neuron are randomly (and independently) distributed. We keep the uniform distribution for the , choosing , so that the come from a uniform distribution on . We choose the from a normal distribution with mean 2.8 and standard deviation σ, and set . We keep 10 points in the μ direction and use the values of and from above to perform the integration in the μ direction. The quantity M refers to the number of different values chosen, and we thus simulate 10M appropriately coupled neurons.
The values of and for the different neurons are selected as the tensor product of the vectors formed from and . Similarly, the weights in a sum of the form of Equation 15 are formed from a tensor product of the weights associated with the direction and those associated with the .
An example of the and for is shown in Figure 11 (bottom).
We see that, as expected, the Gauss-Hermite quadrature performs best, with the error saturating between and . (Recalling that we are using 10 points in the μ direction, this is consistent with the idea that roughly the same number of points should be used in each random direction.) The Monte Carlo method, i.e. randomly choosing the values, gives convergence that scales as . Uniformly sampling the inverse cumulative distribution function gives an error that appears to scale as . This is at variance with the expected scaling of for the composite midpoint rule applied to a function with a bounded second derivative; however, the inverse CDF of a normal distribution does not have a bounded second derivative, and an error analysis of Equation 22 (not shown) predicts a scaling of , as observed.
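A minimal sketch of such a tensor-product rule follows, assuming Gauss-Legendre in the uniform (μ) direction and Gauss-Hermite in the normal direction; the mean 2.8 is taken from the text, while the value σ = 0.1 is an arbitrary choice for illustration.

```python
import numpy as np

# Tensor-product grid: Gauss-Legendre for mu ~ Uniform(-1, 1) and
# Gauss-Hermite for a normally distributed parameter. For X ~ N(m, s^2),
#   E[g(X)] = (1/sqrt(pi)) * sum_k w_k g(m + sqrt(2)*s*x_k)
# with the "physicists'" Hermite rule returned by hermgauss.
def tensor_rule(n_mu, n_g, mean=2.8, sigma=0.1):
    mu, w_mu = np.polynomial.legendre.leggauss(n_mu)
    x, w_x = np.polynomial.hermite.hermgauss(n_g)
    g = mean + np.sqrt(2.0) * sigma * x
    w_x = w_x / np.sqrt(np.pi)        # normalise the Gaussian weight
    w_mu = w_mu / 2.0                 # uniform density on [-1, 1]
    MU, G = np.meshgrid(mu, g, indexing="ij")   # 2-D grid of "neurons"
    W = np.outer(w_mu, w_x)           # tensor product of the weights
    return MU.ravel(), G.ravel(), W.ravel()

# Sanity checks: the weights integrate the constant function to 1, and
# the grid recovers the mean of the normal parameter.
MU, G, W = tensor_rule(10, 6)
total = W.sum()
g_mean = np.sum(W * G)
```

With 10 points in μ and M = 6 points in the normal direction, this grid represents 60 coupled neurons, exactly the 10M counting used in the text.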
6 Sparse grids
The process described above can obviously be generalised to more than two independently distributed random parameters. The distribution of each parameter determines the type of quadrature which should be used in that direction, and the parameter values and weights are formed from tensor products of the underlying one-dimensional rules. However, the curse of dimensionality restricts how many random parameters can be accurately sampled: if we use N points in each of D random dimensions, the number of neurons we need to simulate is N^D.
We see that, for fixed N, the sparse grid calculation is approximately two orders of magnitude more accurate than the full grid, implying in turn that the way we select the reduced number of neurons we retain to simulate the full system is critical. This relative advantage is expected to increase as the number of distributed parameters increases. As an example of the growth in the number of grid points, a level 6 calculation in 10 dimensions uses fewer than one million points, and the resulting system can easily be simulated on a desktop PC. (Note that the grid points and weights are calculated before the numerical integration starts, so the computational cost in producing data like that shown in Figure 15 is almost entirely due to the numerical integration of the ODEs, which is proportional to the number of grid points, i.e. neurons, used.)
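The point counts above can be made concrete; this small sketch only tabulates full-grid sizes (building an actual sparse grid would require a Smolyak construction such as the software cited below).

```python
# Curse of dimensionality: a full tensor-product grid with N quadrature
# points per random dimension requires N**D neurons in D dimensions.
def full_grid_size(N, D):
    return N ** D

# With N = 10 points per dimension, ten random dimensions would need
# 10**10 neurons on a full grid, whereas the level 6 sparse grid quoted
# in the text needs fewer than a million.
points_10d = full_grid_size(10, 10)
```

The exponential gap between the two counts is why sparse grids become essential once more than a handful of parameters are heterogeneous.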
In this paper, we have presented and demonstrated the use of a computationally efficient method for systematically investigating the effects of heterogeneity in the parameters of a coupled network of neural oscillators. The method constitutes a model reduction approach: by only considering oscillators with parameter values given by roots of families of orthogonal polynomials (Legendre, Hermite, …), we can use Gaussian quadrature to accurately evaluate the term coupling the oscillators, which can be thought of as the discretisation of an integral over the heterogeneous dimension(s).
Effectively, we are simulating the behaviour of an infinite number of oscillators by only simulating a small number of judiciously selected ones, modifying appropriately the way they are coupled. When the oscillators are synchronised, or at a fixed point, the function to be integrated is a smooth function of the heterogeneous parameter(s), and thus, convergence is very rapid. The technique is general (although subject to the restriction immediately above) and can be used when there is more than one heterogeneous parameter, via full or sparse tensor products in parameter space. For a given level of accuracy, we are simulating far fewer neurons than might naively be expected. The emphasis here has been on computational efficiency rather than a detailed investigation of parameter dependence.
The model we considered involved coupling only through the mean of a function, s, of the variable which, in the limit , can be thought of as an integral or, more generally, as a functional of . Thus, the techniques demonstrated here could also be applied to networks coupled through terms which, in the continuum limit, are integrals or functions of integrals. A simple example is diffusive coupling; another possibility is coupling which depends upon the correlation between some or all of the variables. As mentioned, the technique will break down once the oscillators become desynchronised, as the dependence of state on parameter(s) will no longer be smooth. However, if the oscillators form several clusters [14, 36], it may be possible to apply the ideas presented here to each cluster, as the dependence of state on parameter(s) within each cluster should still be smooth. Ideally, this reparametrisation would be done adaptively as clusters form, in the same way that algorithms for numerical integration adapt as the solution varies. Alternatively, if a single oscillator ‘breaks away’, the methods presented here could be used on the remaining synchronous oscillators, with the variables describing the state of the rogue oscillator also fully resolved. More generally, there are systems in which it is not necessarily the state of an oscillator that is a smooth function of the heterogeneous parameter, but rather the parameters describing the distribution of states [37, 38], and the ideas presented here could also be useful in this case.
The primary study with which we should compare our results is that of Rubin and Terman [14]. They considered essentially the same model as Equations 1 and 2, but with heterogeneity only in the and, taking the continuum limit, referred to the curve in space describing the state of the neurons at any instant in time as a ‘snake’. By making various assumptions, such as an infinite separation of time scales between the dynamics of the and the , and that the dynamics of the in both the active and quiescent phases are linear, they derived an expression for the snake at one point in its periodic orbit and showed that such a snake is unique and stable. They also estimated the parameter values at which the snake ‘breaks’ and some oscillators lose synchrony. In contrast with their mainly analytical study, ours is mostly numerical and thus does not rely on any of the assumptions just mentioned. Using the techniques presented here, we were able to go beyond the work of Rubin and Terman, exploring parameter space.
Our approach can be thought of as a particular parametrisation of this snake, one which takes into account the probability density of the heterogeneity parameter(s); we also showed a systematic way of extending this one-dimensional snake to two and higher dimensions. Another paper which uses some of the same ideas as presented here is that of Laing and Kevrekidis [3]. There, the authors considered a finite network of coupled oscillators and used a polynomial chaos expansion of the same form as Equation 12. However, instead of integrating the equations for the polynomial chaos coefficients directly, they used projective integration to do so, in an ‘equation-free’ approach [39] in which the equations satisfied by the polynomial chaos coefficients are never actually derived. They also chose the heterogeneous parameter values randomly from a prescribed distribution and averaged over realisations of this process in order to obtain ‘typical’ results. Similar ideas had been explored earlier by Moon et al. [27], who considered a heterogeneous network of phase oscillators.
Assisi et al. [22] considered a heterogeneous network of coupled neural oscillators, deriving equations of similar functional form to Equations 9 and 11. Their approach was to expand the variables in a way similar to Equation 12, but using a small number of arbitrarily chosen ‘modes’ rather than orthogonal polynomials. Their choice of modes, along with the fact that their neural model consisted of ODEs with polynomial right-hand sides, allowed them to analytically derive the ODEs satisfied by the coefficients of the modes. This approach allowed them to qualitatively reproduce some of the behaviour of the network, such as the formation of two clusters of oscillators. However, in the general case, the modes should be chosen as orthogonal polynomials, the specific forms of which are determined by the distribution of the heterogeneous parameter(s) [25, 26].
The network we considered was all-to-all coupled, and the techniques presented should be applicable to other similar systems. The only requirement is that the relationship between the heterogeneity parameter(s) and the state of the system (possibly after transients) be smooth (or possibly piecewise smooth). An interesting extension is the case when the network under consideration is not all-to-all. Then, the degree distribution may affect the dynamics of individual oscillators [38, 41, 42], and if we have a way of parameterising this type of heterogeneity, it might be possible to apply the ideas presented here to such networks. Degree distribution is a discrete variable, and corresponding families of orthogonal polynomials exist for a variety of discrete random variables [25, 26].
aThese sparse grids were computed using software from http://people.sc.fsu.edu/~jburkardt/.
The work of CRL and BS was supported by the Marsden Fund Council from government funding, administered by the Royal Society of New Zealand. The work of IGK and YZ was supported by the AFOSR and the US DOE (DE-SC0005176 and DE-SC00029097).
- Baesens C, Guckenheimer J, Kim S, MacKay RS: Three coupled oscillators: mode-locking, global bifurcations and toroidal chaos. Physica D, Nonlinear Phenom 1991, 49(3):387–475. 10.1016/0167-2789(91)90155-3
- Matthews PC, Mirollo RE, Strogatz SH: Dynamics of a large system of coupled nonlinear oscillators. Physica D, Nonlinear Phenom 1991, 52(2–3):293–331. 10.1016/0167-2789(91)90129-W
- Laing CR, Kevrekidis IG: Periodically-forced finite networks of heterogeneous globally-coupled oscillators: a low-dimensional approach. Physica D, Nonlinear Phenom 2008, 237(2):207–215. 10.1016/j.physd.2007.08.013
- Martens EA, Laing CR, Strogatz SH: Solvable model of spiral wave chimeras. Phys Rev Lett 2010, 104(4).
- Ermentrout B, Pascal M, Gutkin B: The effects of spike frequency adaptation and negative feedback on the synchronization of neural oscillators. Neural Comput 2001, 13(6):1285–1310. 10.1162/08997660152002861
- Pikovsky A, Rosenblum M, Kurths J: Synchronization. Cambridge University Press, Cambridge; 2001.
- Acebrón JA, Bonilla LL, Pérez Vicente CJ, Ritort F, Spigler R: The Kuramoto model: a simple paradigm for synchronization phenomena. Rev Mod Phys 2005, 77:137–185. 10.1103/RevModPhys.77.137
- Hassard B: Bifurcation of periodic solutions of the Hodgkin-Huxley model for the squid giant axon. J Theor Biol 1978, 71(3):401–420. 10.1016/0022-5193(78)90168-6
- Ermentrout GB, Terman D: Mathematical Foundations of Neuroscience. Springer, Heidelberg; 2010.
- Coombes S, Bressloff PC: Bursting: The Genesis of Rhythm in the Nervous System. World Scientific, Singapore; 2005.
- Izhikevich EM: Neural excitability, spiking and bursting. Int J Bifurc Chaos 2000, 10(6):1171–1266. 10.1142/S0218127400000840
- Coombes S: Neuronal networks with gap junctions: a study of piecewise linear planar neuron models. SIAM J Appl Dyn Syst 2008, 7:1101. 10.1137/070707579
- Butera RJ, Rinzel J, Smith JC: Models of respiratory rhythm generation in the pre-Bötzinger complex. II. Populations of coupled pacemaker neurons. J Neurophysiol 1999, 82:398.
- Rubin J, Terman D: Synchronized activity and loss of synchrony among heterogeneous conditional oscillators. SIAM J Appl Dyn Syst 2002, 1:146–174. 10.1137/S111111110240323X
- Golomb D, Rinzel J: Dynamics of globally coupled inhibitory neurons with heterogeneity. Phys Rev E 1993, 48(6):4810–4814. 10.1103/PhysRevE.48.4810
- Golubitsky M, Stewart I, Buono PL, Collins JJ: Symmetry in locomotor central pattern generators and animal gaits. Nature 1999, 401(6754):693–695. 10.1038/44416
- Ashwin P, Swift J: The dynamics of n weakly coupled identical oscillators. J Nonlinear Sci 1992, 2:69–108. 10.1007/BF02429852
- Gosak M: Cellular diversity promotes intercellular Ca2+ wave propagation. Biophys Chem 2009, 139:53–56. 10.1016/j.bpc.2008.10.001
- Pérez T, Mirasso CR, Toral R, Gunton JD: The constructive role of diversity in the global response of coupled neuron systems. Philos Trans R Soc A, Math Phys Eng Sci 2010, 368(1933):5619–5632. 10.1098/rsta.2010.0264
- Tessone CJ, Mirasso CR, Toral R, Gunton JD: Diversity-induced resonance. Phys Rev Lett 2006, 97.
- Purvis LK, Smith JC, Koizumi H, Butera RJ: Intrinsic bursters increase the robustness of rhythm generation in an excitatory network. J Neurophysiol 2007, 97(2):1515–1526.
- Assisi CG, Jirsa VK, Kelso JAS: Synchrony and clustering in heterogeneous networks with global coupling and parameter dispersion. Phys Rev Lett 2005, 94.
- White JA, Chow CC, Ritt J, Soto-Treviño C, Kopell N: Synchronization and oscillatory dynamics in heterogeneous, mutually inhibited neurons. J Comput Neurosci 1998, 5:5–16. 10.1023/A:1008841325921
- Butera RJ, Rinzel J, Smith JC: Models of respiratory rhythm generation in the pre-Bötzinger complex. I. Bursting pacemaker neurons. J Neurophysiol 1999, 82:382.
- Xiu D: Fast numerical methods for stochastic computations: a review. Commun Comput Phys 2009, 5(2–4):242–272.
- Xiu D, Karniadakis GE: Modeling uncertainty in flow simulations via generalized polynomial chaos. J Comput Phys 2003, 187:137–167. 10.1016/S0021-9991(03)00092-5
- Moon SJ, Ghanem R, Kevrekidis IG: Coarse graining the dynamics of coupled oscillators. Phys Rev Lett 2006, 96.
- Rubin JE: Bursting induced by excitatory synaptic coupling in nonidentical conditional relaxation oscillators or square-wave bursters. Phys Rev E 2006, 74(2).
- Dunmyre JR, Rubin JE: Optimal intrinsic dynamics for bursting in a three-cell network. SIAM J Appl Dyn Syst 2010, 9:154–187. 10.1137/090765808
- Quarteroni A, Sacco R, Saleri F: Numerical Mathematics. Springer, Heidelberg; 2007.
- Trefethen LN: Is Gauss quadrature better than Clenshaw-Curtis? SIAM Rev 2008, 50:67. 10.1137/060659831
- Moehlis J: Canards in a surface oxidation reaction. J Nonlinear Sci 2002, 12(4):319–345. 10.1007/s00332-002-0467-3
- Gerstner T, Griebel M: Numerical integration using sparse grids. Numer Algorithms 1998, 18(3):209–232. 10.1023/A:1019129717644
- Barthelmann V, Novak E, Ritter K: High dimensional polynomial interpolation on sparse grids. Adv Comput Math 2000, 12:273–288. 10.1023/A:1018977404843
- Smolyak SA: Quadrature and interpolation formulas for tensor products of certain classes of functions. Dokl Akad Nauk SSSR 1963, 4:240–243.
- Somers D, Kopell N: Waves and synchrony in networks of oscillators of relaxation and non-relaxation type. Physica D, Nonlinear Phenom 1995, 89(1–2):169–183. 10.1016/0167-2789(95)00198-0
- Abrams DM, Strogatz SH: Chimera states in a ring of nonlocally coupled oscillators. Int J Bifurc Chaos 2006, 16:21–37. 10.1142/S0218127406014551
- Ko TW, Ermentrout GB: Partially locked states in coupled oscillators due to inhomogeneous coupling. Phys Rev E 2008, 78.
- Kevrekidis IG, Gear CW, Hyman JM, Kevrekidis PG, Runborg O, Theodoropoulos C: Equation-free, coarse-grained multiscale computation: enabling macroscopic simulators to perform system-level analysis. Commun Math Sci 2003, 1(4):715–762.
- Xiu D, Kevrekidis IG, Ghanem R: An equation-free, multiscale approach to uncertainty quantification. Comput Sci Eng 2005, 7(3):16–23. 10.1109/MCSE.2005.46
- Rajendran K, Kevrekidis IG: Coarse graining the dynamics of heterogeneous oscillators in networks with spectral gaps. Phys Rev E 2011, 84.
- Tsoumanis AC, Siettos CI, Bafas GV, Kevrekidis IG: Computations in social networks: from agent-based modeling to coarse-grained stability and bifurcation analysis. Int J Bifurc Chaos 2010, 20(11):3673–3688. 10.1142/S0218127410027945
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.