# Managing heterogeneity in the study of neural oscillator dynamics

Carlo R. Laing^{1} (email author), Yu Zou^{2}, Ben Smith^{1,3} and Ioannis G. Kevrekidis^{2}

*The Journal of Mathematical Neuroscience* 2012, **2**:5

https://doi.org/10.1186/2190-8567-2-5

© Laing et al.; licensee Springer 2012

**Received: **21 October 2011

**Accepted: **28 February 2012

**Published: **14 March 2012

## Abstract

We consider a coupled, heterogeneous population of relaxation oscillators used to model rhythmic oscillations in the pre-Bötzinger complex. By choosing specific values of the parameter used to describe the heterogeneity, sampled from the probability distribution of the values of that parameter, we show how the effects of heterogeneity can be studied in a computationally efficient manner. When more than one parameter is heterogeneous, full or sparse tensor product grids are used to select appropriate parameter values. The method allows us to effectively reduce the dimensionality of the model, and it provides a means for systematically investigating the effects of heterogeneity in coupled systems, linking ideas from uncertainty quantification to those for the study of network dynamics.

### Keywords

heterogeneity, neural oscillators, pre-Bötzinger complex, model reduction, bifurcation, computation

## 1 Introduction

Networks of coupled oscillators have been studied for a number of years [1–7]. One motivation for these studies is that many neurons, when isolated (and possibly injected with a constant current), either periodically fire action potentials [8, 9] or periodically move between quiescence and repetitive firing (the alternation being referred to as bursting [10, 11]). In either case, the isolated neuron can be thought of as an oscillator. Neurons are typically coupled with many others via either gap junctions [12] or chemical synapses [13–15]; hence, a group of neurons can be thought of as a network of coupled oscillators.

As an idealisation, one might consider identical oscillators; in which case, the symmetry of the network will often determine its possible dynamics [16, 17]. However, natural systems are never ideal, and thus, it is more realistic to consider *heterogeneous* networks. Also, there is evidence in a number of contexts that heterogeneity within a population of neurons can be beneficial. Examples include calcium wave propagation [18], the synchronisation of coupled excitable units to an external drive [19, 20], and the example we study here: respiratory rhythm generation [13, 21].

One simple way to incorporate heterogeneity in a network of coupled oscillators is to select one parameter which affects the individual dynamics of each oscillator and assign a different value to this parameter for each oscillator [3, 15, 22, 23]. Doing this raises natural questions: from which distribution should these parameter values be chosen, and what effect does this heterogeneity have on the dynamics of the network?

Furthermore, if we want to answer these questions in the most computationally efficient way, we need a procedure for selecting a (somehow) optimal representative set of parameter values from this distribution. In this paper, we will address some of these issues.

In particular, we will show how, given the distribution(s) of the parameter(s) describing the heterogeneity, the representative set of parameter values can be chosen so as to accurately incorporate the effects of the heterogeneity without having to fully simulate the entire large network of oscillators.

We investigate one particular network of coupled relaxation oscillators, derived from a model of the pre-Bötzinger complex [13, 14, 24], and show how the heterogeneity in one parameter affects its dynamics. We also show how heterogeneity in more than one parameter can be incorporated using either full or sparse tensor product grids in parameter space.

Our approach thus creates a bridge between computational techniques developed in the field of uncertainty quantification [25, 26] involving collocation and sparse grids on the one hand, and network dynamics on the other. It also helps us build accurate, reduced computational models of large coupled neuron populations.

One restriction of our method is that it applies only to states where all oscillators are synchronised (in the sense of having the same period) or at a fixed point. Synchronisation of this form typically occurs when the strength of coupling between oscillators is strong enough to overcome the tendency of non-identical oscillators to desynchronise due to their disparate frequencies [2, 3, 27] and is often the behaviour of interest [6, 13, 14, 23].

We present the model in Section 2 and show how to efficiently include parameter heterogeneity in Section 3. In Section 4, we explore how varying the heterogeneity modifies bifurcations and alters the period of the collective oscillation. Sections 5 and 6 show how to deal with two and with more heterogeneous parameters, respectively. We conclude in Section 7.

## 2 The model

where ${V}_{i}$ is the voltage of neuron *i*, and ${h}_{i}$ is a channel state variable for neuron *i* governing the inactivation of persistent sodium. Equations 1 and 2 were derived from the model in the works of Butera *et al.* [13, 24] by blocking the currents responsible for action potentials. A similar model with $N=2$ was considered in the work of Rubin [28], and Dunmyre and Rubin [29] considered synchronisation in the case $N=3$, where one of the neurons was quiescent, another was tonically firing, and the third could be either quiescent, tonically firing or bursting. The neurons are all-to-all coupled via the term ${I}_{\mathrm{syn}}^{i}$; when ${g}_{\mathrm{syn}}=0$, the neurons are uncoupled. The various functions involved in the model equations are the following:

The functions $\tau (V)$, ${h}_{\mathrm{\infty}}(V)$ and $m(V)$ are a standard part of the Hodgkin-Huxley formalism [8], and synaptic communication is assumed to act instantaneously through the function $s(V)$. The parameter values we use initially are ${V}_{\mathrm{Na}}=50$, ${g}_{l}=2.4$, ${V}_{l}=-65$, ${V}_{\mathrm{syn}}=0$, $C=0.21$, $\u03f5=0.1$, ${g}_{\mathrm{syn}}=0.3$ and ${g}_{\mathrm{Na}}=2.8$.

Note that the synaptic coupling is excitatory. These parameters are the same as those used in the work of Rubin and Terman [14], except that they [14] used $\u03f5=0.01$ and ${g}_{l}=2.8$, and their function $s(V)$ had a more rapid transition from approximately 0 to 1 as *V* was increased. These changes in parameter values were made to speed up the numerical integration of Equations 1 and 2; the methods presented here do not depend on the particular values of these parameters.

For these parameter values, the network quickly settles to a stable synchronous state, *i.e.* all neurons oscillate periodically with the same period, although the heterogeneity in the ${I}_{\mathrm{app}}^{i}$ means that each neuron follows a slightly different periodic orbit in its own $(V,h)$ phase space. (Because spiking currents have been removed in the derivation of Equations 1 and 2, these oscillations are interpreted as burst envelopes, *i.e.* neuron *i* is assumed to be spiking when ${V}_{i}$ is high and quiescent when ${V}_{i}$ is low.) It is this stable synchronous periodic behaviour that is of interest: In what parameter regions does it exist, and how does the period vary as parameters are varied? Butera *et al.* [13] observed that including parameter heterogeneity in a spiking model for the pre-Bötzinger complex increased both the range of parameters over which bursting occurred and the range of burst frequencies (this being functionally advantageous for respiration), and this was the motivation for the study of Rubin and Terman [14].

## 3 Managing heterogeneity

### 3.1 The continuum limit

In the limit $N\to \mathrm{\infty}$, *V* and *h* will be smooth functions of the continuous variable ${I}_{\mathrm{app}}$. We now consider this case, where ${I}_{\mathrm{app}}$ is a continuous random variable with a uniform density on the interval $[10,25]$. We parametrise ${I}_{\mathrm{app}}$ as ${I}_{\mathrm{app}}={I}_{m}+{I}_{s}\mu $, where the probability density function for *μ* is as follows:

$$p(\mu )=\begin{cases}1/2, & -1\le \mu \le 1,\\ 0, & \text{otherwise.}\end{cases}$$

The results for $N\to \mathrm{\infty}$ should provide a good approximation to the behaviour seen when *N* is large but finite, which is the realistic (although difficult to simulate) case. The continuum limit presented in this section was first introduced by Rubin and Terman [14], but their contribution was largely analytical, whereas ours will be largely numerical.

### 3.2 Stochastic Galerkin

One classical approach is to expand *V* and *h* in a series of polynomials in *μ*, with the choice of particular polynomials determined by the probability density of *μ*, *i.e.* the distribution of the heterogeneous parameter. For the uniform density $p(\mu )$, one would choose Legendre polynomials, written as follows:

$$V(\mu ,t)=\sum_{i=0}^{\mathrm{\infty}}{a}_{i}(t){P}_{i}(\mu ),\qquad h(\mu ,t)=\sum_{i=0}^{\mathrm{\infty}}{b}_{i}(t){P}_{i}(\mu ),$$

where ${P}_{i}$ is the *i* th Legendre polynomial; this is known as a ‘polynomial chaos’ expansion [3]. Substituting Equation 12 into Equation 9, multiplying both sides by ${P}_{j}(\mu )p(\mu )$ and integrating over *μ* between −1 and 1, the orthogonality properties of Legendre polynomials with uniform weight allow one to obtain the ODE satisfied by ${a}_{j}(t)$. Similarly, one can use Equation 10 to obtain the ODEs governing the dynamics of ${b}_{j}(t)$. Having solved (a truncated set of) these ODEs, one could reconstruct $V(\mu ,t)$ and $h(\mu ,t)$ using Equation 12. This is referred to as the stochastic Galerkin method [25]. However, the integrals just mentioned cannot be performed analytically. They must be calculated numerically at each time step in the integration of the ODEs for ${a}_{i}$ and ${b}_{i}$; this is computationally intensive. Note that the optimal choice of orthogonal polynomials is determined by the distribution of the heterogeneous parameter: for a uniform distribution, we use Legendre polynomials; for other distributions, other families of orthogonal polynomials are used [25, 26].
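
The Legendre projection just described can be sketched numerically. In the fragment below, the function `v` is an arbitrary smooth stand-in for a snapshot of the state $V(\mu ,t)$ (it is an illustrative choice, not the model); the coefficients follow from the orthogonality relation $\int {P}_{i}{P}_{j}\,p(\mu )\,d\mu =\delta_{ij}/(2j+1)$ with $p(\mu )=1/2$.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Sketch of a polynomial-chaos (Legendre) projection for the uniform density
# p(mu) = 1/2 on [-1, 1].  v(mu) is an illustrative smooth stand-in for a
# snapshot V(mu, t) of the continuum state, not the model itself.
def pc_coefficients(v, degree, nquad=64):
    """Coefficients a_j = (2j+1) * int v(mu) P_j(mu) p(mu) dmu, j = 0..degree."""
    mu, w = leggauss(nquad)                       # nodes/weights for weight 1
    vals = v(mu)
    coeffs = np.empty(degree + 1)
    for j in range(degree + 1):
        Pj = legval(mu, np.eye(degree + 1)[j])    # P_j evaluated at the nodes
        coeffs[j] = (2 * j + 1) * 0.5 * np.sum(w * vals * Pj)
    return coeffs

v = lambda mu: np.tanh(2 * mu) + 0.1 * mu**2      # smooth surrogate state
a = pc_coefficients(v, degree=12)

# Reconstruction via the truncated expansion: v(mu) ~ sum_j a_j P_j(mu)
mu_test = np.linspace(-1, 1, 7)
recon = legval(mu_test, a)
```

For a smooth state the coefficients decay rapidly, so a modest truncation reconstructs $V(\mu )$ accurately; this is the same smoothness that the quadrature-based methods below exploit.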

### 3.3 Stochastic collocation

An alternative, motivated by the stochastic collocation method [25], is to simply discretise in the *μ* direction, obtaining *N* different ${\mu}_{i}$ values, and then solve Equations 9 and 10 at each of the ${\mu}_{i}$, using the values of $s(V({\mu}_{i},t))$ to approximate the integral in Equation 11.

It is important to realise that the number (*N*) of neurons simulated in this approach may well be much smaller than the number of neurons in the ‘true’ system, considered to be in the thousands. Notice also that these neurons are ‘mathematically’ coupled to one another via the discretisation of the integral (Equation 11), which is an approximation of the continuum limit.

Using the values of $s(V({\mu}_{i},t))$ to approximate the integral in Equation 11, we are in fact including the influence of *all* other neurons (an infinite number of them in the continuum limit), not just those that we have retained in our reduced approximation. We now examine how different discretisation schemes affect several different calculations.
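
The structure of the collocation scheme can be sketched as follows. The pre-Bötzinger equations themselves are not reproduced in this excerpt, so the sketch below uses a generic FitzHugh-Nagumo-style relaxation oscillator as an assumed stand-in for Equations 9 and 10 (the functions `s`, the drive term and all rate constants are illustrative assumptions); the point being demonstrated is the structure: solve the ODEs at collocation points ${\mu}_{i}$ and couple all units through a quadrature-weighted mean of $s(V)$.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Sketch of stochastic collocation for a heterogeneous all-to-all network.
# A FitzHugh-Nagumo-style oscillator stands in for Equations 9-10 (an assumed
# surrogate, not the pre-Botzinger model); the collocation structure is the
# point: N 'neurons' at Gauss-Legendre nodes, coupled through a weighted mean.
N = 10
mu, w = leggauss(N)
w = 0.5 * w                          # weights for the density p(mu) = 1/2
I_app = 17.5 + 7.5 * mu              # heterogeneous drive, I_app = I_m + I_s*mu

def s(V):                            # sigmoidal synaptic activation (assumed)
    return 1.0 / (1.0 + np.exp(-10.0 * V))

def rhs(state, g_syn=0.3, eps=0.08):
    V, h = state
    coupling = np.sum(w * s(V))      # quadrature approximation of the integral
    dV = V - V**3 / 3 - h + 0.1 * I_app / 17.5 + g_syn * coupling * (1.0 - V)
    dh = eps * (V + 0.7 - 0.8 * h)
    return np.array([dV, dh])

def rk4(state, dt, steps):
    """Fixed-step RK4, advancing all N collocation 'neurons' simultaneously."""
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * dt * k1)
        k3 = rhs(state + 0.5 * dt * k2)
        k4 = rhs(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return state

state = np.zeros((2, N))             # rows: V and h at the N collocation points
final = rk4(state, dt=0.05, steps=4000)
```

Note that the coupling term weights each retained oscillator by its quadrature weight ${w}_{i}$, so each simulated unit represents a whole interval of the ${\mu}$ (and hence ${I}_{\mathrm{app}}$) axis.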

#### 3.3.1 Period calculation

The simplest choice is to discretise *μ* uniformly into *N* values, ${\mu}_{i}$, and to solve Equations 9 and 10 at each of the ${\mu}_{i}$. Defining ${\mu}_{i}=-1+2(i-1/2)/N$ for $i=1,2,\dots ,N$, we approximate the integral in Equation 11 using the composite midpoint rule:

$$\int_{-1}^{1}s(V(\mu ,t))p(\mu )\,d\mu \approx \frac{1}{N}\sum_{i=1}^{N}s(V({\mu}_{i},t)).$$

For a range of values of *N*, we plot the error in Figure 3 with red stars; the error is defined to be the absolute value of the difference between the calculated period and the true period (defined below). We see that the error scales as ${N}^{-2}$, as expected from numerical analysis [30]. (All numerical integration was performed using Matlab’s ode113 with an absolute tolerance of ${10}^{-10}$ and a relative tolerance of ${10}^{-12}$.)
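
The ${N}^{-2}$ scaling of the composite midpoint rule can be checked directly on a smooth integrand; the exponential below is an arbitrary smooth stand-in for $s(V(\mu ,t))$.

```python
import numpy as np

# Illustration of the O(N^-2) error of the composite midpoint rule used to
# approximate an average over mu in [-1, 1] with density p(mu) = 1/2.
# f is an arbitrary smooth stand-in for s(V(mu, t)).
def midpoint_average(f, N):
    mu = -1 + 2 * (np.arange(1, N + 1) - 0.5) / N   # mu_i = -1 + 2(i-1/2)/N
    return np.sum(f(mu)) / N                         # weights 1/N (p = 1/2)

f = lambda mu: np.exp(mu)
true = 0.5 * (np.e - 1 / np.e)                       # exact average of exp
errs = [abs(midpoint_average(f, N) - true) for N in (10, 20, 40)]
ratios = [errs[i] / errs[i + 1] for i in range(2)]   # close to 4 when N doubles
```

Doubling *N* reduces the error by a factor close to 4, the signature of second-order convergence.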

An alternative is to use the Gauss-Legendre quadrature. (That is, for a fixed *N*, using the non-uniformly spaced ${\mu}_{i}$ of this rule will result in a smaller error than that obtained using uniform spacing; equivalently, to obtain a fixed accuracy, using non-uniform spacing will require a smaller *N* than that needed for uniform spacing.) Specifically, for a fixed *N*, we choose ${\mu}_{i}$ to be the *i*th root of ${P}_{N}(\mu )$, where ${P}_{N}$ is the *N*th Legendre polynomial, normalised so that ${P}_{N}(1)=1$, and the weights are

$${w}_{i}=\frac{1}{(1-{\mu}_{i}^{2}){[{P}_{N}^{\prime}({\mu}_{i})]}^{2}},$$

so that the integral in Equation 11 is approximated by ${\sum}_{i=1}^{N}{w}_{i}\,s(V({\mu}_{i},t))$.

Convergence of the error in the period with *N* is shown in Figure 3 (blue circles), where we see the very rapid convergence expected from a spectral method. For $N\gtrsim 50$, the error in the period calculation using this method is dominated by errors in the numerical integration of Equations 9 and 10 in time, rather than in the approximate evaluation of the integral in Equation 11. (The true period was calculated using the Gauss-Legendre quadrature with *N* significantly larger than ${10}^{4}$ and is approximately 8.040104851819.) The rapid convergence of the Gauss-Legendre quadrature is a consequence of the fact that $s(V(\mu ))$ is a sufficiently smooth function of *μ* (see Figure 2). This smoothness arises only when the oscillators are fully synchronised.
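
The spectral convergence can be illustrated in isolation from the ODEs. Below, `numpy`'s `leggauss` supplies the roots of ${P}_{N}$ and the associated weights (for unit weight, so they are halved to account for $p(\mu )=1/2$), and the integrand is again an arbitrary smooth stand-in for $s(V(\mu ))$.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Sketch of the Gauss-Legendre evaluation of the coupling average: nodes are
# the roots of P_N; leggauss weights (for unit weight on [-1,1]) are halved to
# account for the density p(mu) = 1/2.  f stands in for s(V(mu)).
def gl_average(f, N):
    mu, w = leggauss(N)
    return 0.5 * np.sum(w * f(mu))

f = lambda mu: np.exp(np.sin(2 * mu))      # smooth, so convergence is spectral
ref = gl_average(f, 100)                   # high-order reference value
errs = [abs(gl_average(f, N) - ref) for N in (4, 8, 16)]
```

In contrast to the algebraic ${N}^{-2}$ decay of the midpoint rule, the error here falls faster than any power of *N* while the integrand remains smooth, which is exactly the situation when the oscillators are synchronised.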

#### 3.3.2 Hopf bifurcations

As a second test, we computed the value of ${I}_{m}$ at which a Hopf bifurcation occurs and examined its convergence with *N*, the number of points used, for the two different schemes (the true value, again calculated using the Gauss-Legendre quadrature with a large *N*, is approximately ${I}_{m}=33.1262$).

### 3.4 Summary

In this section, we have shown that a judicious choice of the values of the heterogeneous parameter, combined with a Gaussian quadrature scheme, allows us to calculate quantities of interest (such as the period of oscillation and the parameter value at which a Hopf bifurcation occurs) much more parsimoniously than a naive implementation using uniformly spaced ${I}_{i}$ values for a uniform distribution. Effectively, we have simulated the behaviour of a large network of oscillators by actually simulating a much smaller one, carefully choosing *which* oscillators to simulate (and how to couple them so as to also capture the effect of the omitted ones).

Having demonstrated this, we now fix $N=10$ and use the quadrature rule given in Equation 15. Note that our discretisation in *μ* can be thought of in two different ways. Firstly, we can consider the continuum limit ($N\to \mathrm{\infty}$) as the true system, whose dynamics will be close to the real system which consists of a large number of neurons. Our scheme is then an efficient way of simulating this true system. The other interpretation is that the true system consists of a large, finite number of neurons with randomly distributed parameter(s), and our scheme is a method for simulating such a system but using far fewer oscillators.

In the next section, we investigate the effects of varying ${I}_{m}$, ${I}_{s}$ and ${g}_{\mathrm{syn}}$. In a later section, we consider more than one heterogeneous parameter and show how tensor product grids and sparse tensor product grids can be used to accurately calculate the effects of further, independently distributed, heterogeneities.

## 4 The effects of heterogeneity

### 4.1 A single neuron

We first consider a single uncoupled neuron (*i.e.* $N=1$ and ${g}_{\mathrm{syn}}=0$). The behaviour as ${I}_{m}$ is varied is shown in Figure 6 (left panel). For this range of ${I}_{m}$, there is always one fixed point, but it undergoes two Hopf bifurcations as ${I}_{m}$ is varied, leading to a family of stable periodic orbits. The period decreases monotonically with increasing ${I}_{m}$. The lower Hopf bifurcation results in a canard periodic solution [32] which very rapidly increases in amplitude as ${I}_{m}$ is increased. This is related to the separation of time scales between the *V* dynamics (fast) and the *h* dynamics (slow). In the left panel of Figure 6, we see that some of the neurons in the network whose behaviour is shown in Figure 1 would be quiescent *when uncoupled*, while most would be periodically oscillating.

### 4.2 A coupled population of neurons

A convenient summary of the network’s state is the weighted mean $\overline{V}={\sum}_{i=1}^{10}{w}_{i}{V}_{i}$ of the voltages ${V}_{i}$, and the variance of the ${V}_{i}$’s is simply ${\sum}_{i=1}^{10}{w}_{i}{({V}_{i}-\overline{V})}^{2}$. (Recall that the weights ${w}_{i}$ are given in Equation 14.)
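
These weighted statistics are simple to compute; the point is that, with collocation, the population mean and variance use the quadrature weights ${w}_{i}$ rather than a naive $1/N$ average. The voltage data below are an illustrative stand-in.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Weighted network statistics under collocation: the weights w_i (normalised
# for the density p(mu) = 1/2) replace a naive 1/N average.  V is an
# illustrative stand-in for the voltages V_i.
mu, w = leggauss(10)
w = 0.5 * w                          # now sum(w) == 1 for the uniform density
V = np.cos(np.pi * mu)               # stand-in voltage data at the nodes
V_bar = np.sum(w * V)                # weighted mean
V_var = np.sum(w * (V - V_bar) ** 2) # weighted variance
```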

We now consider the behaviour of the coupled network when *two* parameters are varied. Figure 9 (top) shows the two curves of Hopf bifurcations in the $({I}_{m},{I}_{s})$ plane for ${g}_{\mathrm{syn}}=0.3$. Increasing the ‘spread’ of the heterogeneity, *i.e.* increasing ${I}_{s}$, increases the range of values of ${I}_{m}$ for which periodic oscillations are possible (between the Hopf bifurcations), but stable periodic orbits need not exist over the entire range. For ${I}_{s}$ larger than about 6, *i.e.* for very heterogeneous neurons, the synchronous behaviour created in the rightmost Hopf bifurcation shown in Figure 9 (top) breaks up as ${I}_{m}$ is decreased at constant ${I}_{s}$, leading to complex oscillations (not shown). The break-up of the synchronous behaviour always involves the neurons with the lowest values of *μ*, *i.e.* the lowest values of ${I}_{\mathrm{app}}$. The curve in Figure 9 (top) where synchronous behaviour breaks up was found by slowly decreasing ${I}_{m}$ at constant ${I}_{s}$ until the break-up was observed. In principle, it could be found by numerical continuation of the stable periodic orbit created in the rightmost Hopf bifurcation, monitoring the orbit’s stability.

For small ${g}_{\mathrm{syn}}$ (*i.e.* weak coupling), the neurons are no longer synchronous, due to the break-up just discussed. The conclusion is that, in order to obtain robust synchronous oscillations, we need moderate to large coupling (${g}_{\mathrm{syn}}$) and a not-too-heterogeneous population (${I}_{s}$ not too large). This is perhaps not surprising, but our main point here is to demonstrate how the computation of the effects of heterogeneity can easily be accelerated. We now consider more than one heterogeneous parameter.

## 5 Two heterogeneous parameters

Now, consider the case where both ${I}_{\mathrm{app}}$ and ${g}_{\mathrm{Na}}$ for each neuron are randomly (independently) distributed. We keep the uniform distribution for the ${I}_{\mathrm{app}}$, choosing ${I}_{m}=25$, ${I}_{s}=7.5$ so that the ${I}_{\mathrm{app}}$ come from a uniform distribution on $[17.5,32.5]$. We choose the ${g}_{\mathrm{Na}}$ from a normal distribution with a mean of 2.8 and standard deviation *σ*, and set ${g}_{\mathrm{syn}}=0.3$. We keep 10 points in the *μ* direction and use the values of ${\mu}_{i}$ and ${w}_{i}$ from above to perform the integration in the *μ* direction. The quantity *M* refers to the number of different ${g}_{\mathrm{Na}}$ values chosen, and we thus simulate 10*M* appropriately coupled neurons.

The values of ${I}_{\mathrm{app}}$ and ${g}_{\mathrm{Na}}$ for the different neurons are selected based on the tensor product of the vectors formed from ${I}_{\mathrm{app}}$ and ${g}_{\mathrm{Na}}$. Similarly, the weights in a sum of the form (Equation 15) will be formed from a tensor product of the ${w}_{i}$ associated with the ${I}_{\mathrm{app}}$ direction and those associated with the ${g}_{\mathrm{Na}}$.
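
The tensor-product construction can be sketched as follows; the spread `sigma = 0.1` is an assumed value for illustration, while ${I}_{m}=25$, ${I}_{s}=7.5$ and the mean ${g}_{\mathrm{Na}}=2.8$ follow the text.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from numpy.polynomial.hermite_e import hermegauss

# Sketch of the full tensor-product grid for two heterogeneous parameters:
# Gauss-Legendre in the mu (I_app) direction, probabilists' Gauss-Hermite in
# the lambda (g_Na) direction.  sigma is an assumed value for illustration.
Nmu, M = 10, 15
mu, w_mu = leggauss(Nmu)
w_mu = 0.5 * w_mu                        # density p(mu) = 1/2 on [-1, 1]
lam, w_lam = hermegauss(M)               # weight exp(-lambda^2 / 2)
w_lam = w_lam / np.sqrt(2 * np.pi)       # normalise to the unit normal density

I_app = 25.0 + 7.5 * mu                  # I_app = I_m + I_s * mu
sigma = 0.1                              # assumed spread of g_Na
g_Na = 2.8 + sigma * lam

# Tensor products: 10*M parameter pairs and the corresponding weights.
II, GG = np.meshgrid(I_app, g_Na, indexing="ij")
WW = np.outer(w_mu, w_lam)
pairs = np.column_stack([II.ravel(), GG.ravel()])
weights = WW.ravel()
```

The 10*M* weights sum to 1, and weighted averages over the grid reproduce the prescribed means of both parameters.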

In the continuum limit, we write ${g}_{\mathrm{Na}}=2.8+\sigma \lambda $, where *λ* has the probability density function

$$q(\lambda )=\frac{{e}^{-{\lambda}^{2}/2}}{\sqrt{2\pi}},$$

*i.e.* *λ* is normally distributed with zero mean and unit variance. Then, as mentioned, the continuum variables *V* and *h* are written in the form $V(\mu ,\lambda ,t)$ and $h(\mu ,\lambda ,t)$, respectively, and the sum in Equation 3 becomes a double integral over *μ* and *λ*. Discretising in the *μ* direction as before, this leaves an integral over *λ*, weighted by $q(\lambda )$, to be approximated.

The simplest approach is Monte Carlo: randomly choose *M* values of *λ* from the unit normal distribution and calculate an approximation to the integral as the average of the integrand over these samples, so that the weights in the *λ* direction are all equal to $1/M$. An example of the ${\mu}_{i}$ and ${\lambda}_{j}$ for $M=15$ is shown in Figure 11 (top). Another approach is to transform the integral to one over $[0,1]$ and use the composite midpoint rule on that new variable. Specifically, if we define $z=Q(\lambda )$, *i.e.* $\lambda ={Q}^{-1}(z)$, where *Q* is the cumulative distribution function for *λ*, then for a general function *f*,

$$\int_{-\mathrm{\infty}}^{\mathrm{\infty}}f(\lambda )q(\lambda )\,d\lambda =\int_{0}^{1}f({Q}^{-1}(z))\,dz,$$

which can be approximated using uniformly spaced midpoints in *z*. The third approach is to use the Gauss-Hermite quadrature in the *λ* direction. We approximate the integral as

$$\int_{-\mathrm{\infty}}^{\mathrm{\infty}}f(\lambda )q(\lambda )\,d\lambda \approx \sum_{j=1}^{N}{v}_{j}f({\lambda}_{j}),$$

where ${\lambda}_{j}$ is the *j*th root of ${H}_{N}$, the *N*th ‘probabilists’ Hermite polynomial’, and the weights ${v}_{j}$ are given by

$${v}_{j}=\frac{N!}{{N}^{2}{[{H}_{N-1}({\lambda}_{j})]}^{2}}.$$

An example of the ${\mu}_{i}$ and ${\lambda}_{j}$ for $M=15$ is shown in Figure 11 (bottom).

We computed the error in the calculated period for each of these three schemes as *M* is varied. (The true period was calculated using the Gauss-Hermite quadrature with a large *M* in the ${g}_{\mathrm{Na}}$ direction.)

We see that, as expected, the Gauss-Hermite quadrature performs the best, with the error saturating between $M=10$ and $M=20$. (Recalling that we are using 10 points in the *μ* direction, this is consistent with the idea that roughly the same number of points should be used in each random direction.) Using the Monte Carlo method, *i.e.* randomly choosing the ${g}_{\mathrm{Na}}$, gives convergence that scales as ${M}^{-1/2}$. Uniformly sampling the inverse cumulative distribution function gives an error that appears to scale as ${M}^{-1}$. This is at variance with the expected scaling of ${M}^{-2}$ for the composite midpoint rule applied to a function with a bounded second derivative, but the inverse CDF of a normal distribution (*i.e.* ${Q}^{-1}(z)$) does not have a bounded second derivative, and an error analysis of Equation 22 (not shown) predicts a scaling of ${M}^{-1}$, as observed.
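
The three schemes can be compared on a toy problem where the answer is known exactly: for $f(\lambda )={e}^{\lambda}$ and $\lambda \sim N(0,1)$, the expectation is ${e}^{1/2}$. Here *f* is an arbitrary smooth stand-in for the *λ*-dependence of the quantity of interest, not the model itself.

```python
import numpy as np
from statistics import NormalDist
from numpy.polynomial.hermite_e import hermegauss

# Sketch comparing the three treatments of the normally distributed direction:
# Monte Carlo, midpoint on the inverse CDF, and probabilists' Gauss-Hermite.
# f = exp is a smooth stand-in; E[exp(lambda)] = exp(1/2) for lambda ~ N(0,1).
f = np.exp
true = np.exp(0.5)

def monte_carlo(M, seed=0):
    lam = np.random.default_rng(seed).standard_normal(M)
    return f(lam).mean()

def inverse_cdf_midpoint(M):
    z = (np.arange(1, M + 1) - 0.5) / M          # midpoints of [0, 1]
    lam = np.array([NormalDist().inv_cdf(zi) for zi in z])
    return f(lam).mean()

def gauss_hermite(M):
    lam, v = hermegauss(M)                       # roots of He_M, weight e^(-x^2/2)
    return np.sum(v * f(lam)) / np.sqrt(2 * np.pi)

err_mc = abs(monte_carlo(1000) - true)
err_icdf = abs(inverse_cdf_midpoint(100) - true)
err_gh = abs(gauss_hermite(15) - true)
```

Even with far fewer points, the Gauss-Hermite estimate is orders of magnitude more accurate than either alternative, mirroring the ordering reported above.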

## 6 Sparse grids

The process described above can obviously be generalised to more than two randomly, but independently, distributed parameters. The distribution of each parameter determines the type of quadrature which should be used in that direction, and the parameter values and weights are formed from tensor products of the underlying one-dimensional rules. However, the curse of dimensionality will restrict how many random parameters can be accurately sampled. If we use *N* points in each of *D* random dimensions, the number of neurons we need to simulate is ${N}^{D}$.

Sparse grids, constructed using the Smolyak algorithm, reduce this cost. Write the one-dimensional quadrature rule using ${N}_{i}$ points applied to a function *f* as ${Q}_{i}(f)$, where the correspondence between *i* and ${N}_{i}$ is given in the following: ${N}_{i}={2}^{i+1}-1$. Then, the level-*L* rule in two spatial dimensions is

$$A(L,2)=\sum_{|\mathbf{i}|=L}({Q}_{{i}_{1}}\otimes {Q}_{{i}_{2}})-\sum_{|\mathbf{i}|=L-1}({Q}_{{i}_{1}}\otimes {Q}_{{i}_{2}}),$$

where $\mathbf{i}=({i}_{1},{i}_{2})$ and $|\mathbf{i}|={i}_{1}+{i}_{2}$, and the approximation of the integral of *f* over the domain ${[-1,1]}^{2}$ is $A(L,2)(f)$. So for example, the level 2 rule (in 2 spatial dimensions and using Gauss-Legendre quadrature) is

$$A(2,2)=({Q}_{0}\otimes {Q}_{2})+({Q}_{1}\otimes {Q}_{1})+({Q}_{2}\otimes {Q}_{0})-({Q}_{0}\otimes {Q}_{1})-({Q}_{1}\otimes {Q}_{0}).$$

^{a}Figure 14 (bottom) shows the grid for rule $A(3,2)$.

We now consider a network with four independently distributed heterogeneous parameters, *i.e.* 4 independent random dimensions. A comparison of the error in calculating the period of collective oscillation using full and sparse grids is shown in Figure 15.

We see that for fixed *N*, the sparse grid calculation is approximately two orders of magnitude more accurate than the full grid - implying, in turn, that *the way* we select the reduced number of neurons we retain to simulate the full system is critical. This relative advantage is expected to increase as the number of distributed parameters increases. As an example of the growth in the number of grid points, a level 6 calculation in 10 dimensions uses fewer than one million points, and the resulting system can be easily simulated on a desktop PC. (Note that the grid points and weights are calculated before the numerical integration starts, so the computational cost in producing data like that shown in Figure 15 is almost entirely due to numerical integration of the ODEs, which is proportional to the number of grid points, *i.e.* neurons, used.)
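
A Smolyak-type sparse quadrature can be sketched as follows. This is a minimal implementation of the standard combination formula with the size map ${N}_{i}={2}^{i+1}-1$ used above; the indexing convention (levels starting at $i=0$) is an assumption that may differ in detail from the paper's, and the separable test function is illustrative.

```python
import numpy as np
from itertools import product
from math import comb
from numpy.polynomial.legendre import leggauss

# Minimal Smolyak sparse-grid quadrature on [-1, 1]^d, using 1D Gauss-Legendre
# rules with N_i = 2^(i+1) - 1 points.  Standard combination formula; the
# level indexing (starting at i = 0) is an assumed convention.
def rule_1d(i):
    return leggauss(2 ** (i + 1) - 1)

def sparse_integrate(f, L, d):
    total = 0.0
    for idx in product(range(L + 1), repeat=d):
        k = L - sum(idx)                      # only L-d+1 <= |i| <= L enter
        if not (0 <= k <= d - 1):
            continue
        coeff = (-1) ** k * comb(d - 1, k)    # Smolyak combination coefficient
        nodes, weights = zip(*(rule_1d(i) for i in idx))
        for pt in product(*(range(len(n)) for n in nodes)):
            x = np.array([nodes[j][p] for j, p in enumerate(pt)])
            w = np.prod([weights[j][p] for j, p in enumerate(pt)])
            total += coeff * w * f(x)
    return total

f = lambda x: np.exp(x.sum())                 # smooth separable test function
approx = sparse_integrate(f, L=3, d=2)
exact = (np.e - 1 / np.e) ** 2                # integral of exp(x+y) over [-1,1]^2
```

For this smooth integrand, the level-3 sparse rule already matches the exact value to high accuracy while evaluating far fewer points than the corresponding full tensor grid.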

## 7 Discussion

In this paper, we have presented and demonstrated the use of a computationally efficient method for systematically investigating the effects of heterogeneity in the parameters of a coupled network of neural oscillators. The method constitutes a model reduction approach: By only considering oscillators with parameter values given by roots of families of orthogonal polynomials (Legendre, Hermite, …), we can use the Gaussian quadrature to accurately evaluate the term coupling the oscillators, which can be thought of as the discretisation of an integral over the heterogeneous dimension(s).

Effectively, we are simulating the behaviour of an infinite number of oscillators by only simulating a small number of judiciously selected ones, modifying appropriately the way they are coupled. When the oscillators are synchronised, or at a fixed point, the function to be integrated is a smooth function of the heterogeneous parameter(s), and thus, convergence is very rapid. The technique is general (although subject to the restriction immediately above) and can be used when there is more than one heterogeneous parameter, via full or sparse tensor products in parameter space. For a given level of accuracy, we are simulating far fewer neurons than might naively be expected. The emphasis here has been on computational efficiency rather than a detailed investigation of parameter dependence.

The model we considered involved coupling only through the mean of a function, *s*, of the variable ${V}_{i}$ which, in the limit $N\to \mathrm{\infty}$, can be thought of as an integral or, more generally, as a functional of $V(\mu )$. Thus, the techniques demonstrated here could also be applied to networks coupled through terms which, in the continuum limit, are integrals or functions of integrals. A simple example is diffusive coupling [3]; another possibility is coupling which is dependent upon the correlation between some or all of the variables. As mentioned, the technique will break down once the oscillators become desynchronised, as the dependence of state on parameter(s) will no longer be smooth. However, if the oscillators form several clusters [14, 36], it may be possible to apply the ideas presented here to each cluster, as the dependence of state on parameter(s) *within* each cluster should still be smooth. Ideally, this reparametrisation would be done adaptively as clusters form, in the same way that algorithms for numerical integration adapt as the solution varies [30]. Alternatively, if a single oscillator ‘breaks away’ [27], the methods presented here could be used on the remaining synchronous oscillators, with the variables describing the state of the rogue oscillator also fully resolved. More generally, there are systems in which it is not necessarily the *state* of an oscillator that is a smooth function of the heterogeneous parameter, but the *parameters describing the distribution of states*[37, 38], and the ideas presented here could also be useful in this case.

The primary study with which we should compare our results is that of Rubin and Terman [14]. They considered essentially the same model as Equations 1 and 2 but with heterogeneity only in the ${I}_{\mathrm{app}}$ and, taking the continuum limit, referred to the curve in $(V,h)$ space describing the state of the neurons at any instant in time as a ‘snake’. By making various assumptions, such as an infinite separation of time scales between the dynamics of the ${V}_{i}$ and the ${h}_{i}$, and that the dynamics of the ${h}_{i}$ in both the active and quiescent phases is linear, they derived an expression for the snake at one point in its periodic orbit and showed that such a snake is unique and stable. They also estimated the parameter values at which the snake ‘breaks’ and some oscillators lose synchrony. In contrast with their mainly analytical study, ours is mostly numerical and thus does not rely on any of the assumptions just mentioned. Using the techniques presented here, we were able to go beyond the work of Rubin and Terman, exploring parameter space.

Our approach can be thought of as a particular parametrisation of this snake, which takes into account the probability density of the heterogeneity parameter(s); we also showed a systematic way of extending this one-dimensional snake to two and higher dimensions. Another paper which uses some of the same ideas as presented here is that of Laing and Kevrekidis [3]. There, the authors considered a finite network of coupled oscillators and used a polynomial chaos expansion of the same form as Equation 12. However, instead of integrating the equations for the polynomial chaos coefficients directly, they used projective integration [39] to do so, in an ‘equation-free’ approach [40] in which the equations satisfied by the polynomial chaos coefficients are never actually derived. They also chose the heterogeneous parameter values randomly from a prescribed distribution and averaged over realisations of this process in order to obtain ‘typical’ results. Similar ideas had been explored earlier by Moon *et al.*[27], who considered a heterogeneous network of phase oscillators.

Assisi *et al.*[22] considered a heterogeneous network of coupled neural oscillators, deriving equations of similar functional form to Equations 9 and 11. Their approach was to expand the variables in a way similar to Equation 12 but using a small number of arbitrarily chosen ‘modes’ rather than orthogonal polynomials. Their choice of modes, along with the fact that their neural model consisted of ODEs with polynomial right hand sides, allowed them to analytically derive the ODEs satisfied by the coefficients of the modes. This approach allowed them to qualitatively reproduce some of the behaviour of the network such as the formation of two clusters of oscillators. However, in the general case modes should be chosen as orthogonal polynomials, the specific forms of which are determined by the distribution of the heterogeneous parameter(s) [25, 26].

The network we considered was all-to-all coupled, and the techniques presented should be applicable to other similar systems. The only requirement is that the relationship between the heterogeneity parameter(s) and the state of the system (possibly after transients) be smooth (or possibly piecewise smooth). An interesting extension is the case when the network under consideration is not all-to-all. Then, the effects of degree distribution may affect the dynamics of individual oscillators [38, 41, 42], and if we have a way of parameterising this type of heterogeneity, it might be possible to apply the ideas presented here to such networks. Degree distribution is a discrete variable, and corresponding families of orthogonal polynomials exist for a variety of discrete random variables [25, 26].

## End note

^{a}These sparse grids were computed using software from http://people.sc.fsu.edu/~jburkardt/.

## Declarations

### Acknowledgements

The work of CRL and BS was supported by the Marsden Fund Council from government funding, administered by the Royal Society of New Zealand. The work of IGK and YZ was supported by the AFOSR and the US DOE (DE-SC0005176 and DE-SC00029097).

## References

1. Baesens C, Guckenheimer J, Kim S, MacKay RS: **Three coupled oscillators: mode-locking, global bifurcations and toroidal chaos.** *Physica D, Nonlinear Phenom* 1991, **49**(3):387–475. 10.1016/0167-2789(91)90155-3
2. Matthews PC, Mirollo RE, Strogatz SH: **Dynamics of a large system of coupled nonlinear oscillators.** *Physica D, Nonlinear Phenom* 1991, **52**(2–3):293–331. 10.1016/0167-2789(91)90129-W
3. Laing CR, Kevrekidis IG: **Periodically-forced finite networks of heterogeneous globally-coupled oscillators: a low-dimensional approach.** *Physica D, Nonlinear Phenom* 2008, **237**(2):207–215. 10.1016/j.physd.2007.08.013
4. Martens EA, Laing CR, Strogatz SH: **Solvable model of spiral wave chimeras.** *Phys Rev Lett* 2010, **104**(4).
5. Ermentrout B, Pascal M, Gutkin B: **The effects of spike frequency adaptation and negative feedback on the synchronization of neural oscillators.** *Neural Comput* 2001, **13**(6):1285–1310. 10.1162/08997660152002861
6. Pikovsky A, Rosenblum M, Kurths J: *Synchronization*. Cambridge University Press, Cambridge; 2001.
7. Acebrón JA, Bonilla LL, Pérez Vicente CJ, Ritort F, Spigler R: **The Kuramoto model: a simple paradigm for synchronization phenomena.** *Rev Mod Phys* 2005, **77**:137–185. 10.1103/RevModPhys.77.137
8. Hassard B: **Bifurcation of periodic solutions of the Hodgkin-Huxley model for the squid giant axon.** *J Theor Biol* 1978, **71**(3):401–420. 10.1016/0022-5193(78)90168-6
9. Ermentrout GB, Terman D: *Mathematical Foundations of Neuroscience*. Springer, Heidelberg; 2010.
10. Coombes S, Bressloff PC: *Bursting: The Genesis of Rhythm in the Nervous System*. World Scientific, Singapore; 2005.
11. Izhikevich EM: **Neural excitability, spiking and bursting.** *Int J Bifurc Chaos* 2000, **10**(6):1171–1266. 10.1142/S0218127400000840
12. Coombes S: **Neuronal networks with gap junctions: a study of piecewise linear planar neuron models.** *SIAM J Appl Dyn Syst* 2008, **7**:1101. 10.1137/070707579
13. Butera RJ, Rinzel J, Smith JC: **Models of respiratory rhythm generation in the pre-Bötzinger complex. II. Populations of coupled pacemaker neurons.** *J Neurophysiol* 1999, **82**:398.
14. Rubin J, Terman D: **Synchronized activity and loss of synchrony among heterogeneous conditional oscillators.** *SIAM J Appl Dyn Syst* 2002, **1**:146–174. 10.1137/S111111110240323X
15. Golomb D, Rinzel J: **Dynamics of globally coupled inhibitory neurons with heterogeneity.** *Phys Rev E* 1993, **48**(6):4810–4814. 10.1103/PhysRevE.48.4810
16. Golubitsky M, Stewart I, Buono PL, Collins JJ: **Symmetry in locomotor central pattern generators and animal gaits.** *Nature* 1999, **401**(6754):693–695. 10.1038/44416
17. Ashwin P, Swift J: **The dynamics of *n* weakly coupled identical oscillators.** *J Nonlinear Sci* 1992, **2**:69–108. 10.1007/BF02429852
18. Gosak M: **Cellular diversity promotes intercellular Ca^{2+} wave propagation.** *Biophys Chem* 2009, **139**:53–56. 10.1016/j.bpc.2008.10.001
19. Pérez T, Mirasso CR, Toral R, Gunton JD: **The constructive role of diversity in the global response of coupled neuron systems.** *Philos Trans R Soc A, Math Phys Eng Sci* 2010, **368**(1933):5619–5632. 10.1098/rsta.2010.0264
20. Tessone CJ, Mirasso CR, Toral R, Gunton JD: **Diversity-induced resonance.** *Phys Rev Lett* 2006, **97**.
21. Purvis LK, Smith JC, Koizumi H, Butera RJ: **Intrinsic bursters increase the robustness of rhythm generation in an excitatory network.** *J Neurophysiol* 2007, **97**(2):1515–1526.
22. Assisi CG, Jirsa VK, Kelso JAS: **Synchrony and clustering in heterogeneous networks with global coupling and parameter dispersion.** *Phys Rev Lett* 2005, **94**.
23. White JA, Chow CC, Ritt J, Soto-Treviño C, Kopell N:
**Synchrony and clustering in heterogeneous networks with global coupling and parameter dispersion.***Phys Rev Lett*2005.,**94:**Google Scholar - White JA, Chow CC, Ritt J, Soto-Treviño C, Kopell N:
**Synchronization and oscillatory dynamics in heterogeneous, mutually inhibited neurons.***J Comput Neurosci*1998,**5:**5–16. 10.1023/A:1008841325921View ArticleGoogle Scholar - Butera RJ, Rinzel J, Smith JC:
**Models of respiratory rhythm generation in the pre-Bötzinger complex. I. Bursting pacemaker neurons.***J Neurophysiol*1999,**82:**382.Google Scholar - Xiu D:
**Fast numerical methods for stochastic computations: a review.***Commun Comput Phys*2009,**5**(2–4):242–272.MathSciNetGoogle Scholar - Xiu D, Karniadakis GE:
**Modeling uncertainty in flow simulations via generalized polynomial chaos.***J Comput Phys*2003,**187:**137–167. 10.1016/S0021-9991(03)00092-5MathSciNetView ArticleGoogle Scholar - Moon SJ, Ghanem R, Kevrekidis IG:
**Coarse graining the dynamics of coupled oscillators.***Phys Rev Lett*2006.,**96:**Google Scholar - Rubin JE: Bursting induced by excitatory synaptic coupling in nonidentical conditional relaxation oscillators or square-wave bursters. Phys Rev E 2006.,74(2):Google Scholar
- Dunmyre JR, Rubin JE:
**Optimal intrinsic dynamics for bursting in a three-cell network.***SIAM J Appl Dyn Syst*2010,**9:**154–187. 10.1137/090765808MathSciNetView ArticleGoogle Scholar - Quarteroni A, Sacco R, Saleri F:
*Numerical Mathematics*. Springer, Heidelberg; 2007.Google Scholar - Trefethen LN:
**Is Gauss quadrature better than Clenshaw-Curtis?***SIAM Rev*2008,**50:**67. 10.1137/060659831MathSciNetView ArticleGoogle Scholar - Moehlis J:
**Canards in a surface oxidation reaction.***J Nonlinear Sci*2002,**12**(4):319–345. 10.1007/s00332-002-0467-3MathSciNetView ArticleGoogle Scholar - Gerstner T, Griebel M:
**Numerical integration using sparse grids.***Numer Algorithms*1998,**18**(3):209–232. 10.1023/A:1019129717644MathSciNetView ArticleGoogle Scholar - Barthelmann V, Novak E, Ritter K:
**High dimensional polynomial interpolation on sparse grids.***Adv Comput Math*2000,**12:**273–288. 10.1023/A:1018977404843MathSciNetView ArticleGoogle Scholar - Smolyak SA:
**Quadrature and interpolation formulas for tensor products of certain classes of functions.***Dokl Akad Nauk SSSR*1963,**4:**240–243.Google Scholar - Somers D, Kopell N:
**Waves and synchrony in networks of oscillators of relaxation and non-relaxation type.***Physica D, Nonlinear Phenom*1995,**89**(1–2):169–183. 10.1016/0167-2789(95)00198-0MathSciNetView ArticleGoogle Scholar - Abrams DM, Strogatz SH:
**Chimera states in a ring of nonlocally coupled oscillators.***Int J Bifurc Chaos*2006,**16:**21–37. 10.1142/S0218127406014551MathSciNetView ArticleGoogle Scholar - Ko TW, Ermentrout GB:
**Partially locked states in coupled oscillators due to inhomogeneous coupling.***Phys Rev E*2008.,**78:**Google Scholar - Kevrekidis IG, Gear CW, Hyman JM, Kevrekidis PG, Runborg O, Theodoropoulos C:
**Equation-free, coarse-grained multiscale computation: enabling macroscopic simulators to perform system-level analysis.***Commun Math Sci*2003,**1**(4):715–762.MathSciNetView ArticleGoogle Scholar - Xiu D, Kevrekidis IG, Ghanem R:
**An equation-free, multiscale approach to uncertainty quantification.***Comput Sci Eng*2005,**7**(3):16–23. 10.1109/MCSE.2005.46View ArticleGoogle Scholar - Rajendran K, Kevrekidis IG:
**Coarse graining the dynamics of heterogeneous oscillators in networks with spectral gaps.***Phys Rev E*2011.,**84:**Google Scholar - Tsoumanis AC, Siettos CI, Bafas GV, Kevrekidis IG:
**Computations in social networks: from agent-based modeling to coarse-grained stability and bifurcation analysis.***Int J Bifurc Chaos*2010,**20**(11):3673–3688. 10.1142/S0218127410027945MathSciNetView ArticleGoogle Scholar

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.