Phase–Amplitude Descriptions of Neural Oscillator Models
 Kyle CA Wedgwood^{1},
 Kevin K Lin^{2},
 Ruediger Thul^{1} and
 Stephen Coombes^{1}
DOI: 10.1186/2190-8567-3-2
© K.C.A. Wedgwood et al.; licensee Springer 2013
Received: 31 July 2012
Accepted: 18 January 2013
Published: 24 January 2013
Abstract
Phase oscillators are a common starting point for the reduced description of many single neuron models that exhibit a strongly attracting limit cycle. The framework for analysing such models in response to weak perturbations is now particularly well advanced, and has allowed for the development of a theory of weakly connected neural networks. However, the strong-attraction assumption may well not be the natural one for many neural oscillator models. For example, the popular conductance-based Morris–Lecar model is known to respond to periodic pulsatile stimulation in a chaotic fashion that cannot be adequately described with a phase reduction. In this paper, we generalise the phase description to a phase–amplitude description that allows one to track the evolution of distance from the cycle as well as phase on cycle. We use a classical technique from the theory of ordinary differential equations that makes use of a moving coordinate system to analyse periodic orbits. The subsequent phase–amplitude description is shown to be very well suited to understanding the response of the oscillator to external stimuli (which are not necessarily weak). We consider a number of examples of neural oscillator models, ranging from planar through to high-dimensional models, to illustrate the effectiveness of this approach in providing an improvement over the standard phase-reduction technique. As an explicit application of this phase–amplitude framework, we consider in some detail the response of a generic planar model where the strong-attraction assumption does not hold, and examine the response of the system to periodic pulsatile forcing. In addition, we explore how the presence of dynamical shear can lead to a chaotic response.
Keywords
Phase–amplitude · Oscillator · Chaos · Non-weak coupling

1 Introduction
One only has to look at the plethora of papers and books on the topic of phase oscillators in mathematical neuroscience to see the enormous impact that this tool from dynamical systems theory has had on the way we think about describing neurons and neural networks. Much of this work has its roots in the theory of ordinary differential equations (ODEs) and has been promoted for many years in the work of Winfree [1], Guckenheimer [2], Holmes [3], Kopell [4], Ermentrout [5] and Izhikevich [6] to name but a few. For a recent survey, we refer the reader to the book by Ermentrout and Terman [7]. At heart, the classic phase reduction approach recognises that if a high dimensional nonlinear dynamical system (as a model of a neuron) exhibits a stable limit cycle attractor then trajectories near the cycle can be projected onto the cycle.
The assumption that phase alone is enough to capture the essentials of neural response is made more for mathematical convenience than for physiological reasons. Indeed, for the popular type I Morris–Lecar (ML) firing model with standard parameters, direct numerical simulations with pulsatile forcing show responses that cannot be explained solely with a phase model [16]. The failure of a phase description is in itself no surprise and underlies why the community emphasises the word weakly in the phrase “weakly connected neural networks”. Indeed, there are a number of potential pitfalls when applying phase reduction techniques to a system that is not in a weakly forced regime. The typical construction of the phase response curve uses only linear information about the isochrons, and nonlinear effects will come into play the further we move away from the limit cycle. This problem can be diminished by taking higher-order approximations to the isochrons and using this information in the construction of a higher-order PRC [17]. Even with perfect information about the isochrons, the phase reduction still assumes persistence of the limit cycle and instantaneous relaxation back to cycle. However, the presence of nearby invariant phase-space structures, such as (unstable) fixed points and invariant manifolds, may result in trajectories spending long periods of time away from the limit cycle. Moreover, strong forcing will necessarily take one away from the neighbourhood of a cycle where a phase description is expected to hold. Thus, a reduced description that captures some notion of distance from cycle is a key component of any theory of forced limit cycle oscillators. The development of phase–amplitude models that better characterise the response of popular high-dimensional single neuron models is precisely the topic of this paper.
Given that it is a major challenge to construct an isochronal foliation, we use non-isochronal phase–amplitude coordinates as a practical method for obtaining a more accurate description of neural systems. Recently, Medvedev [18] has used this approach to understand in more detail the synchronisation of linearly coupled stochastic limit cycle oscillators.
In Sect. 2, we consider a general coordinate transformation, which recasts the dynamics of a system in terms of phase–amplitude coordinates. This approach is directly taken from the classical theory for analysing periodic orbits of ODEs, originally considered for planar systems in [19], and for general systems in [20]. We advocate it here as one way to move beyond a purely phase-centric perspective. We illustrate the transformation by applying it to a range of popular neuron models. In Sect. 3, we consider how inputs to the neuron are transformed under these coordinate transformations and derive the evolution equations for the forced phase–amplitude system. This reduces to the standard phase description in the appropriate limit. Importantly, we show that the behaviour of the phase–amplitude system is much better able to capture that of the original single neuron model from which it is derived. Focusing on pulsatile forcing, we explore the conditions for neural oscillator models to exhibit shear-induced chaos [16]. Finally, in Sect. 4, we discuss the relevance of this work to developing a theory of network dynamics that can improve upon the standard weak coupling approach.
2 Phase–Amplitude Coordinates
and Df is the Jacobian of the vector field f evaluated along the periodic orbit u. The derivation of this system may be found in Appendix A. It is straightforward to show that ${f}_{1}(\theta ,\rho )\to 0$ as $\rho \to 0$, ${f}_{2}(\theta ,0)=0$ and that $\partial {f}_{2}(\theta ,0)/\partial \rho =0$. In the above, ${f}_{1}$ captures the shear present in the system, that is, whether the speed of θ increases or decreases dependent on the distance from cycle. A precise definition of shear is given in [22]. Additionally, $A(\theta )$ describes the θ-dependent rate of attraction to or repulsion from the cycle.
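As a guide to the notation, and consistent with the properties just listed, the unforced dynamics in these coordinates take the schematic form (this is a sketch only; the full expressions for ${f}_{1}$, ${f}_{2}$ and $A$ are derived in Appendix A):

```latex
\dot{\theta} = 1 + f_{1}(\theta,\rho), \qquad
\dot{\rho}   = A(\theta)\,\rho + f_{2}(\theta,\rho),
```

where θ is the phase on cycle and ρ measures the transverse distance from it; near the cycle the phase advances at unit speed and the amplitude contracts (or expands) at the linear rate $A(\theta )$.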
and ${\mathrm{I}}_{n}$ is the $n\times n$ identity matrix. Here, h and B describe the effect that the perturbations have on θ and ρ, respectively. Details of the derivation are given in Appendix A. For planar models, $B={\mathrm{I}}_{2}$. To demonstrate the application of the above coordinate transformation, we now consider some popular single neuron models.
2.1 A 2D Conductance-Based Model
Here, v is the membrane voltage, whilst w is a gating variable describing the fraction of membrane ion channels that are open at any time. The first equation expresses Kirchhoff’s current law across the cell membrane, with $I(t)$ representing a stimulus in the form of an injected current. The detailed form of the model is given in Appendix B.1. The ML model has a very rich bifurcation structure. Roughly speaking, by varying a constant current $I(t)\equiv {I}_{0}$, one observes, in different parameter regions, dynamical regimes corresponding to sinks, limit cycles, and Hopf, saddle-node and homoclinic bifurcations, as well as combinations of the above. These scenarios are discussed in detail in [7] and [24].
In the next example, we show how the same ideas carry over to higher-dimensional models.
2.2 A 4D Conductance-Based Model
3 Pulsatile Forcing of Phase–Amplitude Oscillators
where δ is the Dirac δ-function. This describes T-periodic kicks to the voltage variable. Even such a simple forcing paradigm can give rise to rich dynamics [16]. For the periodically kicked ML model, shear forces can lead to chaotic dynamics as folds and horseshoes accumulate under the forcing. This means that the response of the neuron is extremely sensitive to the phase at which the kicks occur. In terms of neural response, this means that the neuron is unreliable [27].
The behaviour of oscillators under such periodic pulsatile forcing is the subject of a number of studies; see, e.g., [27–30]. Of particular relevance here is [27], in which qualitative reasoning about the mechanisms that bring about shear in such models is supplemented by direct numerical simulations to detect the presence of chaotic solutions. For the ML model in a parameter region close to the homoclinic regime, kicks can cause trajectories to pass near the saddle-node, and folds may occur as a result [16].
This system exhibits dynamical shear, which, under certain conditions, can lead to chaotic dynamics. The shear parameter σ dictates how much trajectories are ‘sped up’ or ‘slowed down’ dependent on their distance from the limit cycle, whilst λ is the rate of attraction back to the limit cycle, which is independent of θ. Supposing that the function P is smooth but non-constant, trajectories will be taken a variable distance from the cycle upon the application of a kick. When kicks are repeated, this geometric mechanism can lead to repeated stretching and folding of phase space. It is clear that the larger σ is in (15), the more shear is present, and the more likely we are to observe the folding effect. In a similar way, smaller values of λ mean that the shear has longer to act upon trajectories, again resulting in a greater likelihood of chaos. Finally, to observe a chaotic response, we must ensure that the shear forces have sufficient time to act, meaning that T, the time between kicks, must not be too small.
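To make this geometric mechanism concrete, the following sketch iterates the stroboscopic map obtained by solving the linear shear system exactly between kicks, with kicks applied to the amplitude variable. The parameter values, the choice $P(\theta )=\mathrm{sin}(2\pi \theta )$, and the normalisation of phase to $[0,1)$ are illustrative assumptions, not values from the paper.

```python
import math

# Illustrative parameters (assumptions) for the linear shear model:
#   theta' = 1 + sigma*rho,  rho' = -lam*rho,
# with kicks rho -> rho + eps*P(theta) applied every T time units.
sigma, lam, eps, T = 3.0, 0.5, 0.1, 5.0

def P(theta):
    # a smooth, non-constant kick profile (illustrative choice)
    return math.sin(2 * math.pi * theta)

def kick_map(theta, rho):
    """One step of the kick-then-flow stroboscopic map.

    Between kicks the linear system integrates exactly:
      rho(t)   = rho0 * exp(-lam*t)
      theta(t) = theta0 + t + (sigma/lam) * rho0 * (1 - exp(-lam*t))
    """
    rho += eps * P(theta)                      # instantaneous kick
    decay = math.exp(-lam * T)
    theta = (theta + T + (sigma / lam) * rho * (1 - decay)) % 1.0
    rho *= decay
    return theta, rho

theta, rho = 0.0, 0.0
for _ in range(1000):
    theta, rho = kick_map(theta, rho)
print(theta, rho)
```

Plotting the sequence of $(\theta ,\rho )$ iterates for increasing σ makes the stretch-and-fold geometry discussed above directly visible.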
This stretching and folding action can clearly lead to the formation of Smale horseshoes, which are well known to lead to a type of chaotic behaviour. However, horseshoes may coexist with sinks, meaning the resulting chaotic dynamics would be transient. Wang and Young proved that, under appropriate conditions, there is a set of kick periods T of positive Lebesgue measure for which the system experiences a stronger form of sustained chaotic behaviour, characterised by a positive Lyapunov exponent for almost all initial conditions and the existence of a ‘strange attractor’; see, e.g., [28–30].
By comparing with the phase–amplitude dynamics described by Eqs. (8)–(9), we see that the model of shear considered in (15) is a proxy for a more general system, with ${f}_{1}(\theta ,\rho )\to \sigma \rho $, $A(\theta )\to \lambda $, $h(\theta ,\rho )\to 0$ and $\zeta (\theta )\to P(\theta )$.
This simple model with a harmonic form for $P(\theta )$ provides insight into how strange attractors can be formed. Kicks along the isochrons, or ones that map isochrons to one another, will not produce strange attractors, but merely phase-shifts. What causes the stretching and folding is the variation in how far points are moved, as measured in the direction transverse to the isochrons. For the linear system (15), variation in this sense is generated by any non-constant $P(\theta )$; the larger the ratio $\sigma \epsilon /\lambda $, the larger the variation (see [16] for a recent discussion).
Although this explicit map is convenient for numerical simulations, we prefer to work with the full stroboscopic map (22)–(23), which is particularly useful for comparing and contrasting the behaviour of different planar single neuron models with arbitrary kick strength. As an indication of the presence of chaos in the dynamics resulting from this system, we evaluate the largest Lyapunov exponent of the map (22)–(23) by numerically evolving a tangent vector and computing its rate of growth (or contraction); see e.g. [32] for details.
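The tangent-vector calculation can be sketched as follows, applied for concreteness to the explicit kick map of the linear shear model rather than the full stroboscopic map (22)–(23); the map, its Jacobian, and the parameter values are illustrative assumptions.

```python
import math

# Largest Lyapunov exponent of the kick map of the linear shear model,
# computed by evolving a tangent vector and averaging its log growth rate.
sigma, lam, eps, T = 3.0, 0.5, 0.1, 5.0   # illustrative values (assumptions)
d = math.exp(-lam * T)                     # amplitude decay over one period
a = (sigma / lam) * (1.0 - d)              # shear accumulated between kicks

def P(theta):
    return math.sin(2 * math.pi * theta)

def dP(theta):
    return 2 * math.pi * math.cos(2 * math.pi * theta)

def step(theta, rho):
    # kick in rho, then exact flow for time T
    rho_k = rho + eps * P(theta)
    return (theta + T + a * rho_k) % 1.0, rho_k * d

def jacobian(theta, rho):
    # Exact Jacobian of one map step, differentiating step() by hand.
    return [[1.0 + a * eps * dP(theta), a],
            [d * eps * dP(theta),       d]]

theta, rho, v = 0.0, 0.0, (1.0, 0.0)
log_sum, n = 0.0, 5000
for _ in range(n):
    J = jacobian(theta, rho)
    v = (J[0][0] * v[0] + J[0][1] * v[1],
         J[1][0] * v[0] + J[1][1] * v[1])
    norm = math.hypot(*v)
    log_sum += math.log(norm)
    v = (v[0] / norm, v[1] / norm)         # renormalise the tangent vector
    theta, rho = step(theta, rho)

lyap = log_sum / n                          # largest LE per map iterate
print(lyap)
```

A positive estimate indicates exponential separation of nearby trajectories; note that here the exponent is measured per map iterate, so dividing by T gives a rate per unit time.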
Now that we know the relative contribution of kicks in v to kicks in $(\theta ,\rho )$, it is also useful to know where kicks actually occur in terms of θ, as this will determine the contribution of a train of kicks to the $(\theta ,\rho )$ dynamics. In Figs. 11c and d, we plot the distribution of kicks as a function of θ. For the ML model, we observe that the kicks are distributed over all phases, while for the FHN model there is a grouping of kicks around the region where ${P}_{2}$ is roughly zero. This means that kicks will not be felt as much in the ρ variable, and so trajectories here do not get kicked far from cycle. This helps explain why it is more difficult to generate chaotic responses in the FHN model.
After transients, we observe a $1:1$ phase-locked state for the FHN model. For a phase-locked state, small perturbations will ultimately decay, as the perturbed trajectories also end up at the phase-locked state after some transient behaviour. This results in a negative largest Lyapunov exponent of −0.0515. We note the sharply peaked distribution of kick phases, which is to be expected for discrete-time systems possessing a negative largest Lyapunov exponent, since such systems tend to possess sinks. The phase-locked state here occurs where ${P}_{2}$ is small, suggesting that trajectories stay close to the limit cycle. Since kicks do not move trajectories away from cycle, there is no possibility of folding, and hence no chaotic behaviour. For the ML model, we observe chaotic dynamics around a strange attractor, where small perturbations can grow, leading to a positive largest Lyapunov exponent of 0.6738. This time, the kicks are distributed fairly uniformly across θ, and so some kicks will take trajectories away from the limit cycle, thus leading to shear-induced folding and chaotic behaviour.
4 Discussion
In this paper, we have used the notion of a moving orthonormal coordinate system around a limit cycle to study dynamics in a neighbourhood of it. This phase–amplitude coordinate system can be constructed for any given ODE system supporting a limit cycle. A clear advantage of the transformed description over the original one is that it allows us to gain insight into the effect of time-dependent perturbations, using the notion of shear, as we have illustrated with case studies of popular neural models in two and higher dimensions. Whilst this coordinate transformation does not result in any reduction in the dimensionality of the system, as is the case with classical phase reduction techniques, it opens up avenues for moving away from the weak coupling limit, where $\epsilon \to 0$. Importantly, it emphasises the role of the two functions ${P}_{1}(\theta ,\rho )$ and ${P}_{2}(\theta )$ that provide more information about inputs to the system than the iPRC alone. It has been demonstrated that moderately small perturbations can exert remarkable influence on dynamics in the presence of other invariant structures [16], which cannot be captured by a phase-only description. In addition, small perturbations can accumulate if the timescale of the perturbation is shorter than the timescale of attraction back to the limit cycle. This deserves particular consideration in the analysis of neural systems, where oscillators may be connected to thousands of other units, so that small inputs can quickly accumulate.
with an appropriate identification of the interaction functions ${H}_{1,2}$ in terms of the biological interaction between neurons and the single neuron functions ${P}_{1,2}$. Such phase–amplitude network models are ideally suited to describing the behaviour of the mean-field signal in networks of strongly gap-junction coupled ML neurons [36, 37], which is known to vary because individual neurons make transitions between cycles of different amplitudes. Moreover, in the same network, weakly coupled oscillator theory fails to explain how the synchronous state can stabilise with increasing coupling strength (predicting that it is always unstable), as observed numerically. All of the above are topics of ongoing research and will be reported upon elsewhere.
Appendix A: Derivation of the Transformed Dynamical System
Appendix B: Gallery of Models
B.1 Morris–Lecar
The function ${m}_{\mathrm{\infty}}(v)$ models the action of fast voltage-gated calcium ion channels; ${v}_{\mathrm{Ca}}$ is the reversal (bias) potential for the calcium current and ${g}_{\mathrm{Ca}}$ the corresponding conductance. The functions ${\tau}_{w}(v)$ and ${w}_{\mathrm{\infty}}(v)$ similarly describe the dynamics of slower-acting potassium channels, with reversal potential ${v}_{\mathrm{K}}$ and conductance ${g}_{\mathrm{K}}$. The constants ${v}_{\mathrm{l}}$ and ${g}_{\mathrm{l}}$ characterise the leakage current that is present even when the neuron is in a quiescent state. Parameter values are $C=20.0\text{ \mu F/cm}^{2}$, ${g}_{\mathrm{l}}=2.0\text{ mmho/cm}^{2}$, ${g}_{\mathrm{K}}=8.0\text{ mmho/cm}^{2}$, ${g}_{\mathrm{Ca}}=4.0\text{ mmho/cm}^{2}$, $\varphi =0.23$, $I=39.5\text{ \mu A/cm}^{2}$, ${v}_{\mathrm{l}}=-60.0\text{ mV}$, ${v}_{\mathrm{K}}=-84.0\text{ mV}$, ${v}_{\mathrm{Ca}}=120.0\text{ mV}$, ${v}_{1}=-1.2\text{ mV}$, ${v}_{2}=18.0\text{ mV}$, ${v}_{3}=12.0\text{ mV}$, and ${v}_{4}=17.4\text{ mV}$.
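As a concrete check of this parameter set, the sketch below integrates the standard ML equations, assuming the usual sigmoidal forms for ${m}_{\mathrm{\infty}}$, ${w}_{\mathrm{\infty}}$ and ${\tau}_{w}$ (as in, e.g., [24]) and the conventional negative signs on the leak and potassium reversal potentials; the forward-Euler scheme and initial condition are illustrative choices.

```python
import math

# Standard Morris-Lecar right-hand side with the parameter values of B.1
# (reversal potentials taken with their conventional negative signs).
C, g_l, g_K, g_Ca = 20.0, 2.0, 8.0, 4.0
phi, I0 = 0.23, 39.5
v_l, v_K, v_Ca = -60.0, -84.0, 120.0
v1, v2, v3, v4 = -1.2, 18.0, 12.0, 17.4

def m_inf(v):
    # fast calcium activation (standard tanh form)
    return 0.5 * (1.0 + math.tanh((v - v1) / v2))

def w_inf(v):
    # potassium activation (standard tanh form)
    return 0.5 * (1.0 + math.tanh((v - v3) / v4))

def tau_w(v):
    # voltage-dependent potassium time scale
    return 1.0 / math.cosh((v - v3) / (2.0 * v4))

def ml_rhs(v, w):
    dv = (I0 - g_l * (v - v_l) - g_K * w * (v - v_K)
          - g_Ca * m_inf(v) * (v - v_Ca)) / C
    dw = phi * (w_inf(v) - w) / tau_w(v)
    return dv, dw

# Forward-Euler integration over 1000 ms from an arbitrary initial point.
v, w, dt = -20.0, 0.1, 0.01
vs = []
for _ in range(100_000):
    dv, dw = ml_rhs(v, w)
    v += dt * dv
    w += dt * dw
    vs.append(v)
print(min(vs), max(vs))
```

Printing or plotting the voltage trace provides a quick sanity check that the parameters place the model in the regime discussed in the main text; for production work, a higher-order adaptive integrator would be preferable to Euler.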
B.2 Reduced Connor–Stevens Model
Parameter values are $C=1\text{ \mu F/cm}^{2}$, ${g}_{\mathrm{l}}=0.3\text{ mmho/cm}^{2}$, ${g}_{\text{K}}=36.0\text{ mmho/cm}^{2}$, ${g}_{\mathrm{a}}=47.7\text{ mmho/cm}^{2}$, $I=35.0\text{ \mu A/cm}^{2}$, ${v}_{0}=-80.0\text{ mV}$, ${v}_{\mathrm{a}}=-75.0\text{ mV}$, ${v}_{\mathrm{K}}=-77.0\text{ mV}$, ${v}_{\mathrm{l}}=-54.4\text{ mV}$, and ${v}_{\mathrm{Na}}=50.0\text{ mV}$.
B.3 FitzHugh–Nagumo Model
where we use the following parameter values: $\mu =0.05$, $a=0.9$, $I=1.1$, and $b=0.5$.
List of Abbreviations
ML: Morris–Lecar
FHN: FitzHugh–Nagumo
CS: Connor–Stevens
LE: Lyapunov exponent
References
1. Winfree A: The Geometry of Biological Time. 2nd edition. Springer, Berlin; 2001.
2. Guckenheimer J: Isochrons and phaseless sets. J Math Biol 1975, 1:259–273. doi:10.1007/BF01273747
3. Cohen AH, Rand RH, Holmes PJ: Systems of coupled oscillators as models of central pattern generators. In Neural Control of Rhythmic Movements in Vertebrates. Wiley, New York; 1988.
4. Kopell N, Ermentrout GB: Symmetry and phase-locking in chains of weakly coupled oscillators. Commun Pure Appl Math 1986, 39:623–660. doi:10.1002/cpa.3160390504
5. Ermentrout GB: n:m phase-locking of weakly coupled oscillators. J Math Biol 1981, 12:327–342. doi:10.1007/BF00276920
6. Izhikevich EM: Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. MIT Press, Cambridge; 2007.
7. Ermentrout GB, Terman DH: Mathematical Foundations of Neuroscience. Springer, Berlin; 2010.
8. Josic K, Shea-Brown ET, Moehlis J: Isochron. Scholarpedia 2006, 1(8): Article ID 1361.
9. Guillamon A, Huguet G: A computational and geometric approach to phase resetting curves and surfaces. SIAM J Appl Dyn Syst 2009, 8(3):1005–1042. doi:10.1137/080737666
10. Osinga HM, Moehlis J: A continuation method for computing global isochrons. SIAM J Appl Dyn Syst 2010, 9(4):1201–1228. doi:10.1137/090777244
11. Mauroy A, Mezic I: On the use of Fourier averages to compute the global isochrons of (quasi-)periodic dynamics. Chaos 2012, 22(3): Article ID 033112.
12. Brown E, Moehlis J, Holmes P: On the phase reduction and response dynamics of neural oscillator populations. Neural Comput 2004, 16:673–715. doi:10.1162/089976604322860668
13. Hoppensteadt FC, Izhikevich EM: Weakly Connected Neural Networks. Springer, Berlin; 1997.
14. Achuthan S, Canavier CC: Phase-resetting curves determine synchronization, phase locking, and clustering in networks of neural oscillators. J Neurosci 2009, 29(16):5218–5233. doi:10.1523/JNEUROSCI.0426-09.2009
15. Yoshimura K: Phase reduction of stochastic limit-cycle oscillators. In Reviews of Nonlinear Dynamics and Complexity. Volume 3. Wiley, New York; 2010:59–90.
16. Lin KK, Wedgwood KCA, Coombes S, Young LS: Limitations of perturbative techniques in the analysis of rhythms and oscillations. J Math Biol 2013, 66:139–161. doi:10.1007/s00285-012-0506-0
17. Demir A, Suvak O: Quadratic approximations for the isochrons of oscillators: a general theory, advanced numerical methods and accurate phase computations. IEEE Trans Comput-Aided Des Integr Circuits Syst 2010, 29:1215–1228.
18. Medvedev GS: Synchronization of coupled stochastic limit cycle oscillators. Phys Lett A 2010, 374:1712–1720. doi:10.1016/j.physleta.2010.02.031
19. Diliberto SP: On systems of ordinary differential equations. In Contributions to the Theory of Nonlinear Oscillations. Annals of Mathematics Studies 20. Princeton University Press, Princeton; 1950:1–38.
20. Hale JK: Ordinary Differential Equations. Wiley, New York; 1969.
21. Ermentrout GB, Kopell N: Oscillator death in systems of coupled neural oscillators. SIAM J Appl Math 1990, 50:125–146. doi:10.1137/0150009
22. Ott W, Stenlund M: From limit cycles to strange attractors. Commun Math Phys 2010, 296:215–249. doi:10.1007/s00220-010-0994-y
23. Morris C, Lecar H: Voltage oscillations in the barnacle giant muscle fiber. Biophys J 1981, 35:193–213. doi:10.1016/S0006-3495(81)84782-0
24. Rinzel J, Ermentrout GB: Analysis of neural excitability and oscillations. In Methods in Neuronal Modeling. 1st edition. MIT Press, Cambridge; 1989:135–169.
25. Connor JA, Stevens CF: Prediction of repetitive firing behaviour from voltage clamp data on an isolated neurone soma. J Physiol 1971, 213:31–53.
26. Kepler TB, Abbott LF, Marder E: Reduction of conductance-based neuron models. Biol Cybern 1992, 66:381–387. doi:10.1007/BF00197717
27. Lin KK, Young LS: Shear-induced chaos. Nonlinearity 2008, 21(5):899–922. doi:10.1088/0951-7715/21/5/002
28. Wang Q, Young LS: Strange attractors with one direction of instability. Commun Math Phys 2001, 218:1–97. doi:10.1007/s002200100379
29. Wang Q, Young LS: From invariant curves to strange attractors. Commun Math Phys 2002, 225:275–304. doi:10.1007/s002200100582
30. Wang Q: Strange attractors in periodically-kicked limit cycles and Hopf bifurcations. Commun Math Phys 2003, 240:509–529.
31. Catllá AJ, Schaeffer DG, Witelski TP, Monson EE, Lin AL: On spiking models for synaptic activity and impulsive differential equations. SIAM Rev 2008, 50:553–569. doi:10.1137/060667980
32. Christiansen F, Rugh HH: Computing Lyapunov spectra with continuous Gram–Schmidt orthonormalization. Nonlinearity 1997, 10:1063–1072. doi:10.1088/0951-7715/10/5/004
33. Ashwin P: Weak coupling of strongly nonlinear, weakly dissipative identical oscillators. Dyn Syst 1989, 10(3):2471–2474.
34. Ashwin P, Dangelmayr G: Isochronicity-induced bifurcations in systems of weakly dissipative coupled oscillators. Dyn Stab Syst 2000, 15(3):263–286. doi:10.1080/713603745
35. Ashwin P, Dangelmayr G: Reduced dynamics and symmetric solutions for globally coupled weakly dissipative oscillators. Dyn Syst 2005, 20(3):333–367. doi:10.1080/14689360500151813
36. Han SK, Kurrer C, Kuramoto Y: Dephasing and bursting in coupled neural oscillators. Phys Rev Lett 1995, 75:3190–3193. doi:10.1103/PhysRevLett.75.3190
37. Coombes S: Neuronal networks with gap junctions: a study of piecewise linear planar neuron models. SIAM J Appl Dyn Syst 2008, 7(3):1101–1129. doi:10.1137/070707579
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.