Phase-Amplitude Descriptions of Neural Oscillator Models
The Journal of Mathematical Neuroscience volume 3, Article number: 2 (2013)
Abstract
Phase oscillators are a common starting point for the reduced description of many single neuron models that exhibit a strongly attracting limit cycle. The framework for analysing such models in response to weak perturbations is now particularly well advanced, and has allowed for the development of a theory of weakly connected neural networks. However, the strong-attraction assumption may well not be the natural one for many neural oscillator models. For example, the popular conductance-based Morris–Lecar model is known to respond to periodic pulsatile stimulation in a chaotic fashion that cannot be adequately described with a phase reduction. In this paper, we generalise the phase description to allow one to track the evolution of distance from the cycle as well as phase on cycle. We use a classical technique from the theory of ordinary differential equations that makes use of a moving coordinate system to analyse periodic orbits. The subsequent phase-amplitude description is shown to be very well suited to understanding the response of the oscillator to external stimuli (which are not necessarily weak). We consider a number of examples of neural oscillator models, ranging from planar through to high-dimensional models, to illustrate the effectiveness of this approach in providing an improvement over the standard phase-reduction technique. As an explicit application of this phase-amplitude framework, we consider in some detail the response of a generic planar model where the strong-attraction assumption does not hold, and examine the response of the system to periodic pulsatile forcing. In addition, we explore how the presence of dynamical shear can lead to a chaotic response.
1 Introduction
One only has to look at the plethora of papers and books on the topic of phase oscillators in mathematical neuroscience to see the enormous impact that this tool from dynamical systems theory has had on the way we think about describing neurons and neural networks. Much of this work has its roots in the theory of ordinary differential equations (ODEs) and has been promoted for many years in the work of Winfree [1], Guckenheimer [2], Holmes [3], Kopell [4], Ermentrout [5] and Izhikevich [6] to name but a few. For a recent survey, we refer the reader to the book by Ermentrout and Terman [7]. At heart, the classic phase reduction approach recognises that if a high dimensional nonlinear dynamical system (as a model of a neuron) exhibits a stable limit cycle attractor then trajectories near the cycle can be projected onto the cycle.
A natural phase variable is simply the time along the cycle (from some arbitrary origin) relative to the period of oscillation. The notion of phase can even be extended off the cycle using the concept of isochrons [1]. They provide global information about the ‘latent phase’, namely the phase that will be asymptotically returned to for a trajectory with initial data within the basin of attraction of an exponentially stable periodic orbit. More technically, isochrons can be viewed as the leaves of the invariant foliation of the stable manifold of a periodic orbit [8]. In rotating frame coordinates given by phase and the leaf of the isochron foliation, the system has a skew-product structure, i.e. the equation for the phase decouples. However, it is a major challenge to find the isochron foliation, and since it relies on knowledge of the limit cycle it can only be found in special cases or numerically. There are now a number of complementary techniques that tackle this latter challenge, and in particular we refer the reader to the work of Guillamon and Huguet [9] (using Lie symmetries) and Osinga and Moehlis [10] (exploiting numerical continuation). More recent work by Mauroy and Mezic [11] is especially appealing as it uses a simple forward integration algorithm, as illustrated in Fig. 1 for a Stuart–Landau oscillator. However, it is more common to sidestep the need for constructing global isochrons by restricting attention to a small neighbourhood of the limit cycle, where the dynamics can simply be recast in the reduced form $\dot{\theta}=1$, where θ is the phase around the cycle. This reduction to a phase description gives a nice simple dynamical system, albeit one that cannot describe the evolution of trajectories in phase space that are far away from the limit cycle. However, the phase reduction formalism is useful in quantifying how a system (on or close to a cycle) responds to weak forcing, via the construction of the infinitesimal phase response curve (iPRC).
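The forward-integration idea is easy to sketch for the Stuart–Landau oscillator mentioned above. The following is our own minimal illustration, not the algorithm of [11] itself; the parameter values, function names, and integration settings are illustrative choices. A point's latent phase is read off by integrating until the trajectory has collapsed onto the cycle and then rewinding the elapsed on-cycle rotation.

```python
import numpy as np

# Illustrative sketch (not the algorithm of [11] itself): estimate the
# latent phase of a point for the Stuart-Landau oscillator by forward
# integration.  omega and the twist parameter gamma are our own choices.
omega, gamma = 1.0, 0.5

def f(z):
    x, y = z
    r2 = x * x + y * y
    return np.array([x * (1 - r2) - (omega + gamma * (1 - r2)) * y,
                     y * (1 - r2) + (omega + gamma * (1 - r2)) * x])

def rk4_step(z, dt):
    k1 = f(z); k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2); k4 = f(z + dt * k3)
    return z + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def latent_phase(z0, dt=2e-3, t_end=40.0):
    """Integrate until the trajectory has collapsed onto the unit
    circle, then rewind the on-cycle rotation omega*t_end."""
    z = np.array(z0, dtype=float)
    for _ in range(int(t_end / dt)):
        z = rk4_step(z, dt)
    return (np.arctan2(z[1], z[0]) - omega * t_end) % (2 * np.pi)
```

For this normal form the isochrons are known in closed form (a point at radius r and polar angle φ has latent phase φ − γ ln r), which provides a convenient check on the numerics.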
For a given high-dimensional conductance-based model, this can be solved for numerically, though for some normal form descriptions closed form solutions are also known [12]. The iPRC at a point on cycle is equal to the gradient of the (isochronal) phase at that point. This approach forms the basis for constructing models of weakly interacting oscillators, where the external forcing is pictured as a function of the phase of a firing neuron. This has led to a great deal of work on phase-locking and central pattern generation in neural circuitry (see, for example, [13]). Note that the work in [9] goes beyond the notion of the iPRC and introduces infinitesimal phase response surfaces (allowing evaluation of phase advancement even when the stimulus is off cycle); see also the work in [14] on non-infinitesimal PRCs.
The assumption that phase alone is enough to capture the essentials of neural response is one made more for mathematical convenience than being physiologically motivated. Indeed, for the popular type I Morris–Lecar (ML) firing model with standard parameters, direct numerical simulations with pulsatile forcing show responses that cannot be explained solely with a phase model [16]. The failure of a phase description is in itself no surprise and underlies why the community emphasises the use of the word weakly in the phrase “weakly connected neural networks”. Indeed, there are a number of potential pitfalls when applying phase reduction techniques to a system that is not in a weakly forced regime. The typical construction of the phase response curve uses only linear information about the isochrons, and nonlinear effects will come into play the further we move away from the limit cycle. This problem can be diminished by taking higher-order approximations to the isochrons and using this information in the construction of a higher-order PRC [17]. Even using perfect information about isochrons, the phase reduction still assumes persistence of the limit cycle and instantaneous relaxation back to cycle. However, the presence of nearby invariant phase-space structures, such as (unstable) fixed points and invariant manifolds, may result in trajectories spending long periods of time away from the limit cycle. Moreover, strong forcing will necessarily take one away from the neighbourhood of a cycle where a phase description is expected to hold. Thus, developing a reduced description that captures some notion of distance from cycle is a key component of any theory of forced limit cycle oscillators. The development of phase-amplitude models that better characterise the response of popular high-dimensional single neuron models is precisely the topic of this paper.
Given that it is a major challenge to construct an isochronal foliation, we use non-isochronal phase-amplitude coordinates as a practical method for obtaining a more accurate description of neural systems. Recently, Medvedev [18] has used this approach to understand in more detail the synchronisation of linearly coupled stochastic limit cycle oscillators.
In Sect. 2, we consider a general coordinate transformation, which recasts the dynamics of a system in terms of phase-amplitude coordinates. This approach is taken directly from the classical theory for analysing periodic orbits of ODEs, originally considered for planar systems in [19], and for general systems in [20]. We advocate it here as one way to move beyond a purely phase-centric perspective. We illustrate the transformation by applying it to a range of popular neuron models. In Sect. 3, we consider how inputs to the neuron are transformed under these coordinate transformations and derive the evolution equations for the forced phase-amplitude system. This reduces to the standard phase description in the appropriate limit. Importantly, we show that the behaviour of the phase-amplitude system is much better able to capture that of the original single neuron model from which it is derived. Focusing on pulsatile forcing, we explore the conditions for neural oscillator models to exhibit shear-induced chaos [16]. Finally, in Sect. 4, we discuss the relevance of this work to developing a theory of network dynamics that can improve upon the standard weak coupling approach.
2 Phase-Amplitude Coordinates
Throughout this paper, we study the dynamics prescribed by the system $\dot{x}=f(x)$, $x\in {\mathbb{R}}^{n}$, with solutions $x=x(t)$ that satisfy $x(0)={x}_{0}\in {\mathbb{R}}^{n}$. We will assume that the system admits an attracting hyperbolic periodic orbit u (namely one with one zero Floquet exponent and the others having negative real part), with period Δ, such that $u(t)=u(t+\mathrm{\Delta})$. A phase $\theta \in [0,\mathrm{\Delta})$ is naturally defined from $\theta (u(t))=t \bmod \mathrm{\Delta}$. It has long been known in the dynamical systems community how to construct a coordinate system based on this notion of phase as well as a distance from cycle; see [20] for a discussion. In fact, Ermentrout and Kopell [21] made good use of this approach to derive the phase-interaction function for networks of weakly connected limit-cycle oscillators in the limit of infinitely fast attraction to cycle. However, this assumption is particularly extreme and unlikely to hold for a broad class of single neuron models. Thus, it is interesting to return to the full phase-amplitude description. In essence, the transformation to these coordinates involves setting up a moving orthonormal system around the limit cycle. One axis of this system is chosen to be in the direction of the tangent vector along the orbit, and the remaining axes are chosen to be orthogonal. We introduce the normalised tangent vector ξ as
The remaining coordinate axes are conveniently grouped together as the columns of an $n\times (n-1)$ matrix ζ. In this case, we can write an arbitrary point x as
Here, $\rho $ represents the Euclidean distance from the limit cycle. A caricature, in ${\mathbb{R}}^{2}$, of the coordinate system along an orbit segment is shown in Fig. 2. Through the use of the variable ρ, we can consider points away from the periodic orbit. Rather than being isochronal, lines of constant θ are simply straight lines that emanate from a point on the orbit in the direction of the normal. The technical details of specifying the orthonormal coordinates forming ζ are discussed in Appendix A.
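For a planar model, the construction above can be sketched directly: the tangent ξ is the normalised vector field along the orbit, and ζ is its rotation by π/2. The following illustration is our own (the function names are not from the paper, and the orbit is assumed to be available as a sampled array, e.g. from a periodic boundary value solver); it maps a nearby point x to nearest-point (θ, ρ) coordinates, a crude but serviceable stand-in for inverting the transformation.

```python
import numpy as np

# Planar sketch of the moving orthonormal frame: xi is the normalised
# tangent f(u)/|f(u)| and zeta its rotation by 90 degrees (chosen to
# point outward for a counterclockwise orbit).  Names are our own.
def frame(u, f):
    v = f(u)
    xi = v / np.linalg.norm(v)            # unit tangent along the orbit
    zeta = np.array([xi[1], -xi[0]])      # unit normal
    return xi, zeta

def to_phase_amplitude(x, orbit, f):
    """Nearest-point projection of x onto a sampled orbit: returns the
    index of the closest orbit point (a discrete phase) and the signed
    normal displacement rho."""
    k = int(np.argmin(np.linalg.norm(orbit - x, axis=1)))
    _, zeta = frame(orbit[k], f)
    return k, float(np.dot(x - orbit[k], zeta))
```

For the unit-circle limit cycle of the λ–ω system $\dot{x}=x(1-r^{2})-y$, $\dot{y}=y(1-r^{2})+x$, the point (1.1, 0) is assigned phase index 0 and ρ = 0.1, as expected.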
Upon projecting the dynamics onto the moving orthonormal system, we obtain the dynamics of the transformed system:
where
and Df is the Jacobian of the vector field f evaluated along the periodic orbit u. The derivation of this system may be found in Appendix A. It is straightforward to show that ${f}_{1}(\theta ,\rho )\to 0$ as $\rho \to 0$, ${f}_{2}(\theta ,0)=0$ and that $\partial {f}_{2}(\theta ,0)/\partial \rho =0$. In the above, ${f}_{1}$ captures the shear present in the system, that is, whether the speed of θ increases or decreases depending on the distance from cycle. A precise definition of shear is given in [22]. Additionally, $A(\theta )$ describes the θ-dependent rate of attraction or repulsion from cycle.
It is pertinent to consider where this coordinate transformation breaks down, that is, where the determinant of the Jacobian of the transformation, $K=\det[\partial x/\partial \theta \;\; \partial x/\partial \rho ]$, vanishes. This never happens on-cycle (where $\rho =0$), but may do so for some $\rho =k>0$. This sets an upper bound on how far away from the limit cycle we can describe the system using these phase-amplitude coordinates. In Fig. 3, we plot the curve along which the transformation breaks down for the ML model. We observe that, for some values of θ, k is relatively small. The breakdown occurs where lines of constant θ cross, so that the transformation ceases to be invertible; these values of θ correspond to points where the orbit has high curvature. We note that this issue is less problematic in higher-dimensional models.
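The breakdown condition can be checked concretely in the simplest setting. For a circular orbit of radius one with outward unit normal, $x(\theta ,\rho )=(1+\rho )u(\theta )$, so $K=-(1+\rho )$ and the coordinates fail exactly at the centre, $\rho =-1$, i.e. at a distance equal to the radius of curvature, consistent with the link between breakdown and curvature noted above. A finite-difference sketch (our own construction, not code from the paper):

```python
import numpy as np

# K = det[dx/dtheta  dx/drho] for a circular limit cycle of radius 1
# with outward unit normal zeta = u(theta); here x = (1 + rho) u(theta),
# so K = -(1 + rho) and the transformation breaks down at rho = -1.
def x_of(theta, rho):
    u = np.array([np.cos(theta), np.sin(theta)])   # point on the cycle
    return u + rho * u                             # zeta(theta) = u(theta)

def K(theta, rho, h=1e-6):
    """Central-difference approximation to the Jacobian determinant."""
    dx_dth = (x_of(theta + h, rho) - x_of(theta - h, rho)) / (2 * h)
    dx_drh = (x_of(theta, rho + h) - x_of(theta, rho - h)) / (2 * h)
    return float(np.linalg.det(np.column_stack([dx_dth, dx_drh])))
```

For a general orbit one would difference the full transformation $x=u(\theta )+\zeta (\theta )\rho $ in the same way.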
If we now consider the driven system,
where ε is not necessarily small, we may apply the same transformation as above to obtain the dynamics in $(\theta ,\rho )$ coordinates, where $\theta \in [0,\mathrm{\Delta})$ and $\rho \in {\mathbb{R}}^{n-1}$, as
with
and ${\mathrm{I}}_{n}$ is the $n\times n$ identity matrix. Here, h and B describe the effect of the perturbations on θ and ρ, respectively. Details of the derivation are given in Appendix A. For planar models, $B={\mathrm{I}}_{2}$. To demonstrate the application of the above coordinate transformation, we now consider some popular single neuron models.
2.1 A 2D Conductance-Based Model
The ML model was originally developed to describe the voltage dynamics of barnacle giant muscle fibre [23], and is now a popular modelling choice in computational neuroscience [7]. It is written as a pair of coupled nonlinear ODEs of the form
Here, v is the membrane voltage, whilst w is a gating variable, describing the fraction of membrane ion channels that are open at any time. The first equation expresses Kirchhoff’s current law across the cell membrane, with $I(t)$ representing a stimulus in the form of an injected current. The detailed form of the model is completed in Appendix B.1. The ML model has a very rich bifurcation structure. Roughly speaking, by varying a constant current $I(t)\equiv {I}_{0}$, one observes, in different parameter regions, dynamical regimes corresponding to sinks, limit cycles, and Hopf, saddle-node and homoclinic bifurcations, as well as combinations of the above. These scenarios are discussed in detail in [7] and [24].
As the ML model is planar, ρ is a scalar, as are the functions A and ${f}_{1,2}$. This allows us to use the moving coordinate system to clearly visualise parts of phase space where trajectories are attracted towards the limit cycle, and parts in which they move away from it, as illustrated in Fig. 4. The functions ${f}_{1,2}$ and A, evaluated at $\rho =0.1$, are shown in Fig. 5. The evolution of θ is mostly uniform; however, we clearly observe portions of the trajectories where it slows, along which $\dot{\rho}\approx 0$. In fact, this corresponds to where trajectories pass near the saddle node and the dynamics stall. This occurs around $\theta =17$, and in Fig. 5 we see that both $A(\theta )$ and ${f}_{1}(\theta ,\rho )$ are indeed close to 0. The reduced velocities of trajectories here highlight the importance of considering other phase space structures in forced systems, the details of which are missed in standard phase-only models. Forcing in the presence of such structures may give rise to complex and even chaotic behaviours, as we shall see in Sect. 3.
In the next example, we show how the same ideas carry over to higher-dimensional models.
2.2 A 4D Conductance-Based Model
The Connor–Stevens (CS) model [25] is built upon the Hodgkin–Huxley formalism and comprises a fast Na^{+} current, a delayed K^{+} current, a leak current and a transient K^{+} current, termed the A-current. The full CS model consists of six equations: the membrane potential, the original Hodgkin–Huxley gating variables, and an activating and an inactivating gating variable for the A-current. Using the method of equivalent potentials [26], we may reduce the dimensionality of the system to just four variables. The reduced system is
where
The details of the reduced CS model are completed in Appendix B.2. The solutions to the reduced CS model under the coordinate transformation may be seen in Fig. 6, whilst, in Fig. 7, we show how this solution looks in the original coordinates. As for the ML model, θ evolves at an approximately constant rate throughout, though it speeds up close to $\theta =\mathrm{\Delta}$. The trajectories of the vector ρ are more complicated, but note that there is regularity in the pattern exhibited, and that this occurs with approximately the same period as the underlying limit cycle. The damping of the amplitude of oscillations in ρ over successive periods represents the overall attraction to the limit cycle, whilst the regular behaviour of ρ represents the specific relaxation to cycle, as shown in Fig. 7. Additional file 1 shows a movie of the trajectory in $(v,u,b)$ coordinates with the moving orthonormal system superimposed, as well as the solution in ρ for comparison.
3 Pulsatile Forcing of Phase-Amplitude Oscillators
We now consider a system with time-dependent forcing, given by (7) with
where δ is the Dirac δ-function. This describes T-periodic kicks to the voltage variable. Even such a simple forcing paradigm can give rise to rich dynamics [16]. For the periodically kicked ML model, shear forces can lead to chaotic dynamics as folds and horseshoes accumulate under the forcing. This means that the response of the neuron is extremely sensitive to the phase at which the kicks occur. In terms of neural response, this means that the neuron is unreliable [27].
The behaviour of oscillators under such periodic pulsatile forcing is the subject of a number of studies; see, e.g. [27–30]. Of particular relevance here is [27], in which qualitative reasoning about the mechanisms that bring about shear in such models is supplemented by direct numerical simulations to detect the presence of chaotic solutions. For the ML model in a parameter region close to the homoclinic regime, kicks can cause trajectories to pass near the saddle node, and folds may occur as a result [16].
Here, we would like to compare full planar neural models with the simple model studied in [27]:
This system exhibits dynamical shear, which, under certain conditions, can lead to chaotic dynamics. The shear parameter σ dictates how much trajectories are ‘sped up’ or ‘slowed down’ depending on their distance from the limit cycle, whilst λ is the rate of attraction back to the limit cycle, which is independent of θ. Supposing that the function P is smooth but nonconstant, trajectories will be taken a variable distance from the cycle upon the application of a kick. When kicks are repeated, this geometric mechanism can lead to repeated stretching and folding of phase space. It is clear that the larger σ is in (15), the more shear is present, and the more likely we are to observe the folding effect. In a similar way, smaller values of λ mean that the shear has longer to act upon trajectories and again result in a greater likelihood of chaos. Finally, to observe a chaotic response, we must ensure that the shear forces have sufficient time to act, meaning that T, the time between kicks, must not be too small.
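Since the flow of (15) between kicks is linear, the stretching mechanism can be illustrated in a few lines of code. The sketch below is our own (illustrative parameter values, P(θ) = sin θ as in [16], and the convention that attraction corresponds to $\dot{\rho}=-\lambda \rho $): it kicks an on-cycle arc once and measures how far the phases spread after time T.

```python
import numpy as np

# Kicked shear model (15): theta' = 1 + sigma*rho, rho' = -lambda*rho,
# with kicks rho -> rho + eps*P(theta) at times nT.  The flow between
# kicks is linear and integrates exactly.  Parameters are illustrative.
def flow(theta, rho, T, sigma, lam):
    decay = np.exp(-lam * T)
    return theta + T + (sigma / lam) * rho * (1.0 - decay), rho * decay

def kick_then_flow(theta, rho, T, sigma, lam, eps, P=np.sin):
    rho = rho + eps * P(theta)            # instantaneous kick in rho
    return flow(theta, rho, T, sigma, lam)

def phase_spread(sigma, lam, eps=0.1, T=10.0, n=200):
    """Spread in phase of an initially on-cycle arc after one
    kick-and-flow step: a crude measure of the shear-induced
    stretching that precedes folding."""
    th0 = np.linspace(0.0, 2.0 * np.pi, n)
    th1, _ = kick_then_flow(th0, np.zeros(n), T, sigma, lam, eps)
    return float(np.max(th1 - th0) - np.min(th1 - th0))
```

Analytically the spread is $2\epsilon (\sigma /\lambda )(1-{e}^{-\lambda T})$, confirming the roles of σ, λ and T described above: more shear, weaker contraction, or longer inter-kick intervals all increase the stretch.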
This stretching and folding action can clearly lead to the formation of Smale horseshoes, which are well known to lead to a type of chaotic behaviour. However, horseshoes may coexist with sinks, meaning the resulting chaotic dynamics would be transient. Wang and Young proved that, under appropriate conditions, there is a set of kick periods T of positive Lebesgue measure for which the system experiences a stronger form of sustained chaotic behaviour, characterised by a positive Lyapunov exponent for almost all initial conditions and the existence of a ‘strange attractor’; see, e.g. [28–30].
By comparing with the phase-amplitude dynamics described by Eqs. (8)–(9), we see that the model of shear considered in (15) is a proxy for a more general system, with ${f}_{1}(\theta ,\rho )\to \sigma \rho $, $A(\theta )\to -\lambda $, $h(\theta ,\rho )\to 0$, and $\zeta (\theta )\to P(\theta )$.
To gain a deeper insight into the phenomenon of shear-induced chaos, it is pertinent to study the isochrons of the limit cycle for the linear model (15), where the isochrons are simply lines with slope $\lambda /\sigma $. In Fig. 8, we depict the isochrons and the stretch-and-fold action of shear. The thin grey line at $\rho =0$ is kicked, at $t={t}_{0}$, to the bold solid curve, where $P(\theta )=\sin(\theta )$, as studied in [16]. This curve is then allowed to evolve under the dynamics, with no further kicks, through the dashed curve at $t={t}_{1}$ and ultimately to the dotted curve at $t={t}_{2}$; the dashed and dotted curves may be considered as evolutions of the solid curve by integer multiples of the natural period of the oscillator. Every point of the curve remains on the isochron it occupied at ${t}_{0}$ as it evolves up to ${t}_{2}$. The green marker shows an example of one such point evolving along its associated isochron. The folding effect of this is clear in the figure, and further illustrated in the video in Additional file 2.
This simple model with a harmonic form for $P(\theta )$ provides insight into how strange attractors can be formed. Kicks along the isochrons, or ones that map isochrons to one another, will not produce strange attractors, but merely phase shifts. What causes the stretching and folding is the variation in how far points are moved, as measured in the direction transverse to the isochrons. For the linear system (15), variation in this sense is generated by any nonconstant $P(\theta )$; the larger the ratio $\sigma \epsilon /\lambda $, the larger the variation (see [16] for a recent discussion).
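The isochron geometry of the linear model can also be verified directly: along the unkicked flow (taking $\dot{\rho}=-\lambda \rho $ for attraction to cycle), the quantity $\theta +(\sigma /\lambda )\rho $ advances at exactly unit speed, so the lines $\theta +(\sigma /\lambda )\rho =\text{const}$ are isochrons. A short numerical confirmation (our own sketch, with illustrative parameter values):

```python
import numpy as np

# For the linear shear model (15), with theta' = 1 + sigma*rho and
# rho' = -lam*rho between kicks, the quantity theta + (sigma/lam)*rho
# advances at exactly unit speed, so the isochrons are the straight
# lines theta + (sigma/lam)*rho = const.  Quick RK4 check:
sigma, lam = 2.0, 1.0

def rhs(y):
    theta, rho = y
    return np.array([1.0 + sigma * rho, -lam * rho])

def rk4_step(y, dt):
    k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def latent(theta, rho):
    return theta + (sigma / lam) * rho   # candidate latent phase

y, dt, n = np.array([0.3, 0.8]), 1e-3, 5000
c0 = latent(*y)
for _ in range(n):
    y = rk4_step(y, dt)
# after time n*dt the latent phase should have advanced by exactly n*dt
drift = latent(*y) - n * dt - c0
```

The drift is zero to integrator accuracy, confirming that a kick displaces trajectories across isochrons only through the transverse component discussed above.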
The formation of chaos in the ML model is shown in Fig. 9. Here, we plot the response to periodic pulsatile forcing, given by (14), in the $(v,w)$ coordinate system. This clearly illustrates a folding of phase space around the limit cycle, and is further portrayed in the video in Additional file 3. We now show how this can be understood using phase-amplitude coordinates.
Compared to the phenomenological system (15), models written in phase-amplitude coordinates as (8)–(9) have two main differences. The intrinsic dynamics (without kicks) are nonlinear, and the kick terms appear in the equations for both $\dot{\theta}$ and $\dot{\rho}$ (not just $\dot{\rho}$). Simulations of (8)–(9) for both the FitzHugh–Nagumo (FHN) and ML models, with $\epsilon =0.1$, show that replacing ${f}_{1}(\theta ,\rho )$ by σρ, dropping ${f}_{2}(\theta ,\rho )$ (which is quadratic in ρ), and setting $A(\theta )=-\lambda $ does not lead to any significant qualitative change in behaviour (for a wide range of $\sigma ,\lambda >0$). We therefore conclude that, at least when the kick amplitude ε is not too large, it is more important to focus on the form of the forcing in the phase-amplitude coordinates. In what follows, we are interested in discovering the effects of different functional forms of the forcing term multiplying the δ-function, keeping other factors fixed. As examples, we choose the forcing terms given by transforming the FHN and the ML models into phase-amplitude coordinates. To find these functions, we first find the attracting limit cycle solution of the ML model (11) and the FHN model (52) using a periodic boundary value problem solver and set up the orthonormal coordinate system around this limit cycle. Once the coordinate system is established, we evaluate the functions $h(\theta ,\rho )$ and $B(\theta ,\rho )$ (that appear in Eqs. (8) and (9)). For planar systems, we have simply that $B(\theta ,\rho )={\mathrm{I}}_{2}$. Using the forcing term (14), we are only considering perturbations to the voltage component of our system, and thus only the first component of h and the first column of B will make a nontrivial contribution to the dynamics. We define ${P}_{1}$ as the first component of h and ${P}_{2}$ as the first component of ζ. We wish to force each system at the same ratio of the natural frequency of the underlying periodic orbit.
To ease comparison between the system with the ML forcing terms and the FHN forcing terms, we rescale $\theta \mapsto \theta /\mathrm{\Delta}$ so that $\theta \in [0,1)$ in what follows. Implementing the above choices leads to
It is important to emphasise that ${P}_{1,2}$ are determined by the underlying single neuron model (unlike in the toy model (15)). As emphasised in [31], one must take care in the treatment of the state dependent ‘jumps’ caused by the δfunctions in (16) to accommodate the discontinuous nature of θ and ρ at the time of the kick. To solve (16), we approximate $\delta (t)$ with a normalised square pulse ${\delta}_{\tau}(t)$ of the form
where $\tau \ll 1$. This means that for $(n-1)T+\tau <t\le nT$, $n\in \mathbb{Z}$, the dynamics are governed by the linear system $(\dot{\theta},\dot{\rho})=(1+\sigma \rho ,-\lambda \rho )$. This can be integrated to obtain the state of the system just before the arrival of the next kick, $({\theta}_{n},{\rho}_{n})\equiv (\theta (nT),\rho (nT))$, in the form
In the interval $nT<t<nT+\tau $ and using (17) we now need to solve the system of nonlinear ODEs
Rescaling time as $t=nT+\tau s$, with $0\le s\le 1$, and writing the solution $(\theta ,\rho )$ as a regular perturbation expansion in powers of τ as $(\theta (s),\rho (s))=({\theta}_{0}(s)+\tau {\theta}_{1}(s),{\rho}_{0}(s)+\tau {\rho}_{1}(s))+\cdots $ , we find after collecting terms of leading order in τ that the pair $({\theta}_{0}(s),{\rho}_{0}(s))$ is governed by
with initial conditions $({\theta}_{0}(0),{\rho}_{0}(0))=({\theta}_{n},{\rho}_{n})$. The solution $({\theta}_{0}(1),{\rho}_{0}(1))\equiv ({\theta}_{n}^{+},{\rho}_{n}^{+})$ (obtained numerically) can then be taken as initial data $(\theta (t),\rho (t))=({\theta}_{n}^{+},{\rho}_{n}^{+})$ in (18)–(19) to form the stroboscopic map
Note that this has been constructed using a matched asymptotic expansion, using (17), and is valid in the limit $\tau \to 0$. For weak forcing, where $\epsilon \ll 1$, ${P}_{1,2}$ vary slowly through the kick and can be approximated by their values at $({\theta}_{n},{\rho}_{n})$ so that to $O({\epsilon}^{2})$
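The $\tau \to 0$ construction can be sanity-checked numerically on the toy model (15), where the post-kick state is known in closed form. The sketch below is our own check (with $P(\theta )=\sin \theta $, illustrative parameter values, and the convention $\dot{\rho}=-\lambda \rho $): it integrates through a square pulse of width τ and compares the result with the instantaneous jump $\rho \to \rho +\epsilon \sin \theta $; the discrepancy shrinks with τ.

```python
import numpy as np

# Square-pulse check for the linear shear model: during the pulse of
# width tau the dynamics are theta' = 1 + sigma*rho,
# rho' = -lam*rho + (eps/tau)*sin(theta); as tau -> 0 the net effect
# should approach the instantaneous jump rho -> rho + eps*sin(theta).
sigma, lam, eps = 1.0, 0.5, 0.3

def through_pulse(theta0, rho0, tau, n=2000):
    """RK4 integration of the pulsed system over [0, tau]."""
    def rhs(y):
        th, r = y
        return np.array([1.0 + sigma * r,
                         -lam * r + (eps / tau) * np.sin(th)])
    y, dt = np.array([theta0, rho0]), tau / n
    for _ in range(n):
        k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
        y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

def jump_error(tau, theta0=1.0, rho0=0.2):
    th, r = through_pulse(theta0, rho0, tau)
    return abs(r - (rho0 + eps * np.sin(theta0)))
```

For the full phase-amplitude system the same comparison requires solving the leading-order inner problem numerically, as described above, since ${P}_{1,2}$ are not available in closed form.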
Although this explicit map is convenient for numerical simulations, we prefer to work with the full stroboscopic map (22)–(23), which is particularly useful for comparing and contrasting the behaviour of different planar single neuron models with arbitrary kick strength. As an indication of the presence of chaos in the dynamics resulting from this system, we evaluate the largest Lyapunov exponent of the map (22)–(23) by numerically evolving a tangent vector and computing its rate of growth (or contraction); see e.g. [32] for details.
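Since the functions ${P}_{1,2}$ of the ML and FHN models are only available numerically, the tangent-vector computation is easiest to illustrate on the toy model (15), for which the stroboscopic map and its Jacobian are available in closed form. This is our own sketch (illustrative parameters, $P(\theta )=\sin \theta $, and $\dot{\rho}=-\lambda \rho $ between kicks), not the computation used for Fig. 11.

```python
import numpy as np

# Largest Lyapunov exponent of the stroboscopic (kick-then-flow) map
# of the linear shear model (15) with P(theta) = sin(theta), estimated
# by evolving a tangent vector and averaging its log growth rate.
def lyapunov(sigma, lam, eps, T, n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    theta, rho = rng.uniform(0.0, 2.0 * np.pi), 0.0
    v = np.array([1.0, 0.0])                       # tangent vector
    a = (sigma / lam) * (1.0 - np.exp(-lam * T))   # shear gain per step
    d = np.exp(-lam * T)                           # contraction per step
    total = 0.0
    for _ in range(n_iter):
        c = eps * np.cos(theta)
        J = np.array([[1.0 + a * c, a],   # exact Jacobian of the map,
                      [d * c, d]])        # evaluated at current theta
        rho_k = rho + eps * np.sin(theta)                # kick
        theta = (theta + T + a * rho_k) % (2.0 * np.pi)  # flow for time T
        rho = rho_k * d
        v = J @ v
        norm = np.linalg.norm(v)
        total += np.log(norm)
        v /= norm
    return total / n_iter
```

With ε = 0 the exponent is zero (pure rotation plus contraction), and for weak kicks with strong contraction the attractor is an invariant circle and the exponent is non-positive; only sufficiently strong shear relative to damping can make it positive.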
In Fig. 10, we compare the functions ${P}_{1,2}$ for both the FHN and the ML models. We note that ${P}_{2}$ for the FHN model is near 0 for a large set of θ, whilst the same is true for ${P}_{1}$ for the ML model. This means that kicks in the FHN model will tend to primarily cause phase shifts, whilst the same kicks in the ML model will primarily cause shifts in amplitude.
We plot in the top row of Fig. 11 the pair $({\theta}_{n},{\theta}_{n+1})$, for (24)–(25), for the FHN and ML models. For the FHN model, we find that the system has a Lyapunov exponent of $-0.0515<0$. For the ML model, the Lyapunov exponent is $0.6738>0$. This implies that differences in the functional forms of ${P}_{1,2}$ can help to explain the generation of chaos.
Now that we know the relative contribution of kicks in v to kicks in $(\theta ,\rho )$, it is also useful to know where kicks actually occur in terms of θ, as this will determine the contribution of a train of kicks to the $(\theta ,\rho )$ dynamics. In Figs. 11c and d, we plot the distribution of kicks as a function of θ. For the ML model, we observe that the kicks are distributed over all phases, while for the FHN model there is a grouping of kicks around the region where ${P}_{2}$ is roughly zero. This means that kicks will not be felt as much in the ρ variable, and so trajectories here do not get kicked far from cycle. This helps explain why it is more difficult to generate chaotic responses in the FHN model.
After transients, we observe a $1{:}1$ phase-locked state for the FHN model. For a phase-locked state, small perturbations will ultimately decay, as the perturbed trajectories also end up at the phase-locked state after some transient behaviour. This results in a negative largest Lyapunov exponent of −0.0515. We note the sharply peaked distribution of kick phases, which is to be expected for discrete-time systems possessing a negative largest Lyapunov exponent, since such systems typically possess sinks. The phase-locked state here occurs where ${P}_{2}$ is small, suggesting that trajectories stay close to the limit cycle. Since kicks do not move trajectories away from cycle, there is no possibility of folding, and hence no chaotic behaviour. For the ML model, we observe chaotic dynamics around a strange attractor, where small perturbations can grow, leading to a positive largest Lyapunov exponent of 0.6738. This time, the kicks are distributed fairly uniformly across θ, and so some kicks will take trajectories away from the limit cycle, thus leading to shear-induced folding and chaotic behaviour.
4 Discussion
In this paper, we have used the notion of a moving orthonormal coordinate system around a limit cycle to study dynamics in a neighbourhood of it. This phase-amplitude coordinate system can be constructed for any given ODE system supporting a limit cycle. A clear advantage of the transformed description over the original one is that it allows us to gain insight into the effect of time-dependent perturbations, using the notion of shear, as we have illustrated by performing case studies of popular neural models, in two and higher dimensions. Whilst this coordinate transformation does not reduce the dimensionality of the system, as classical phase reduction techniques do, it opens up avenues for moving away from the weak coupling limit, where $\epsilon \to 0$. Importantly, it emphasises the role of the two functions ${P}_{1}(\theta ,\rho )$ and ${P}_{2}(\theta )$, which provide more information about inputs to the system than the iPRC alone. It has been demonstrated that moderately small perturbations can exert remarkable influence on dynamics in the presence of other invariant structures [16], which cannot be captured by a phase-only description. In addition, small perturbations can accumulate if the timescale of the perturbation is shorter than the timescale of attraction back to the limit cycle. This should be given particular consideration in the analysis of neural systems, where oscillators may be connected to thousands of other units, so that small inputs can quickly accumulate.
One natural extension of this work is to move beyond the theory of weakly coupled oscillators to develop a framework for describing neural systems as networks of phase-amplitude units. This has previously been considered for the case of weakly coupled, weakly dissipative networks of nonlinear planar oscillators (modelled by small dissipative perturbations of a Hamiltonian oscillator) [33–35]. It would be interesting to develop these ideas and obtain network descriptions of the following type:
with an appropriate identification of the interaction functions ${H}_{1,2}$ in terms of the biological interaction between neurons and the single neuron functions ${P}_{1,2}$. Such phase-amplitude network models are ideally suited to describing the behaviour of the mean-field signal in networks of strongly gap-junction coupled ML neurons [36, 37], which is known to vary because individual neurons make transitions between cycles of different amplitudes. Moreover, in the same network, weakly coupled oscillator theory fails to explain how the synchronous state can stabilise with increasing coupling strength, as observed numerically (it predicts that synchrony is always unstable). All of the above are topics of ongoing research and will be reported upon elsewhere.
Appendix A: Derivation of the Transformed Dynamical System
Starting from
we make the transformation $x(t)=u(\theta (t))+\zeta (\theta (t))\rho (t)$, giving
We proceed by projecting (29) onto $\xi (\theta )$, using (1). The left-hand side of (29) now reads:
where ${\xi}^{T}$ denotes the transpose of ξ and the right-hand side of (29) becomes
Thus,
where
and
Upon projecting both sides of (29) onto $\zeta (\theta )$, the left-hand side reads
whilst the right-hand side becomes
since ${\zeta}^{T}f(u)={\zeta}^{T}\mathrm{d}u/\mathrm{d}\theta =0$ and where Df denotes the Jacobian of f. Putting together the previous two equations yields
where
It may be easily seen that ${f}_{1}(\theta ,\rho )=O(\rho )$ as $\rho \to 0$ and that ${f}_{2}(\theta ,0)=0$ and $\partial {f}_{2}(\theta ,0)/\partial \rho =0$. Overall, combining (32) and (37) we arrive at the transformed system:
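As a concrete planar illustration (not taken from the paper), consider an oscillator whose limit cycle is the unit circle, so that $u(\theta )=(\mathrm{cos}\theta ,\mathrm{sin}\theta )$ and the unit normal is $\zeta (\theta )=u(\theta )$. The sketch below verifies numerically that the transformation $x=u(\theta )+\zeta (\theta )\rho $ and its inverse form a consistent round trip near the cycle.

```python
import numpy as np

# For a unit-circle limit cycle the phase-amplitude map is explicit:
# u(theta) = (cos theta, sin theta), zeta(theta) = u(theta) (outward normal).
def to_cartesian(theta, rho):
    u = np.array([np.cos(theta), np.sin(theta)])  # point on the cycle
    zeta = u                                      # outward unit normal
    return u + zeta * rho                         # x = u + zeta * rho

def to_phase_amplitude(x):
    rho = np.linalg.norm(x) - 1.0                 # signed normal distance
    theta = np.arctan2(x[1], x[0])                # phase on the cycle
    return theta, rho

theta0, rho0 = 0.7, -0.2
x = to_cartesian(theta0, rho0)
theta1, rho1 = to_phase_amplitude(x)
print(np.allclose([theta1, rho1], [theta0, rho0]))  # True
```

For general cycles the normal bundle is only guaranteed to give a well-defined coordinate chart in a sufficiently thin tube around the orbit, which is consistent with the local nature of the expansion above.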
In order to evaluate the functions ${f}_{1}$, ${f}_{2}$, and A for models with dimension larger than two, we need to calculate $\mathrm{d}\zeta /\mathrm{d}\theta $. Denoting by ${\gamma}_{i}(\theta )$ the direction angles of $\xi (\theta )$, we have that
where the index i denotes the column entry of ζ and $x\cdot y$ denotes the dot product between vectors x and y. Defining
and
where j denotes the row index, we have
By the quotient rule for vectors we find that
and that
Overall, we have that
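In the planar case, the moving orthonormal frame can be assembled directly from the vector field: ξ is the unit tangent $f/|f|$ and ζ its rotation through $\pi /2$. The following sketch (an illustrative numerical check, not the authors' code) confirms orthonormality of the frame along a circular orbit.

```python
import numpy as np

# Planar moving orthonormal frame: xi = f/|f| (unit tangent),
# zeta = xi rotated by pi/2 (unit normal).
def frame(f_val):
    xi = f_val / np.linalg.norm(f_val)
    zeta = np.array([-xi[1], xi[0]])   # 90-degree rotation of the tangent
    return xi, zeta

# Check the frame at sample points of a circular orbit, where the
# vector field is tangent to the unit circle.
thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
for t in thetas:
    f_val = np.array([-np.sin(t), np.cos(t)])
    xi, zeta = frame(f_val)
    assert abs(np.dot(xi, zeta)) < 1e-12           # orthogonality
    assert abs(np.linalg.norm(zeta) - 1.0) < 1e-12  # unit length
```

In higher dimensions the frame is instead propagated using the direction-angle formulas above, since a single rotation no longer determines the normal directions uniquely.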
Appendix B: Gallery of Models
B.1 Morris–Lecar
The ML equations describe the interaction of membrane voltage with just two ionic currents: ${\mathrm{Ca}}^{2+}$ and ${\mathrm{K}}^{+}$. Membrane ion channels are selective for specific types of ions; their dynamics are modelled here by the gating variable w and the auxiliary functions ${w}_{\mathrm{\infty}}$, ${\tau}_{w}$, and ${m}_{\mathrm{\infty}}$. The latter have the form
The function ${m}_{\mathrm{\infty}}(v)$ models the action of fast voltage-gated calcium ion channels; ${v}_{\mathrm{Ca}}$ is the reversal (bias) potential for the calcium current and ${g}_{\mathrm{Ca}}$ the corresponding conductance. The functions ${\tau}_{w}(v)$ and ${w}_{\mathrm{\infty}}(v)$ similarly describe the dynamics of the slower-acting potassium channels, with their own reversal potential ${v}_{\mathrm{K}}$ and conductance ${g}_{\mathrm{K}}$. The constants ${v}_{\mathrm{l}}$ and ${g}_{\mathrm{l}}$ characterise the leakage current that is present even when the neuron is in a quiescent state. Parameter values are $C=20.0{\text{ \mu F/cm}}^{2}$, ${g}_{\mathrm{l}}=2.0{\text{ mmho/cm}}^{2}$, ${g}_{\mathrm{K}}=8.0{\text{ mmho/cm}}^{2}$, ${g}_{\mathrm{Ca}}=4.0{\text{ mmho/cm}}^{2}$, $\varphi =0.23$, $I=39.5{\text{ \mu A/cm}}^{2}$, ${v}_{\mathrm{l}}=-60.0\text{ mV}$, ${v}_{\mathrm{K}}=-84.0\text{ mV}$, ${v}_{\mathrm{Ca}}=120.0\text{ mV}$, ${v}_{1}=-1.2\text{ mV}$, ${v}_{2}=18.0\text{ mV}$, ${v}_{3}=12.0\text{ mV}$, and ${v}_{4}=17.4\text{ mV}$.
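Assuming the standard ML forms for the auxiliary functions, $m_{\infty}(v)=\frac{1}{2}(1+\mathrm{tanh}((v-v_{1})/v_{2}))$, $w_{\infty}(v)=\frac{1}{2}(1+\mathrm{tanh}((v-v_{3})/v_{4}))$ and $\tau_{w}(v)=1/\mathrm{cosh}((v-v_{3})/(2v_{4}))$, the model can be integrated directly with the quoted parameter set. The sketch below is an illustrative simulation with a hand-rolled RK4 stepper, not the authors' code; the negative reversal potentials are taken from standard ML parameter sets.

```python
import numpy as np

# Morris-Lecar right-hand side with the B.1 parameter set; the standard
# ML auxiliary-function forms are assumed.
C, g_l, g_K, g_Ca = 20.0, 2.0, 8.0, 4.0
phi, I = 0.23, 39.5
v_l, v_K, v_Ca = -60.0, -84.0, 120.0
v1, v2, v3, v4 = -1.2, 18.0, 12.0, 17.4

def ml(state):
    v, w = state
    m_inf = 0.5 * (1 + np.tanh((v - v1) / v2))
    w_inf = 0.5 * (1 + np.tanh((v - v3) / v4))
    tau_w = 1.0 / np.cosh((v - v3) / (2 * v4))
    dv = (I - g_l * (v - v_l) - g_K * w * (v - v_K)
          - g_Ca * m_inf * (v - v_Ca)) / C
    dw = phi * (w_inf - w) / tau_w
    return np.array([dv, dw])

def rk4(f, x0, dt, n):
    x, out = np.array(x0, float), []
    for _ in range(n):
        k1 = f(x); k2 = f(x + dt / 2 * k1)
        k3 = f(x + dt / 2 * k2); k4 = f(x + dt * k3)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(x.copy())
    return np.array(out)

traj = rk4(ml, [-20.0, 0.1], 0.05, 20000)  # 1000 ms of membrane dynamics
```

Note that in the parameter regime used here the attraction back to the cycle can be weak and other invariant structures may coexist, so the long-term behaviour depends on the initial condition; the trajectory nevertheless remains bounded between the reversal potentials.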
B.2 Reduced Connor–Stevens Model
For the reduced CS model, we start with the full Hodgkin–Huxley model, with m, n, h as gating variables and use the method of equivalent potentials as treated in [26], giving rise to the following form for the function g:
where $\partial F/\partial h$ and $\partial F/\partial n$ are evaluated at $h={h}_{\mathrm{\infty}}(u)$ and $n={n}_{\mathrm{\infty}}(u)$. For the gating variables $(a,b)$, we have
Parameter values are $C=1{\text{ \mu F/cm}}^{2}$, ${g}_{\mathrm{l}}=0.3{\text{ mmho/cm}}^{2}$, ${g}_{\text{K}}=36.0{\text{ mmho/cm}}^{2}$, ${g}_{\mathrm{a}}=47.7{\text{ mmho/cm}}^{2}$, $I=35.0{\text{ \mu A/cm}}^{2}$, ${v}_{0}=-80.0\text{ mV}$, ${v}_{\mathrm{a}}=-75.0\text{ mV}$, ${v}_{\mathrm{K}}=-77.0\text{ mV}$, ${v}_{\mathrm{l}}=-54.4\text{ mV}$, and ${v}_{\mathrm{Na}}=50.0\text{ mV}$.
B.3 FitzHugh–Nagumo Model
The FHN model is a phenomenological model of spike generation, comprising two variables. The first represents the membrane potential and includes a cubic nonlinearity, whilst the second is a gating variable, similar to w in the ML model, which may be thought of as a recovery variable. The system is
where we use the following parameter values: $\mu =0.05$, $a=0.9$, $I=1.1$, and $b=0.5$.
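The sketch below integrates one common realisation of an FHN-type system with the quoted parameter values. Both the particular cubic variant and the mapping of the parameters $(\mu ,a,I,b)$ onto it are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

# One common FHN variant (an assumed form, for illustration):
#   dv/dt = v - v^3/3 - w + I,   dw/dt = mu * (v + a - b*w)
mu, a, I, b = 0.05, 0.9, 1.1, 0.5

def fhn(state):
    v, w = state
    return np.array([v - v**3 / 3 - w + I, mu * (v + a - b * w)])

def rk4(f, x0, dt, n):
    x, out = np.array(x0, float), []
    for _ in range(n):
        k1 = f(x); k2 = f(x + dt / 2 * k1)
        k3 = f(x + dt / 2 * k2); k4 = f(x + dt * k3)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(x.copy())
    return np.array(out)

traj = rk4(fhn, [0.0, 0.0], 0.01, 10000)  # integrate to t = 100
v = traj[:, 0]
amp = v[5000:].max() - v[5000:].min()  # large for a relaxation oscillation
```

With the small timescale ratio $\mu =0.05$ the fixed point of this variant sits on the middle branch of the cubic nullcline and is unstable, so the trajectory settles onto a relaxation oscillation with large voltage excursions between the outer branches.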
Electronic Supplementary Material
Abbreviations
ML: Morris–Lecar
FHN: FitzHugh–Nagumo
CS: Connor–Stevens
LE: Lyapunov exponent
References
 1.
Winfree A: The Geometry of Biological Time. 2nd edition. Springer, Berlin; 2001.
 2.
Guckenheimer J: Isochrons and phaseless sets. J Math Biol 1975, 1: 259–273. 10.1007/BF01273747
 3.
Cohen AH, Rand RH, Holmes PJ: Systems of coupled oscillators as models of central pattern generators. In Neural Control of Rhythmic Movements in Vertebrates. Wiley, New York; 1988.
 4.
Kopell N, Ermentrout GB: Symmetry and phase-locking in chains of weakly coupled oscillators. Commun Pure Appl Math 1986, 39: 623–660. 10.1002/cpa.3160390504
 5.
Ermentrout GB: n:m phase-locking of weakly coupled oscillators. J Math Biol 1981, 12: 327–342. 10.1007/BF00276920
 6.
Izhikevich EM: Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. MIT Press, Cambridge; 2007.
 7.
Ermentrout GB, Terman DH: Mathematical Foundations of Neuroscience. Springer, Berlin; 2010.
 8.
Josic K, Shea-Brown ET, Moehlis J: Isochron. Scholarpedia 2006, 1(8): Article ID 1361
 9.
Guillamon A, Huguet G: A computational and geometric approach to phase resetting curves and surfaces. SIAM J Appl Dyn Syst 2009, 8(3):1005–1042. 10.1137/080737666
 10.
Osinga HM, Moehlis J: A continuation method for computing global isochrons. SIAM J Appl Dyn Syst 2010, 9(4):1201–1228. 10.1137/090777244
 11.
Mauroy A, Mezic I: On the use of Fourier averages to compute the global isochrons of (quasi)periodic dynamics. Chaos 2012, 22(3): Article ID 033112
 12.
Brown E, Moehlis J, Holmes P: On the phase reduction and response dynamics of neural oscillator populations. Neural Comput 2004, 16: 673–715. 10.1162/089976604322860668
 13.
Hoppensteadt FC, Izhikevich EM: Weakly Connected Neural Networks. Springer, Berlin; 1997.
 14.
Achuthan S, Canavier CC: Phase-resetting curves determine synchronization, phase locking, and clustering in networks of neural oscillators. J Neurosci 2009, 29(16):5218–5233. 10.1523/JNEUROSCI.0426-09.2009
 15.
Yoshimura K: Phase reduction of stochastic limit-cycle oscillators. In Reviews of Nonlinear Dynamics and Complexity. Volume 3. Wiley, New York; 2010:59–90.
 16.
Lin KK, Wedgwood KCA, Coombes S, Young LS: Limitations of perturbative techniques in the analysis of rhythms and oscillations. J Math Biol 2013, 66: 139–161. 10.1007/s00285-012-0506-0
 17.
Demir A, Suvak O: Quadratic approximations for the isochrons of oscillators: a general theory, advanced numerical methods and accurate phase computations. IEEE Trans Comput-Aided Des Integr Circuits Syst 2010, 29: 1215–1228.
 18.
Medvedev GS: Synchronization of coupled stochastic limit cycle oscillators. Phys Lett A 2010, 374: 1712–1720. 10.1016/j.physleta.2010.02.031
 19.
Diliberto SP: On systems of ordinary differential equations. In Contributions to the Theory of Nonlinear Oscillations. Annals of Mathematics Studies 20. Princeton University Press, Princeton; 1950:1–38.
 20.
Hale JK: Ordinary Differential Equations. Wiley, New York; 1969.
 21.
Ermentrout GB, Kopell N: Oscillator death in systems of coupled neural oscillators. SIAM J Appl Math 1990, 50: 125–146. 10.1137/0150009
 22.
Ott W, Stenlund M: From limit cycles to strange attractors. Commun Math Phys 2010, 296: 215–249. 10.1007/s00220-010-0994-y
 23.
Morris C, Lecar H: Voltage oscillations in the barnacle giant muscle fiber. Biophys J 1981, 35: 193–213. 10.1016/S0006-3495(81)84782-0
 24.
Rinzel J, Ermentrout GB: Analysis of neural excitability and oscillations. In Methods in Neuronal Modeling. 1st edition. MIT Press, Cambridge; 1989:135–169.
 25.
Connor JA, Stevens CF: Prediction of repetitive firing behaviour from voltage clamp data on an isolated neurone soma. J Physiol 1971, 213: 31–53.
 26.
Kepler TB, Abbott LF, Marder E: Reduction of conductancebased neuron models. Biol Cybern 1992, 66: 381–387. 10.1007/BF00197717
 27.
Lin KK, Young LS: Shear-induced chaos. Nonlinearity 2008, 21(5):899–922. 10.1088/0951-7715/21/5/002
 28.
Wang Q, Young LS: Strange attractors with one direction of instability. Commun Math Phys 2001, 218: 1–97. 10.1007/s002200100379
 29.
Wang Q, Young LS: From invariant curves to strange attractors. Commun Math Phys 2002, 225: 275–304. 10.1007/s002200100582
 30.
Wang Q: Strange attractors in periodically-kicked limit cycles and Hopf bifurcations. Commun Math Phys 2003, 240: 509–529.
 31.
Catllá AJ, Schaeffer DG, Witelski TP, Monson EE, Lin AL: On spiking models for synaptic activity and impulsive differential equations. SIAM Rev 2008, 50: 553–569. 10.1137/060667980
 32.
Christiansen F, Rugh HH: Computing Lyapunov spectra with continuous Gram–Schmidt orthonormalization. Nonlinearity 1997, 10: 1063–1072. 10.1088/0951-7715/10/5/004
 33.
Ashwin P: Weak coupling of strongly nonlinear, weakly dissipative identical oscillators. Dyn Syst 1989, 10(3):2471–2474.
 34.
Ashwin P, Dangelmayr G: Isochronicity-induced bifurcations in systems of weakly dissipative coupled oscillators. Dyn Stab Syst 2000, 15(3):263–286. 10.1080/713603745
 35.
Ashwin P, Dangelmayr G: Reduced dynamics and symmetric solutions for globally coupled weakly dissipative oscillators. Dyn Syst 2005, 20(3):333–367. 10.1080/14689360500151813
 36.
Han SK, Kurrer C, Kuramoto Y: Dephasing and bursting in coupled neural oscillators. Phys Rev Lett 1995, 75: 3190–3193. 10.1103/PhysRevLett.75.3190
 37.
Coombes S: Neuronal networks with gap junctions: a study of piecewise linear planar neuron models. SIAM J Appl Dyn Syst 2008, 7(3):1101–1129. 10.1137/070707579
Additional information
Competing Interests
The authors declare that they have no competing interests.
Authors’ Contributions
KCAW, KKL, RT and SC contributed equally. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 Generic License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Wedgwood, K.C., Lin, K.K., Thul, R. et al. Phase-Amplitude Descriptions of Neural Oscillator Models. J. Math. Neurosc. 3, 2 (2013). https://doi.org/10.1186/2190-8567-3-2
Keywords
 Phase-amplitude
 Oscillator
 Chaos
 Non-weak coupling