Phase-Amplitude Descriptions of Neural Oscillator Models
The Journal of Mathematical Neuroscience volume 3, Article number: 2 (2013)
Abstract
Phase oscillators are a common starting point for the reduced description of many single neuron models that exhibit a strongly attracting limit cycle. The framework for analysing such models in response to weak perturbations is now particularly well advanced, and has allowed for the development of a theory of weakly connected neural networks. However, the strong-attraction assumption may well not be the natural one for many neural oscillator models. For example, the popular conductance based Morris–Lecar model is known to respond to periodic pulsatile stimulation in a chaotic fashion that cannot be adequately described with a phase reduction. In this paper, we generalise the phase description to one that allows us to track the evolution of distance from the cycle as well as phase on cycle. We use a classical technique from the theory of ordinary differential equations that makes use of a moving coordinate system to analyse periodic orbits. The subsequent phase-amplitude description is shown to be very well suited to understanding the response of the oscillator to external stimuli (which are not necessarily weak). We consider a number of examples of neural oscillator models, ranging from planar through to high dimensional models, to illustrate the effectiveness of this approach in providing an improvement over the standard phase-reduction technique. As an explicit application of this phase-amplitude framework, we consider in some detail the response of a generic planar model where the strong-attraction assumption does not hold, and examine the response of the system to periodic pulsatile forcing. In addition, we explore how the presence of dynamical shear can lead to a chaotic response.
1 Introduction
One only has to look at the plethora of papers and books on the topic of phase oscillators in mathematical neuroscience to see the enormous impact that this tool from dynamical systems theory has had on the way we think about describing neurons and neural networks. Much of this work has its roots in the theory of ordinary differential equations (ODEs) and has been promoted for many years in the work of Winfree [1], Guckenheimer [2], Holmes [3], Kopell [4], Ermentrout [5] and Izhikevich [6] to name but a few. For a recent survey, we refer the reader to the book by Ermentrout and Terman [7]. At heart, the classic phase reduction approach recognises that if a high dimensional non-linear dynamical system (as a model of a neuron) exhibits a stable limit cycle attractor then trajectories near the cycle can be projected onto the cycle.
A natural phase variable is simply the time along the cycle (from some arbitrary origin) relative to the period of oscillation. The notion of phase can even be extended off the cycle using the concept of isochrons [1]. They provide global information about the ‘latent phase’, namely the phase that will be asymptotically returned to for a trajectory with initial data within the basin of attraction of an exponentially stable periodic orbit. More technically, isochrons can be viewed as the leaves of the invariant foliation of the stable manifold of a periodic orbit [8]. In rotating frame coordinates given by phase and the leaf of the isochron foliation, the system has a skew-product structure, i.e. the equation of the phase decouples. However, it is a major challenge to find the isochron foliation, and since it relies on the knowledge of the limit cycle it can only be found in special cases or numerically. There are now a number of complementary techniques that tackle this latter challenge, and in particular we refer the reader to work of Guillamon and Huguet [9] (using Lie symmetries) and Osinga and Moehlis [10] (exploiting numerical continuation). More recent work by Mauroy and Mezic [11] is especially appealing as it uses a simple forward integration algorithm, as illustrated in Fig. 1 for a Stuart–Landau oscillator. However, it is more common to side-step the need for constructing global isochrons by restricting attention to a small neighbourhood of the limit cycle, where dynamics can simply be recast in the reduced form dθ/dt = 1, where θ is the phase around the cycle. This reduction to a phase description gives a nice simple dynamical system, albeit one that cannot describe evolution of trajectories in phase-space that are far away from the limit cycle. However, the phase reduction formalism is useful in quantifying how a system (on or close to a cycle) responds to weak forcing, via the construction of the infinitesimal phase response curve (iPRC). For a given high dimensional conductance based model this can be solved for numerically, though for some normal form descriptions closed form solutions are also known [12]. The iPRC at a point on cycle is equal to the gradient of the (isochronal) phase at that point. This approach forms the basis for constructing models of weakly interacting oscillators, where the external forcing is pictured as a function of the phase of a firing neuron. This has led to a great deal of work on phase-locking and central pattern generation in neural circuitry (see, for example [13]). Note that the work in [9] goes beyond the notion of iPRC and introduces infinitesimal phase response surfaces (allowing evaluation of phase advancement even when the stimulus is off cycle), and see also the work in [14] on non-infinitesimal PRCs.
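As a concrete illustration of the forward-integration idea, the following minimal sketch estimates the latent phase of an off-cycle point for a Stuart–Landau oscillator by integrating forward and removing the accumulated on-cycle rotation. The model, parameter values and integration time are assumptions made purely for the example, and the sketch illustrates only the basic principle rather than the Fourier-average algorithm of [11].

```python
# Minimal sketch: latent (asymptotic) phase by forward integration for a
# Stuart-Landau oscillator with a unit-radius limit cycle.  omega, alpha and
# the integration time are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

omega, alpha = 1.0, 0.5

def stuart_landau(t, z):
    x, y = z
    r2 = x**2 + y**2
    return [x - omega*y - r2*(x - alpha*y),
            omega*x + y - r2*(alpha*x + y)]

def latent_phase(z0, T=50.0):
    """Integrate forward until the trajectory has collapsed onto the cycle,
    then subtract the on-cycle rotation accumulated over the same time."""
    sol = solve_ivp(stuart_landau, (0.0, T), z0, rtol=1e-10, atol=1e-12)
    x, y = sol.y[:, -1]
    Omega = omega - alpha                      # on-cycle angular frequency
    return np.mod(np.arctan2(y, x) - Omega*T, 2*np.pi)

# For this model the isochrons are known in closed form (phase = phi - alpha*ln r),
# so the forward-integration estimate can be checked directly:
x0, y0 = 1.5, 0.0
print(latent_phase([x0, y0]))
print(np.mod(np.arctan2(y0, x0) - alpha*np.log(np.hypot(x0, y0)), 2*np.pi))
```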
The assumption that phase alone is enough to capture the essentials of neural response is one made more for mathematical convenience than on physiological grounds. Indeed, for the popular type I Morris–Lecar (ML) firing model with standard parameters, direct numerical simulations with pulsatile forcing show responses that cannot be explained solely with a phase model [16]. The failure of a phase description is in itself no surprise and underlies why the community emphasises the use of the word weakly in the phrase “weakly connected neural networks”. Indeed, there are a number of potential pitfalls when applying phase reduction techniques to a system that is not in a weakly forced regime. The typical construction of the phase response curve uses only linear information about the isochrons, and non-linear effects will come into play the further we move away from the limit cycle. This problem can be diminished by taking higher order approximations to the isochrons and using this information in the construction of a higher order PRC [17]. Even using perfect information about isochrons, the phase reduction still assumes persistence of the limit-cycle and instantaneous relaxation back to cycle. However, the presence of nearby invariant phase-space structures such as (unstable) fixed points and invariant manifolds may result in trajectories spending long periods of time away from the limit cycle. Moreover, strong forcing will necessarily take one away from the neighbourhood of a cycle where a phase description is expected to hold. Thus, developing a reduced description that captures some notion of distance from cycle is a key component of any theory of forced limit cycle oscillators. The development of phase-amplitude models that better characterise the response of popular high dimensional single neuron models is precisely the topic of this paper. Given that it is a major challenge to construct an isochronal foliation, we use non-isochronal phase-amplitude coordinates as a practical method for obtaining a more accurate description of neural systems. Recently, Medvedev [18] has used this approach to understand in more detail the synchronisation of linearly coupled stochastic limit cycle oscillators.
In Sect. 2, we consider a general coordinate transformation, which recasts the dynamics of a system in terms of phase-amplitude coordinates. This approach is directly taken from the classical theory for analysing periodic orbits of ODEs, originally considered for planar systems in [19], and for general systems in [20]. We advocate it here as one way to move beyond a purely phase-centric perspective. We illustrate the transformation by applying it to a range of popular neuron models. In Sect. 3, we consider how inputs to the neuron are transformed under these coordinate transformations and derive the evolution equations for the forced phase-amplitude system. This reduces to the standard phase description in the appropriate limit. Importantly, we show that the behaviour of the phase-amplitude system is much more able to capture that of the original single neuron model from which it is derived. Focusing on pulsatile forcing, we explore the conditions for neural oscillator models to exhibit shear induced chaos [16]. Finally in Sect. 4, we discuss the relevance of this work to developing a theory of network dynamics that can improve upon the standard weak coupling approach.
2 Phase-Amplitude Coordinates
Throughout this paper, we study the dynamics prescribed by the system dx/dt = f(x), x ∈ ℝ^n, with solutions x(t) satisfying a given initial condition x(0) = x_0. We will assume that the system admits an attracting hyperbolic periodic orbit u(t) (namely one zero Floquet exponent and the others having negative real part), with period Δ, such that u(t) = u(t + Δ). A phase θ ∈ [0, Δ) is naturally defined from θ = t mod Δ along the orbit. It has long been known in the dynamical systems community how to construct a coordinate system based on this notion of phase as well as a distance from cycle; see [20] for a discussion. In fact, Ermentrout and Kopell [21] made good use of this approach to derive the phase-interaction function for networks of weakly connected limit-cycle oscillators in the limit of infinitely fast attraction to cycle. However, this assumption is particularly extreme and unlikely to hold for a broad class of single neuron models. Thus, it is interesting to return to the full phase-amplitude description. In essence, the transformation to these coordinates involves setting up a moving orthonormal system around the limit cycle. One axis of this system is chosen to be in the direction of the tangent vector along the orbit, and the remaining are chosen to be orthogonal. We introduce the normalised tangent vector ξ as

ξ(θ) = f(u(θ)) / |f(u(θ))|.   (1)
The remaining coordinate axes are conveniently grouped together as the columns of an n × (n − 1) matrix ζ. In this case we can write an arbitrary point x in a neighbourhood of the orbit as

x = u(θ) + ζ(θ)ρ,  ρ ∈ ℝ^{n−1}.
Here, |ρ| represents the Euclidean distance from the limit cycle. A caricature of the coordinate system along an orbit segment is shown in Fig. 2. Through the use of the variable ρ, we can consider points away from the periodic orbit. Rather than being isochronal, lines of constant θ are simply straight lines that emanate from a point on the orbit in the direction of the normal. The technical details of specifying the orthonormal coordinates forming ζ are discussed in Appendix A.
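A minimal numerical sketch of such a frame is given below. The QR-based completion of the tangent vector is one convenient choice made here for illustration only: it is not the construction of Appendix A and is not guaranteed to vary smoothly with θ; the callables f and the cycle point u(θ) are assumed to be supplied by the user (e.g. from a periodic boundary value solver).

```python
# Minimal sketch: a moving orthonormal frame (xi, zeta) at a point u(theta) of a
# computed limit cycle.  The QR-based completion below is one convenient choice,
# not the direction-angle construction of Appendix A, and it need not vary
# smoothly with theta.
import numpy as np

def moving_frame(f, u_theta):
    """Unit tangent xi and an n x (n-1) orthonormal complement zeta at u(theta)."""
    fu = np.asarray(f(u_theta), dtype=float)
    xi = fu / np.linalg.norm(fu)
    n = xi.size
    if n == 2:                              # planar case: zeta is the unit normal
        zeta = np.array([[-xi[1]], [xi[0]]])
    else:                                   # complete xi to an orthonormal basis via QR
        Q, _ = np.linalg.qr(np.column_stack([xi, np.eye(n)[:, :n-1]]))
        zeta = Q[:, 1:]
    return xi, zeta

# Given theta such that x - u(theta) is normal to the orbit, the decomposition
# x = u(theta) + zeta @ rho gives rho = zeta.T @ (x - u(theta)).
```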
Upon projecting the dynamics onto the moving orthonormal system, we obtain the dynamics of the transformed system:

dθ/dt = 1 + f_1(θ, ρ),
dρ/dt = A(θ)ρ + f_2(θ, ρ),

where

f_1(θ, ρ) = ξᵀ(θ) f(u(θ) + ζ(θ)ρ) / [ |f(u(θ))| + ξᵀ(θ) ζ′(θ)ρ ] − 1,  A(θ) = ζᵀ(θ) [ Df(u(θ)) ζ(θ) − ζ′(θ) ],

with ζ′ ≡ dζ/dθ, f_2(θ, ρ) collecting the remaining terms (which are of second order in ρ), and Df is the Jacobian of the vector field f evaluated along the periodic orbit u. The derivation of this system may be found in Appendix A. It is straightforward to show that f_1(θ, ρ) → 0 as ρ → 0, and that f_2(θ, 0) = 0. In the above, f_1 captures the shear present in the system, that is, whether the speed of θ increases or decreases dependent on the distance from cycle. A precise definition for shear is given in [22]. Additionally, A describes the θ-dependent rate of attraction or repulsion from cycle.
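Assuming the expressions written above, the attraction rate A(θ) can be evaluated numerically along a computed cycle. The sketch below does this for a planar model, using a centred finite difference for dζ/dθ; u_of, f and Df are hypothetical user-supplied callables.

```python
# Sketch: evaluate A(theta) = zeta^T (Df(u) zeta - d(zeta)/d(theta)) along a
# planar limit cycle, with the derivative of zeta estimated by a centred
# difference.  u_of(theta), f(x) and Df(x) are assumed user-supplied callables.
import numpy as np

def zeta_planar(f, u_of, theta):
    fu = np.asarray(f(u_of(theta)), dtype=float)
    xi = fu / np.linalg.norm(fu)
    return np.array([-xi[1], xi[0]])        # unit normal to the orbit

def A_planar(f, Df, u_of, theta, d=1e-4):
    z = zeta_planar(f, u_of, theta)
    dz = (zeta_planar(f, u_of, theta + d) - zeta_planar(f, u_of, theta - d)) / (2 * d)
    return z @ (np.asarray(Df(u_of(theta))) @ z - dz)   # scalar attraction rate
```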
It is pertinent to consider where this coordinate transformation breaks down, that is, where the determinant of the Jacobian of the transformation vanishes. This never vanishes on-cycle (where ρ = 0), but may do so for some ρ ≠ 0. This sets an upper bound on how far away from the limit cycle we can describe the system using these phase-amplitude coordinates. In Fig. 3, we plot the curve along which the transformation breaks down for the ML model. We observe that, for some values of θ, the breakdown distance k is relatively small. The breakdown occurs where lines of constant θ cross, so that the transformation ceases to be invertible; these values of θ correspond to points along which the orbit has high curvature. We note that this issue is less problematic in higher dimensional models.
If we now consider the driven system

dx/dt = f(x) + ε g(x, t),   (7)

where ε is not necessarily small, we may apply the same transformation as above to obtain the dynamics in (θ, ρ) coordinates, where θ ∈ [0, Δ) and ρ ∈ ℝ^{n−1}, as

dθ/dt = 1 + f_1(θ, ρ) + ε h(θ, ρ) g(u(θ) + ζ(θ)ρ, t),   (8)
dρ/dt = A(θ)ρ + f_2(θ, ρ) + ε B(θ, ρ) g(u(θ) + ζ(θ)ρ, t),   (9)

with

h(θ, ρ) = ξᵀ(θ) / [ |f(u(θ))| + ξᵀ(θ) ζ′(θ)ρ ],  B(θ, ρ) = ζᵀ(θ) [ I_n − ζ′(θ)ρ h(θ, ρ) ],

and I_n is the n × n identity matrix. Here, h and B describe the effect, in terms of θ and ρ, that the perturbations have. Details of the derivation are given in Appendix A. For planar models, B(θ, ρ) = ζᵀ(θ). To demonstrate the application of the above coordinate transformation, we now consider some popular single neuron models.
2.1 A 2D Conductance Based Model
The ML model was originally developed to describe the voltage dynamics of barnacle giant muscle fibre [23], and is now a popular modelling choice in computational neuroscience [7]. It is written as a pair of coupled non-linear ODEs of the form

C dv/dt = I − g_Ca m_∞(v)(v − V_Ca) − g_K w (v − V_K) − g_L (v − V_L),  dw/dt = φ (w_∞(v) − w)/τ_w(v).   (11)
Here, v is the membrane voltage, whilst w is a gating variable, describing the fraction of membrane ion channels that are open at any time. The first equation expresses Kirchhoff’s current law across the cell membrane, with I representing a stimulus in the form of an injected current. The detailed form of the model is completed in Appendix B.1. The ML model has a very rich bifurcation structure. Roughly speaking, by varying the constant current I, one observes, in different parameter regions, dynamical regimes corresponding to sinks, limit cycles, and Hopf, saddle-node and homoclinic bifurcations, as well as combinations of the above. These scenarios are discussed in detail in [7] and [24].
As the ML model is planar, ρ is a scalar, as are the functions A and f_2. This allows us to use the moving coordinate system to clearly visualise parts of phase space where trajectories are attracted towards the limit cycle, and parts in which they move away from it, as illustrated in Fig. 4. The functions f_1 and A, evaluated near the cycle, are shown in Fig. 5. The evolution of θ proceeds at a roughly constant rate for most of the cycle; however, we clearly observe portions of the trajectories where it is slowed. In fact, this corresponds to where trajectories pass near to the saddle node, and the dynamics stall. In Fig. 5, we see that both f_1 and A are indeed close to 0 in this region. The reduced velocity of trajectories here highlights the importance of considering other phase space structures in forced systems, the details of which are missed in standard phase only models. Forcing in the presence of such structures may give rise to complex and even chaotic behaviours, as we shall see in Sect. 3.
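For planar models, a simulated trajectory can be mapped into (θ, ρ) coordinates by projecting each point onto a densely sampled copy of the cycle; within the region where the transformation is valid, the nearest cycle point lies along the normal, so this simple projection is a practical stand-in for inverting x = u(θ) + ζ(θ)ρ. The sketch below assumes that cycle samples u_samples (with matching phases theta_grid) have already been computed, e.g. with a periodic boundary value solver; these names are introduced here for illustration only.

```python
# Sketch: map a simulated planar trajectory into (theta, rho) coordinates by
# projecting onto a densely sampled limit cycle (u_samples[i] ~ u(theta_grid[i])).
# Valid only inside the neighbourhood where the phase-amplitude chart is invertible.
import numpy as np

def to_phase_amplitude(x, theta_grid, u_samples, f):
    """x: (m, 2) trajectory; returns arrays theta(t), rho(t)."""
    thetas, rhos = [], []
    for point in x:
        i = np.argmin(np.linalg.norm(u_samples - point, axis=1))   # closest cycle point
        fu = np.asarray(f(u_samples[i]), dtype=float)
        xi = fu / np.linalg.norm(fu)
        zeta = np.array([-xi[1], xi[0]])                           # unit normal
        thetas.append(theta_grid[i])
        rhos.append(zeta @ (point - u_samples[i]))                 # signed distance
    return np.array(thetas), np.array(rhos)
```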
In the next example, we show how the same ideas carry over to higher dimensional models.
2.2 A 4D Conductance Based Model
The Connor–Stevens (CS) model [25] is built upon the Hodgkin–Huxley formalism and comprises a fast Na+ current, a delayed K+ current, a leak current and a transient K+ current, termed the A-current. The full CS model consists of 6 equations: the membrane potential, the original Hodgkin–Huxley gating variables, and an activating and an inactivating gating variable for the A-current. Using the method of equivalent potentials [26], we may reduce the dimensionality of the system to include only 4 variables; the details of the reduced CS model are given in Appendix B.2.
The solutions to the reduced CS model under the coordinate transformation may be seen in Fig. 6, whilst, in Fig. 7, we show how this solution looks in the original coordinates. As for the ML model, θ evolves at an approximately constant rate throughout, though this evolution is sped up over part of the cycle. The trajectories of the vector ρ are more complicated, but note that there is regularity in the pattern exhibited, and that this occurs with approximately the same period as the underlying limit cycle. The damping of the amplitude of oscillations in ρ over successive periods represents the overall attraction to the limit cycle, whilst the regular behaviour of ρ represents the specific relaxation to cycle as shown in Fig. 7. Additional file 1 shows a movie of the trajectory in the original coordinates with the moving orthonormal system superimposed, as well as the solution in ρ for comparison.
3 Pulsatile Forcing of Phase-Amplitude Oscillators
We now consider a system with time-dependent forcing, given by (7) with

g(x, t) = (1, 0, …, 0)ᵀ ∑_{n ∈ ℤ} δ(t − nT),   (14)
where δ is the Dirac δ-function. This describes T-periodic kicks to the voltage variable. Even such a simple forcing paradigm can give rise to rich dynamics [16]. For the periodically kicked ML model, shear forces can lead to chaotic dynamics as folds and horseshoes accumulate under the forcing. This means that the response of the neuron is extremely sensitive to the initial phase when the kicks occur. In terms of neural response, this means that the neuron is unreliable [27].
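Numerically, such pulsatile forcing is conveniently implemented as an instantaneous jump in the voltage variable between smooth integration segments. The sketch below does this for a generic right-hand side; the kick amplitude and period are illustrative assumed values.

```python
# Sketch: T-periodic delta kicks to the voltage variable, implemented as
# instantaneous jumps v -> v + eps between smooth integration segments.
# rhs is the model vector field (e.g. the ML equations); eps, T are assumed values.
import numpy as np
from scipy.integrate import solve_ivp

def kicked_trajectory(rhs, x0, eps=1.0, T=5.0, n_kicks=100):
    x = np.array(x0, dtype=float)
    states = [x.copy()]
    for _ in range(n_kicks):
        x[0] += eps                                   # kick the voltage component
        sol = solve_ivp(rhs, (0.0, T), x, rtol=1e-8, atol=1e-10)
        x = sol.y[:, -1]                              # state just before the next kick
        states.append(x.copy())
    return np.array(states)                           # stroboscopic samples
```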
The behaviour of oscillators under such periodic pulsatile forcing is the subject of a number of studies; see, e.g. [27–30]. Of particular relevance here is [27], in which qualitative reasoning about the mechanisms that bring about shear in such models is supplemented by direct numerical simulations to detect the presence of chaotic solutions. For the ML model in a parameter region close to the homoclinic regime, kicks can cause trajectories to pass near the saddle-node, and folds may occur as a result [16].
We here would like to compare full planar neural models to the simple model studied in [27]:

dθ/dt = 1 + σρ,  dρ/dt = −λρ + ε P(θ) ∑_{n ∈ ℤ} δ(t − nT).   (15)
This system exhibits dynamical shear which, under certain conditions, can lead to chaotic dynamics. The shear parameter σ dictates how much trajectories are ‘sped up’ or ‘slowed down’ dependent on their distance from the limit cycle, whilst λ is the rate of attraction back to the limit cycle, which is independent of θ. Supposing that the function P is smooth but non-constant, trajectories will be taken a variable distance from the cycle upon the application of the kick. When kicks are repeated, this geometric mechanism can lead to repeated stretching and folding of phase space. It is clear that the larger σ is in (15), the more shear is present, and the more likely we are to observe the folding effect. In a similar way, smaller values of λ mean that the shear has longer to act upon trajectories and again result in a greater likelihood of chaos. Finally, to observe a chaotic response, we must ensure that the shear forces have sufficient time to act, meaning that T, the time between kicks, must not be too small.
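Between kicks the flow of (15) is available in closed form, so the kicked dynamics can be iterated as a simple map: a jump ρ → ρ + εP(θ) followed by the exact relaxation over one forcing period. In the sketch below, the choice P(θ) = sin(2πθ), the convention θ ∈ [0, 1), and the parameter values are all illustrative assumptions.

```python
# Sketch of the kicked shear model (15): an instantaneous kick rho -> rho + eps*P(theta),
# followed by the exact between-kick flow
#   theta -> theta + T + (sigma/lam)*rho*(1 - exp(-lam*T)),  rho -> rho*exp(-lam*T).
# P(theta) = sin(2*pi*theta) and theta taken mod 1 are assumptions made here.
import numpy as np

def kick_flow_map(theta, rho, sigma=3.0, lam=0.1, eps=0.1, T=10.0):
    rho = rho + eps * np.sin(2 * np.pi * theta)            # kick
    decay = np.exp(-lam * T)
    theta = np.mod(theta + T + (sigma / lam) * rho * (1.0 - decay), 1.0)
    rho = rho * decay                                      # relaxation back to cycle
    return theta, rho

theta, rho = 0.3, 0.0
for _ in range(1000):
    theta, rho = kick_flow_map(theta, rho)
```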
This stretching and folding action can clearly lead to the formation of Smale horseshoes, which are well known to lead to a type of chaotic behaviour. However, horseshoes may co-exist with sinks, meaning the resulting chaotic dynamics would be transient. Wang and Young proved that, under appropriate conditions, there is a set of kick periods T of positive Lebesgue measure for which the system experiences a stronger form of sustained chaotic behaviour, characterised by a positive Lyapunov exponent for almost all initial conditions and the existence of a ‘strange attractor’; see, e.g. [28–30].
By comparing with the phase-amplitude dynamics described by Eqs. (8)–(9), we see that the model of shear considered in (15) is a proxy for a more general system, with f_1(θ, ρ) replaced by σρ, with A(θ)ρ + f_2(θ, ρ) replaced by −λρ, and with the forcing acting only in the ρ equation through the function P(θ).
To gain a deeper insight into the phenomenon of shear induced chaos, it is pertinent to study the isochrons of the limit cycle for the linear model (15), where the isochrons are simply lines with slope −λ/σ. In Fig. 8, we depict the isochrons and the stretch and fold action of shear. The thin grey line at ρ = 0 is kicked, at t = 0, to the bold solid curve with ρ = εP(θ), as studied in [16], and this curve is allowed to evolve under the dynamics with no further kicks, through the dashed curve and ultimately to the dotted curve, which may be considered as evolutions of the solid curve by integer multiples of the natural period of the oscillator. Every point of the dotted curve then traverses the isochron it is on as t → ∞. The green marker shows an example of one such point evolving along its associated isochron. The folding effect of this is clear in the figure, and further illustrated in the video in Additional file 2.
This simple model with a harmonic form for P(θ) provides insight into how strange attractors can be formed. Kicks along the isochrons, or ones that map isochrons to one another, will not produce strange attractors, but merely phase-shifts. What causes the stretching and folding is the variation in how far points are moved as measured in the direction transverse to the isochrons. For the linear system (15), variation in this sense is generated by any non-constant P; the larger the ratio σ/λ, the larger the variation (see [16] for a recent discussion).
The formation of chaos in the ML model is shown in Fig. 9. Here, we plot the response to periodic pulsatile forcing, given by (14), in the original (v, w) coordinate system. This clearly illustrates a folding of phase space around the limit-cycle, and is further portrayed in the video in Additional file 3. We now show how this can be understood using phase-amplitude coordinates.
Compared to the phenomenological system (15), models written in phase-amplitude coordinates as (8)–(9) have two main differences. The intrinsic dynamics (without kicks) are non-linear, and the kick terms appear in both the equation for θ and the equation for ρ (not just the latter). Simulations of (8)–(9) for both the FHN and ML models, with forcing given by (14), show that the replacement of f_1(θ, ρ) by σρ, dropping f_2 (which is quadratic in ρ), and setting A(θ) to a constant −λ does not lead to any significant qualitative change in behaviour (for a wide range of σ and λ). We therefore conclude that, at least when the kick amplitude ε is not too large, it is more important to focus on the form of the forcing in the phase-amplitude coordinates. In what follows, we are interested in discovering the effects of different functional forms of the forcing term multiplying the δ-function, keeping other factors fixed. As examples, we choose those forcing terms given by transforming the FHN and the ML models into phase-amplitude coordinates. To find these functions, we first find the attracting limit cycle solution in the ML model (11) and FHN model (52) using a periodic boundary value problem solver and set up the orthonormal coordinate system around this limit cycle. Once the coordinate system is established, we evaluate the functions h and B (that appear in Eqs. (8) and (9)). For planar systems, we have simply that B(θ, ρ) = ζᵀ(θ). Using the forcing term (14), we are only considering perturbations to the voltage component of our system, and thus only the first component of h, and the first column of B, will make a non-trivial contribution to the dynamics. We define h_v as the first component of h and ζ_v as the first component of ζ. We wish to force each system at the same ratio of the natural frequency of the underlying periodic orbit. To ease comparison between the system with the ML forcing terms and the FHN forcing terms, we rescale time so that both systems have the same natural period in what follows. Implementing the above choices leads to

dθ/dt = 1 + σρ + ε h_v(θ, ρ) ∑_{n ∈ ℤ} δ(t − nT),  dρ/dt = −λρ + ε ζ_v(θ) ∑_{n ∈ ℤ} δ(t − nT).   (16)
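For a planar model kicked only in the voltage component, the on-cycle forcing functions follow directly from the moving frame: using the expressions above, h_v reduces to ξ_1/|f(u)| at ρ = 0 and ζ_v is the first component of the unit normal. A brief sketch, with hypothetical user-supplied callables u_of and f, is as follows.

```python
# Sketch: on-cycle forcing components entering (16) for a planar model kicked in
# v only, using the expressions above with rho = 0.  u_of and f are assumed
# user-supplied callables (cycle point and vector field).
import numpy as np

def forcing_components(f, u_of, theta):
    fu = np.asarray(f(u_of(theta)), dtype=float)
    speed = np.linalg.norm(fu)
    xi = fu / speed
    zeta = np.array([-xi[1], xi[0]])
    h_v = xi[0] / speed          # phase response of an on-cycle voltage kick
    zeta_v = zeta[0]             # amplitude response of a voltage kick
    return h_v, zeta_v
```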
It is important to emphasise that h_v and ζ_v are determined by the underlying single neuron model (unlike in the toy model (15)). As emphasised in [31], one must take care in the treatment of the state dependent ‘jumps’ caused by the δ-functions in (16) to accommodate the discontinuous nature of θ and ρ at the time of the kick. To solve (16), we approximate δ(t) with a normalised square pulse of the form

δ(t) ≈ δ_τ(t) = 1/τ for 0 ≤ t < τ, and 0 otherwise,   (17)
where 0 < τ ≪ T. This means that for t ∈ (nT + τ, (n + 1)T), n ∈ ℤ, the dynamics are governed by the linear system

dθ/dt = 1 + σρ,   (18)
dρ/dt = −λρ.   (19)

This can be integrated to obtain the state of the system just before the arrival of the next kick, (θ_{n+1}, ρ_{n+1}), in the form

θ_{n+1} = θ(nT + τ) + (T − τ) + (σ/λ) ρ(nT + τ) [1 − e^{−λ(T − τ)}],  ρ_{n+1} = ρ(nT + τ) e^{−λ(T − τ)}.
In the interval t ∈ [nT, nT + τ), and using (17), we now need to solve the system of non-linear ODEs

dθ/dt = 1 + σρ + (ε/τ) h_v(θ, ρ),  dρ/dt = −λρ + (ε/τ) ζ_v(θ).
Rescaling time as s = (t − nT)/τ, with s ∈ [0, 1], and writing the solution as a regular perturbation expansion in powers of τ as θ = θ^{(0)} + τθ^{(1)} + ⋯, ρ = ρ^{(0)} + τρ^{(1)} + ⋯, we find after collecting terms of leading order in τ that the pair (θ^{(0)}, ρ^{(0)}) is governed by

dθ^{(0)}/ds = ε h_v(θ^{(0)}, ρ^{(0)}),  dρ^{(0)}/ds = ε ζ_v(θ^{(0)}),
with initial conditions (θ^{(0)}(0), ρ^{(0)}(0)) = (θ_n, ρ_n), the state just before the kick. The solution at s = 1 (obtained numerically) can then be taken as initial data in (18)–(19) to form the stroboscopic map

θ_{n+1} = θ^{(0)}(1) + T + (σ/λ) ρ^{(0)}(1) [1 − e^{−λT}],   (22)
ρ_{n+1} = ρ^{(0)}(1) e^{−λT}.   (23)
Note that this has been constructed using a matched asymptotic expansion, using (17), and is valid in the limit τ → 0. For weak forcing, where ε ≪ 1, θ and ρ vary slowly through the kick and can be approximated by their values just before the kick, so that to first order in ε

θ_{n+1} = θ_n + ε h_v(θ_n, ρ_n) + T + (σ/λ) [ρ_n + ε ζ_v(θ_n)] [1 − e^{−λT}],   (24)
ρ_{n+1} = [ρ_n + ε ζ_v(θ_n)] e^{−λT}.   (25)
Although this explicit map is convenient for numerical simulations, we prefer to work with the full stroboscopic map (22)–(23), which is particularly useful for comparing and contrasting the behaviour of different planar single neuron models with arbitrary kick strength. As an indication of the presence of chaos in the dynamics resulting from this system, we evaluate the largest Lyapunov exponent of the map (22)–(23) by numerically evolving a tangent vector and computing its rate of growth (or contraction); see e.g. [32] for details.
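A sketch of this procedure is given below: the kick stage is integrated over the rescaled pulse, the between-kick flow is applied in closed form, and the largest Lyapunov exponent is estimated by evolving a tangent vector, here via a finite-difference action of the map's Jacobian. The functions h_v and ζ_v are assumed to be user-supplied (and periodic in θ), and the parameter values are illustrative only.

```python
# Sketch: the stroboscopic map (kick stage integrated over the rescaled pulse,
# then the explicit between-kick flow), plus a largest-Lyapunov-exponent estimate
# from a tangent vector evolved with a finite-difference Jacobian.
# h_v, zeta_v are assumed user-supplied callables; sigma, lam, eps, T are assumed values.
import numpy as np
from scipy.integrate import solve_ivp

def strobe_map(state, h_v, zeta_v, sigma=3.0, lam=0.1, eps=0.5, T=10.0):
    theta, rho = state
    # kick stage: leading-order system in rescaled time s in [0, 1]
    kick = solve_ivp(lambda s, y: [eps * h_v(y[0], y[1]), eps * zeta_v(y[0])],
                     (0.0, 1.0), [theta, rho], rtol=1e-8)
    th1, rh1 = kick.y[:, -1]
    # between-kick flow of dtheta/dt = 1 + sigma*rho, drho/dt = -lam*rho over time T
    decay = np.exp(-lam * T)
    return np.array([th1 + T + (sigma / lam) * rh1 * (1.0 - decay), rh1 * decay])

def largest_lyapunov(x0, h_v, zeta_v, n_iter=2000, d=1e-6, **kw):
    x, v, s = np.array(x0, dtype=float), np.array([1.0, 0.0]), 0.0
    for _ in range(n_iter):
        # finite-difference action of the map's Jacobian on the tangent vector v
        Jv = (strobe_map(x + d * v, h_v, zeta_v, **kw) - strobe_map(x, h_v, zeta_v, **kw)) / d
        s += np.log(np.linalg.norm(Jv))
        v = Jv / np.linalg.norm(Jv)
        x = strobe_map(x, h_v, zeta_v, **kw)
    return s / n_iter
```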
In Fig. 10, we compare the functions h_v and ζ_v for both the FHN and the ML models. We note that ζ_v for the FHN model is near 0 for a large set of θ, whilst the same is true of h_v for the ML model. This means that kicks in the FHN model will tend to primarily cause phase shifts, whilst the same kicks in the ML model will primarily cause shifts in amplitude.
We plot in the top row of Fig. 11 the pair (θ_n, ρ_n) for (24)–(25), for the FHN and ML models. For the FHN model, we find that the system has a largest Lyapunov exponent of −0.0515. For the ML model, the Lyapunov exponent is 0.6738. This implies that differences in the functional forms of h_v and ζ_v can help to explain the generation of chaos.
Now that we know the relative contribution of kicks in v to the evolution of θ and ρ, it is also useful to know where kicks actually occur in terms of θ, as this will determine the contribution of a train of kicks to the dynamics. In Figs. 11c and d, we plot the distribution of kicks as a function of θ. For the ML model, we observe that the kicks are distributed over all phases, while for the FHN model there is a grouping of kicks around the region where ζ_v is roughly zero. This means that kicks will not be felt as much in the ρ variable, and so trajectories here do not get kicked far from cycle. This helps explain why it is more difficult to generate chaotic responses in the FHN model.
After transients, we observe a phase-locked state for the FHN model. For a phase-locked state, small perturbations will ultimately decay as the perturbed trajectories also end up at the phase-locked state after some transient behaviour. This results in a negative largest Lyapunov exponent of −0.0515. We note the sharply peaked distribution of kick phases, which is to be expected for discrete-time systems possessing a negative largest Lyapunov exponent, since such systems tend to have sinks in this case. The phase-locked state here occurs where ζ_v is small, suggesting that trajectories stay close to the limit cycle. Since kicks do not move trajectories away from cycle, there is no possibility of folding, and hence no chaotic behaviour. For the ML model, we observe chaotic dynamics around a strange attractor, where small perturbations can grow, leading to a positive largest Lyapunov exponent of 0.6738. This time, the kicks are distributed fairly uniformly across θ, and so some kicks will take trajectories away from the limit cycle, thus leading to shear-induced folding and chaotic behaviour.
4 Discussion
In this paper, we have used the notion of a moving orthonormal coordinate system around a limit cycle to study dynamics in a neighbourhood of it. This phase-amplitude coordinate system can be constructed for any given ODE system supporting a limit cycle. A clear advantage of the transformed description over the original one is that it allows us to gain insight into the effect of time dependent perturbations, using the notion of shear, as we have illustrated by performing case studies of popular neural models, in two and higher dimensions. Whilst this coordinate transformation does not, unlike classical phase reduction techniques, result in any reduction in the dimensionality of the system, it opens up avenues for moving away from the weak coupling limit, where ε → 0. Importantly, it emphasises the role of the two functions h and B, which provide more information about inputs to the system than the iPRC alone. It has been demonstrated that moderately small perturbations can exert remarkable influence on dynamics in the presence of other invariant structures [16], which cannot be captured by a phase only description. In addition, small perturbations can accumulate if the timescale of the perturbation is shorter than the timescale of attraction back to the limit cycle. This should be given particular consideration in the analysis of neural systems, where oscillators may be connected to thousands of other units, so that small inputs can quickly accumulate.
One natural extension of this work is to move beyond the theory of weakly coupled oscillators to develop a framework for describing neural systems as networks of phase-amplitude units. This has previously been considered for the case of weakly coupled, weakly dissipative networks of non-linear planar oscillators (modelled by small dissipative perturbations of a Hamiltonian oscillator) [33–35]. It would be interesting to develop these ideas and obtain network descriptions of the following type:

dθ_i/dt = 1 + f_1(θ_i, ρ_i) + ε ∑_j H_θ(θ_i, ρ_i; θ_j, ρ_j),  dρ_i/dt = A(θ_i)ρ_i + f_2(θ_i, ρ_i) + ε ∑_j H_ρ(θ_i, ρ_i; θ_j, ρ_j),
with an appropriate identification of the interaction functions H_θ and H_ρ in terms of the biological interaction between neurons and the single neuron functions f_1, f_2, A, h and B. Such phase-amplitude network models are ideally suited to describing the behaviour of the mean-field signal in networks of strongly gap junction coupled ML neurons [36, 37], which is known to vary because individual neurons make transitions between cycles of different amplitudes. Moreover, in the same network, weakly coupled oscillator theory fails to explain how the synchronous state can stabilise with increasing coupling strength (predicting that it is always unstable), as observed numerically. All of the above are topics of ongoing research and will be reported upon elsewhere.
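A purely schematic sketch of how such a network description might be simulated is given below; the all-to-all averaging and every functional form appearing in it are illustrative assumptions rather than a prescription tied to a particular biological coupling.

```python
# Schematic only: N planar phase-amplitude units with generic pairwise coupling.
# Every functional form here (f1, f2, A, H_th, H_rh, all-to-all averaging) is an
# illustrative assumption; f1, f2, A are assumed to act elementwise on arrays.
import numpy as np

def network_rhs(t, y, N, f1, f2, A, H_th, H_rh, eps):
    th, rh = y[:N], y[N:]
    c_th = np.array([np.mean([H_th(th[i], rh[i], th[j], rh[j]) for j in range(N)])
                     for i in range(N)])
    c_rh = np.array([np.mean([H_rh(th[i], rh[i], th[j], rh[j]) for j in range(N)])
                     for i in range(N)])
    dth = 1.0 + f1(th, rh) + eps * c_th          # phase equations
    drh = A(th) * rh + f2(th, rh) + eps * c_rh   # amplitude equations
    return np.concatenate([dth, drh])
```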
Appendix A: Derivation of the Transformed Dynamical System
Starting from

dx/dt = f(x) + ε g(x, t),

we make the transformation x = u(θ) + ζ(θ)ρ, giving

[u′(θ) + ζ′(θ)ρ] dθ/dt + ζ(θ) dρ/dt = f(u(θ) + ζ(θ)ρ) + ε g(u(θ) + ζ(θ)ρ, t),   (29)

where u′ = du/dθ = f(u) and ζ′ = dζ/dθ.
We proceed by projecting (29) onto ξ, using (1). Since ξᵀζ = 0, the left-hand side of (29) now reads:

[ |f(u(θ))| + ξᵀ(θ) ζ′(θ)ρ ] dθ/dt,

where ξᵀ denotes the transpose of ξ, and the right-hand side of (29) becomes

ξᵀ(θ) f(u(θ) + ζ(θ)ρ) + ε ξᵀ(θ) g(u(θ) + ζ(θ)ρ, t).
Thus,

dθ/dt = 1 + f_1(θ, ρ) + ε h(θ, ρ) g(u(θ) + ζ(θ)ρ, t),   (32)

where

f_1(θ, ρ) = ξᵀ(θ) f(u(θ) + ζ(θ)ρ) / [ |f(u(θ))| + ξᵀ(θ) ζ′(θ)ρ ] − 1,

and

h(θ, ρ) = ξᵀ(θ) / [ |f(u(θ))| + ξᵀ(θ) ζ′(θ)ρ ].
Upon projecting both sides of (29) onto ζᵀ, the left-hand side reads

ζᵀ(θ) ζ′(θ)ρ dθ/dt + dρ/dt,

whilst the right-hand side becomes

ζᵀ(θ) f(u(θ) + ζ(θ)ρ) + ε ζᵀ(θ) g = ζᵀ(θ) Df(u(θ)) ζ(θ)ρ + O(|ρ|²) + ε ζᵀ(θ) g,

since ζᵀ(θ) f(u(θ)) = 0 and where Df denotes the Jacobian of f. Putting together the previous two equations yields

dρ/dt = A(θ)ρ + f_2(θ, ρ) + ε B(θ, ρ) g(u(θ) + ζ(θ)ρ, t),   (37)
where

A(θ) = ζᵀ(θ) [ Df(u(θ)) ζ(θ) − ζ′(θ) ],
f_2(θ, ρ) = ζᵀ(θ) [ f(u(θ) + ζ(θ)ρ) − Df(u(θ)) ζ(θ)ρ ] − ζᵀ(θ) ζ′(θ)ρ f_1(θ, ρ),
B(θ, ρ) = ζᵀ(θ) [ I_n − ζ′(θ)ρ h(θ, ρ) ].

It may be easily seen that f_1(θ, ρ) → 0 as ρ → 0, that f_2(θ, 0) = 0, and that B(θ, 0) = ζᵀ(θ). Overall, combining (32) and (37) we arrive at the transformed system:

dθ/dt = 1 + f_1(θ, ρ) + ε h(θ, ρ) g(u(θ) + ζ(θ)ρ, t),
dρ/dt = A(θ)ρ + f_2(θ, ρ) + ε B(θ, ρ) g(u(θ) + ζ(θ)ρ, t).
In order to evaluate the functions f_1, f_2, h, B and A for models with dimension larger than two, we need to calculate ζ′ = dζ/dθ. Defining the direction angles of ξ, we have that
where the index i denotes the column entry of ζ and denotes the dot product between vectors x and y. Defining
and
where j denotes the row index, we have
By the quotient rule for vectors we find that
and that
Overall, we have that
Appendix B: Gallery of Models
B.1 Morris–Lecar
The ML equations describe the interaction of membrane voltage with just two ionic currents: a fast calcium current and a slower potassium current. Membrane ion channels are selective for specific types of ions; their dynamics are modelled here by the gating variable w and the auxiliary functions m_∞, w_∞, and τ_w. The latter have the form

m_∞(v) = [1 + tanh((v − v_1)/v_2)]/2,  w_∞(v) = [1 + tanh((v − v_3)/v_4)]/2,  τ_w(v) = 1/cosh((v − v_3)/(2v_4)).
The function m_∞ models the action of fast voltage-gated calcium ion channels; V_Ca is the reversal (bias) potential for the calcium current and g_Ca the corresponding conductance. The functions w_∞ and τ_w similarly describe the dynamics of the slower-acting potassium channels, with their own reversal potential V_K and conductance g_K. The constants g_L and V_L characterise the leakage current that is present even when the neuron is in a quiescent state. Parameter values are , , , , , , , , , , , , and .
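For reference, a sketch of the right-hand side in the form written above is given below; the numerical parameter values are a commonly used illustrative set and are an assumption made here, not necessarily the set used for the figures of this paper.

```python
# Standard ML right-hand side in the form written above.  The parameter values
# are a commonly used illustrative set (an assumption), not necessarily those of
# this paper.
import numpy as np

P = dict(C=20.0, gCa=4.0, gK=8.0, gL=2.0, VCa=120.0, VK=-84.0, VL=-60.0,
         v1=-1.2, v2=18.0, v3=12.0, v4=17.4, phi=0.067, I=40.0)   # assumed values

def ml_rhs(t, x, p=P):
    v, w = x
    m_inf = 0.5 * (1 + np.tanh((v - p['v1']) / p['v2']))
    w_inf = 0.5 * (1 + np.tanh((v - p['v3']) / p['v4']))
    tau_w = 1.0 / np.cosh((v - p['v3']) / (2 * p['v4']))
    dv = (p['I'] - p['gCa'] * m_inf * (v - p['VCa'])
                 - p['gK'] * w * (v - p['VK'])
                 - p['gL'] * (v - p['VL'])) / p['C']
    dw = p['phi'] * (w_inf - w) / tau_w
    return [dv, dw]
```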
B.2 Reduced Connor–Stevens Model
For the reduced CS model, we start with the full Hodgkin–Huxley model, with m, n, h as gating variables and use the method of equivalent potentials as treated in [26], giving rise to the following form for the function g:
where and are evaluated at and . For the gating variables , we have
Parameter values are , , , , , , , , , and .
B.3 FitzHugh–Nagumo Model
The FHN model is a phenomenological model of spike generation, comprising 2 variables. The first represents the membrane potential and includes a cubic non-linearity, whilst the second variable is a gating variable, similar to w in the ML model, which may be thought of as a recovery variable. The system is

dv/dt = v − v³/3 − w + I,  dw/dt = γ (v + a − b w),   (52)
where we use the following parameter values: , , , and .
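A sketch of the right-hand side of (52) is given below; the parameter values are illustrative assumptions (a commonly used set, chosen so that a stable limit cycle exists).

```python
# FHN right-hand side in the form (52).  The parameter values below are an
# illustrative assumption, chosen so that the model supports a stable limit cycle.
a, b, gamma, I = 0.7, 0.8, 0.08, 0.8        # assumed values

def fhn_rhs(t, x):
    v, w = x
    dv = v - v**3 / 3.0 - w + I             # cubic voltage nonlinearity
    dw = gamma * (v + a - b * w)            # slow recovery variable
    return [dv, dw]
```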
Abbreviations
- ML: Morris–Lecar
- FHN: FitzHugh–Nagumo
- CS: Connor–Stevens
- LE: Lyapunov exponent
References
Winfree A: The Geometry of Biological Time. 2nd edition. Springer, Berlin; 2001.
Guckenheimer J: Isochrons and phaseless sets. J Math Biol 1975, 1: 259–273. 10.1007/BF01273747
Cohen AH, Rand RH, Holmes PJ: Systems of coupled oscillators as models of central pattern generators. In Neural Control of Rhythmic Movements in Vertebrates. Wiley, New York; 1988.
Kopell N, Ermentrout GB: Symmetry and phaselocking in chains of weakly coupled oscillators. Commun Pure Appl Math 1986, 39: 623–660. 10.1002/cpa.3160390504
Ermentrout GB: n:m phase-locking of weakly coupled oscillators. J Math Biol 1981, 12: 327–342. 10.1007/BF00276920
Izhikevich EM: Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. MIT Press, Cambridge; 2007.
Ermentrout GB, Terman DH: Mathematical Foundations of Neuroscience. Springer, Berlin; 2010.
Josic K, Shea-Brown ET, Moehlis J: Isochron. Scholarpedia 2006, 1(8): Article ID 1361.
Guillamon A, Huguet G: A computational and geometric approach to phase resetting curves and surfaces. SIAM J Appl Dyn Syst 2009, 8(3):1005–1042. 10.1137/080737666
Osinga HM, Moehlis J: A continuation method for computing global isochrons. SIAM J Appl Dyn Syst 2010, 9(4):1201–1228. 10.1137/090777244
Mauroy A, Mezic I: On the use of Fourier averages to compute the global isochrons of (quasi)periodic dynamics. Chaos 2012, 22(3): Article ID 033112.
Brown E, Moehlis J, Holmes P: On the phase reduction and response dynamics of neural oscillator populations. Neural Comput 2004, 16: 673–715. 10.1162/089976604322860668
Hoppensteadt FC, Izhikevich EM: Weakly Connected Neural Networks. Springer, Berlin; 1997.
Achuthan S, Canavier CC: Phase-resetting curves determine synchronization, phase locking, and clustering in networks of neural oscillators. J Neurosci 2009, 29(16):5218–5233. 10.1523/JNEUROSCI.0426-09.2009
Yoshimura K: Phase reduction of stochastic limit-cycle oscillators. In Reviews of Nonlinear Dynamics and Complexity. Volume 3. Wiley, New York; 2010:59–90.
Lin KK, Wedgwood KCA, Coombes S, Young LS: Limitations of perturbative techniques in the analysis of rhythms and oscillations. J Math Biol 2013, 66: 139–161. 10.1007/s00285-012-0506-0
Demir A, Suvak O: Quadratic approximations for the isochrons of oscillators: a general theory, advanced numerical methods and accurate phase computations. IEEE Trans Comput-Aided Des Integr Circuits Syst 2010, 29: 1215–1228.
Medvedev GS: Synchronization of coupled stochastic limit cycle oscillators. Phys Lett A 2010, 374: 1712–1720. 10.1016/j.physleta.2010.02.031
Diliberto SP: On systems of ordinary differential equations. Annals of Mathematical Studies 20. In Contributions to the Theory of Nonlinear Oscillations. Princeton University Press, Princeton; 1950:1–38.
Hale JK: Ordinary Differential Equations. Wiley, New York; 1969.
Ermentrout GB, Kopell N: Oscillator death in systems of coupled neural oscillators. SIAM J Appl Math 1990, 50: 125–146. 10.1137/0150009
Ott W, Stenlund M: From limit cycles to strange attractors. Commun Math Phys 2010, 296: 215–249. 10.1007/s00220-010-0994-y
Morris C, Lecar H: Voltage oscillations in the barnacle giant muscle fiber. Biophys J 1981, 35: 193–213. 10.1016/S0006-3495(81)84782-0
Rinzel J, Ermentrout GB: Analysis of neural excitability and oscillations. In Methods in Neuronal Modeling. 1st edition. MIT Press, Cambridge; 1989:135–169.
Connor JA, Stevens CF: Prediction of repetitive firing behaviour from voltage clamp data on an isolated neurone soma. J Physiol 1971, 213: 31–53.
Kepler TB, Abbott LF, Marder E: Reduction of conductance-based neuron models. Biol Cybern 1992, 66: 381–387. 10.1007/BF00197717
Lin KK, Young LS: Shear-induced chaos. Nonlinearity 2008, 21(5):899–922. 10.1088/0951-7715/21/5/002
Wang Q, Young LS: Strange attractors with one direction of instability. Commun Math Phys 2001, 218: 1–97. 10.1007/s002200100379
Wang Q, Young LS: From invariant curves to strange attractors. Commun Math Phys 2002, 225: 275–304. 10.1007/s002200100582
Wang Q: Strange attractors in periodically-kicked limit cycles and Hopf bifurcations. Commun Math Phys 2003, 240: 509–529.
Catllá AJ, Schaeffer DG, Witelski TP, Monson EE, Lin AL: On spiking models for synaptic activity and impulsive differential equations. SIAM Rev 2008, 50: 553–569. 10.1137/060667980
Christiansen F, Rugh F: Computing Lyapunov spectra with continuous Gram–Schmidt orthonormalization. Nonlinearity 1997, 10: 1063–1072. 10.1088/0951-7715/10/5/004
Ashwin P: Weak coupling of strongly nonlinear, weakly dissipative identical oscillators. Dyn Syst 1989, 10(3):2471–2474.
Ashwin P, Dangelmayr G: Isochronicity-induced bifurcations in systems of weakly dissipative coupled oscillators. Dyn Stab Syst 2000, 15(3):263–286. 10.1080/713603745
Ashwin P, Dangelmayr G: Reduced dynamics and symmetric solutions for globally coupled weakly dissipative oscillators. Dyn Syst 2005, 20(3):333–367. 10.1080/14689360500151813
Han SK, Kurrer C, Kuramoto Y: Dephasing and bursting in coupled neural oscillators. Phys Rev Lett 1995, 75: 3190–3193. 10.1103/PhysRevLett.75.3190
Coombes S: Neuronal networks with gap junctions: a study of piecewise linear planar neuron models. SIAM J Appl Dyn Syst 2008, 7(3):1101–1129. 10.1137/070707579
Competing Interests
The authors confirm that they have no competing interests of which they are aware.
Authors’ Contributions
KCAW, KKL, RT and SC contributed equally. All authors read and approved the final manuscript.
Electronic supplementary material
Additional file 1: A movie showing the moving orthonormal system for the Connor–Stevens model. The top panel shows a projection of the moving orthonormal system from the full phase space. Around the point u(θ), where θ is the phase, we establish an orthonormal basis in a subspace of the full phase space. As θ evolves, so does this coordinate system, as shown by the moving black lines, which represent the moving orthonormal basis. In this movie, we choose an initial condition off cycle, shown by the blue orbit. The ρ coordinates along the moving coordinate system are shown in the bottom panel. (MOV 649 kB)
Additional file 2: A movie showing the stretch-and-fold action brought about by shear forces. In this movie, both the shear force and the rate of attraction back to cycle are linear. The limit cycle is first unravelled so that it may be represented by a straight line. We choose a sinusoidal forcing function P(θ) and apply it instantaneously at t = 0. We then allow the resulting image of the kicked orbit to evolve under the flow generated by system (15), with no further kicks, for several natural periods (in arbitrary units). As the curve relaxes back to the cycle, we see that the shear forcing causes a fold in the curve to develop. The accumulation of such folds over successive forcing periods can ultimately give rise to chaotic dynamics, which would not be observed in the corresponding phase-only model. The thinner black lines represent the isochrons of the system which, in this simple example, are straight lines with slope −λ/σ. Since the isochrons of the system are unchanged between kicks, we observe that phase points simply traverse the isochron they are kicked to as they relax back to cycle. (MOV 769 kB)
Additional file 3: A movie showing the accumulation of folds in the kicked ML model. The thin black line represents the underlying periodic orbit of the system dx/dt = f(x), with f taken for the ML model. Every T units of time, we apply a kick taking v → v + ε, where ε is the kick amplitude, whilst leaving w unchanged, to all phase-points. The movie then shows the evolution of all of these phase-points. Please note that this movie does not show trajectories of the system, but the image of points starting on the limit cycle, under the action of the kick composed with the flow generated by dx/dt = f(x). This movie shows the action of 4 such kicks. We observe that trajectories spend a long time near the saddle node to the bottom left of the figure, so that these trajectories travel slower than those close to the limit cycle. As we apply more kicks, we see the folds developing and accumulating. (MOV 2118 kB)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Wedgwood, K.C., Lin, K.K., Thul, R. et al. Phase-Amplitude Descriptions of Neural Oscillator Models. J. Math. Neurosc. 3, 2 (2013). https://doi.org/10.1186/2190-8567-3-2