Phase-Amplitude Descriptions of Neural Oscillator Models

Phase oscillators are a common starting point for the reduced description of many single neuron models that exhibit a strongly attracting limit cycle. The framework for analysing such models in response to weak perturbations is now particularly well advanced, and has allowed for the development of a theory of weakly connected neural networks. However, the strong-attraction assumption may well not be the natural one for many neural oscillator models. For example, the popular conductance based Morris–Lecar model is known to respond to periodic pulsatile stimulation in a chaotic fashion that cannot be adequately described with a phase reduction. In this paper, we generalise the phase description to one that allows one to track the evolution of the distance from the cycle as well as the phase on the cycle. We use a classical technique from the theory of ordinary differential equations that makes use of a moving coordinate system to analyse periodic orbits. The subsequent phase-amplitude description is shown to be very well suited to understanding the response of the oscillator to external stimuli (which are not necessarily weak). We consider a number of examples of neural oscillator models, ranging from planar through to high dimensional models, to illustrate the effectiveness of this approach in providing an improvement over the standard phase-reduction technique. As an explicit application of this phase-amplitude framework, we consider in some detail the response of a generic planar model where the strong-attraction assumption does not hold, and examine the response of the system to periodic pulsatile forcing. In addition, we explore how the presence of dynamical shear can lead to a chaotic response.


Introduction
One only has to look at the plethora of papers and books on the topic of phase oscillators in mathematical neuroscience to see the enormous impact that this tool from dynamical systems theory has had on the way we think about describing neurons and neural networks. Much of this work has its roots in the theory of ordinary differential equations (ODEs) and has been promoted for many years in the work of Winfree [1], Guckenheimer [2], Holmes [3], Kopell [4], Ermentrout [5] and Izhikevich [6] to name but a few. For a recent survey, we refer the reader to the book by Ermentrout and Terman [7]. At heart, the classic phase reduction approach recognises that if a high dimensional non-linear dynamical system (as a model of a neuron) exhibits a stable limit cycle attractor then trajectories near the cycle can be projected onto the cycle.
A natural phase variable is simply the time along the cycle (from some arbitrary origin) relative to the period of oscillation. The notion of phase can even be extended off the cycle using the concept of isochrons [1]. They provide global information about the 'latent phase', namely the phase that will be asymptotically returned to for a trajectory with initial data within the basin of attraction of an exponentially stable periodic orbit. More technically, isochrons can be viewed as the leaves of the invariant foliation of the stable manifold of a periodic orbit [8]. In rotating frame coordinates given by phase and the leaf of the isochron foliation, the system has a skew-product structure, i.e. the equation for the phase decouples. However, it is a major challenge to find the isochron foliation, and since it relies on knowledge of the limit cycle it can only be found in special cases or numerically. There are now a number of complementary techniques that tackle this latter challenge, and in particular we refer the reader to the work of Guillamon and Huguet [9] (using Lie symmetries) and Osinga and Moehlis [10] (exploiting numerical continuation). More recent work by Mauroy and Mezic [11] is especially appealing as it uses a simple forward integration algorithm, as illustrated in Fig. 1 for a Stuart-Landau oscillator. However, it is more common to side-step the need for constructing global isochrons by restricting attention to a small neighbourhood of the limit cycle, where the dynamics can simply be recast in the reduced form θ̇ = 1, where θ is the phase around the cycle. This reduction to a phase description gives a nice simple dynamical system, albeit one that cannot describe the evolution of trajectories in phase-space that are far away from the limit cycle.

Fig. 1 Isochrons of a Stuart-Landau oscillator model: ẋ = λx/2 − (λc/2 + ω)y − λ(x² + y²)(x − cy)/2, ẏ = (λc/2 + ω)x + λy/2 − λ(x² + y²)(cx + y)/2. The black curve represents the periodic orbit of the system, which is simply the unit circle for this model. The blue curves are the isochrons obtained using the (forward) approach of Mauroy and Mezic [11]. The green dots are analytically obtained isochronal points [15]. Parameter values are λ = 2.0, c = 1.0, and ω = 1.0.

However, the phase reduction formalism is useful in quantifying how a system (on or close to a cycle) responds to weak forcing, via the construction of the infinitesimal phase response curve (iPRC). For a given high dimensional conductance based model this can be solved for numerically, though for some normal form descriptions closed form solutions are also known [12]. The iPRC at a point on cycle is equal to the gradient of the (isochronal) phase at that point. This approach forms the basis for constructing models of weakly interacting oscillators, where the external forcing is pictured as a function of the phase of a firing neuron. This has led to a great deal of work on phase-locking and central pattern generation in neural circuitry (see, for example [13]). Note that the work in [9] goes beyond the notion of the iPRC and introduces infinitesimal phase response surfaces (allowing evaluation of phase advancement even when the stimulus is off cycle); see also the work in [14] on non-infinitesimal PRCs.
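To illustrate the forward approach of [11] for the Stuart-Landau model of Fig. 1, the latent phase of a point can be estimated by integrating forward until the trajectory has collapsed onto the cycle and then subtracting the elapsed on-cycle phase. The following Python sketch (an illustration only, not the implementation of [11]; a hand-rolled RK4 integrator and the step sizes are our own choices) compares this estimate with the analytically known latent phase Θ(x, y) = arg(x + iy) − c ln|x + iy| for this model [15].

```python
import math

# Stuart-Landau vector field from the Fig. 1 caption,
# with lambda = 2.0, c = 1.0, omega = 1.0 (limit cycle = unit circle).
LAM, C, OM = 2.0, 1.0, 1.0

def f(x, y):
    r2 = x * x + y * y
    dx = LAM * x / 2 - (LAM * C / 2 + OM) * y - LAM * r2 * (x - C * y) / 2
    dy = (LAM * C / 2 + OM) * x + LAM * y / 2 - LAM * r2 * (C * x + y) / 2
    return dx, dy

def rk4_step(x, y, h):
    # one classical Runge-Kutta step of size h
    k1 = f(x, y)
    k2 = f(x + h * k1[0] / 2, y + h * k1[1] / 2)
    k3 = f(x + h * k2[0] / 2, y + h * k2[1] / 2)
    k4 = f(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def latent_phase(x0, y0, n_periods=10, h=0.005):
    """Forward-integration estimate of the latent phase: flow the point onto
    the cycle, then subtract the phase elapsed on cycle over the same time."""
    n = int(round(n_periods * 2 * math.pi / (OM * h)))
    x, y = x0, y0
    for _ in range(n):
        x, y = rk4_step(x, y, h)
    return (math.atan2(y, x) - OM * n * h) % (2 * math.pi)

# Closed-form latent phase for this model [15]: Theta = arg(x+iy) - c*ln|x+iy|
x0, y0 = 0.5, 0.5
analytic = (math.atan2(y0, x0) - C * math.log(math.hypot(x0, y0))) % (2 * math.pi)
numeric = latent_phase(x0, y0)
```

Since λ = 2 gives rapid contraction onto the cycle, a handful of periods of forward integration already recovers the analytic latent phase to high accuracy.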
The assumption that phase alone is enough to capture the essentials of neural response is one made more for mathematical convenience than being physiologically motivated. Indeed, for the popular type I Morris-Lecar (ML) firing model with standard parameters, direct numerical simulations with pulsatile forcing show responses that cannot be explained solely with a phase model [16]. The failure of a phase description is in itself no surprise and underlies why the community emphasises the use of the word weakly in the phrase "weakly connected neural networks". Indeed, there are a number of potential pitfalls when applying phase reduction techniques to a system that is not in a weakly forced regime. The typical construction of the phase response curve uses only linear information about the isochrons, and non-linear effects will come into play the further we move away from the limit cycle. This problem can be diminished by taking higher order approximations to the isochrons and using this information in the construction of a higher order PRC [17]. Even with perfect information about the isochrons, the phase reduction still assumes persistence of the limit cycle and instantaneous relaxation back to cycle. However, the presence of nearby invariant phase-space structures, such as (unstable) fixed points and invariant manifolds, may result in trajectories spending long periods of time away from the limit cycle. Moreover, strong forcing will necessarily take one away from the neighbourhood of a cycle where a phase description is expected to hold. Thus, developing a reduced description that captures some notion of distance from the cycle is a key component of any theory of forced limit cycle oscillators. The development of phase-amplitude models that better characterise the response of popular high dimensional single neuron models is precisely the topic of this paper.
Given that it is a major challenge to construct an isochronal foliation, we use non-isochronal phase-amplitude coordinates as a practical method for obtaining a more accurate description of neural systems. Recently, Medvedev [18] has used this approach to understand in more detail the synchronisation of linearly coupled stochastic limit cycle oscillators.
In Sect. 2, we consider a general coordinate transformation, which recasts the dynamics of a system in terms of phase-amplitude coordinates. This approach is directly taken from the classical theory for analysing periodic orbits of ODEs, originally considered for planar systems in [19], and for general systems in [20]. We advocate it here as one way to move beyond a purely phase-centric perspective. We illustrate the transformation by applying it to a range of popular neuron models. In Sect. 3, we consider how inputs to the neuron are transformed under these coordinate transformations and derive the evolution equations for the forced phase-amplitude system. This reduces to the standard phase description in the appropriate limit. Importantly, we show that the behaviour of the phase-amplitude system is much more able to capture that of the original single neuron model from which it is derived. Focusing on pulsatile forcing, we explore the conditions for neural oscillator models to exhibit shear induced chaos [16]. Finally in Sect. 4, we discuss the relevance of this work to developing a theory of network dynamics that can improve upon the standard weak coupling approach.

Phase-Amplitude Coordinates
Throughout this paper, we study the dynamics prescribed by the system ẋ = f(x), x ∈ R^n, with solutions x = x(t) that satisfy x(0) = x₀ ∈ R^n. We will assume that the system admits an attracting hyperbolic periodic orbit u(t) (namely one with a single zero Floquet exponent, the others having negative real part), with period Δ, such that u(t) = u(t + Δ). A phase θ ∈ [0, Δ) is naturally defined from θ(u(t)) = t mod Δ. It has long been known in the dynamical systems community how to construct a coordinate system based on this notion of phase as well as a distance from cycle; see [20] for a discussion. In fact, Ermentrout and Kopell [21] made good use of this approach to derive the phase-interaction function for networks of weakly connected limit-cycle oscillators in the limit of infinitely fast attraction to cycle. However, this assumption is particularly extreme and unlikely to hold for a broad class of single neuron models. Thus, it is interesting to return to the full phase-amplitude description. In essence, the transformation to these coordinates involves setting up a moving orthonormal system around the limit cycle. One axis of this system is chosen to be in the direction of the tangent vector along the orbit, and the remaining axes are chosen to be orthogonal to it. We introduce the normalised tangent vector ξ as

ξ(θ) = f(u(θ))/|f(u(θ))|. (1)

The remaining coordinate axes are conveniently grouped together as the columns of an n × (n − 1) matrix ζ. In this case, we can write an arbitrary point x as

x = u(θ) + ζ(θ)ρ, ρ ∈ R^{n−1}.

Here, |ρ| represents the Euclidean distance from the limit cycle. A caricature, in R², of the coordinate system along an orbit segment is shown in Fig. 2. Through the use of the variable ρ, we can consider points away from the periodic orbit.

Fig. 2 Demonstration of the moving orthonormal coordinate system along an orbit segment. As t evolves from t₀ to t₁, the coordinates vary smoothly. In this planar example, ζ always points to the outside of the orbit.
Rather than being isochronal, lines of constant θ are simply straight lines emanating from a point on the orbit in the direction of the normal. The technical details of specifying the orthonormal coordinates forming ζ are discussed in Appendix A. Upon projecting the dynamics onto the moving orthonormal system, we obtain the dynamics of the transformed system:

θ̇ = 1 + f₁(θ, ρ),
ρ̇ = A(θ)ρ + f₂(θ, ρ),

where f₁ and f₂ collect the remaining (nonlinear in ρ) terms of the transformed vector field (given explicitly in Appendix A), A(θ) = ζᵀ(θ)[Df(u(θ))ζ(θ) − dζ(θ)/dθ], and Df is the Jacobian of the vector field f evaluated along the periodic orbit u.
The derivation of this system may be found in Appendix A. It is straightforward to show that f₁(θ, ρ) → 0 as |ρ| → 0, that f₂(θ, 0) = 0 and that ∂f₂(θ, 0)/∂ρ = 0. In the above, f₁ captures the shear present in the system, that is, whether the speed of θ increases or decreases depending on the distance from cycle. A precise definition of shear is given in [22]. Additionally, A(θ) describes the θ-dependent rate of attraction to, or repulsion from, the cycle. It is pertinent to consider where this coordinate transformation breaks down, that is, where the determinant of the Jacobian of the transformation, K = det[∂x/∂θ ∂x/∂ρ], vanishes. This determinant never vanishes on-cycle (where ρ = 0), but may do so for some |ρ| = k > 0. This sets an upper bound on how far away from the limit cycle we can describe the system using these phase-amplitude coordinates. In Fig. 3, we plot the curve along which the transformation breaks down for the ML model. We observe that, for some values of θ, k is relatively small. The breakdown occurs where lines of constant θ cross, so that the transformation ceases to be invertible; these values of θ correspond to points at which the orbit has high curvature. We note that this issue is less problematic in higher dimensional models.
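As a concrete planar illustration of the above construction, the following Python sketch builds the moving orthonormal frame (ξ, ζ) along a limit cycle and evaluates A(θ) = ζᵀ(θ)[Df(u(θ))ζ(θ) − dζ(θ)/dθ] by finite differences. We use the Stuart-Landau model of Fig. 1 because its cycle is the unit circle and the attraction rate is known exactly (A(θ) ≡ −λ), which provides a check; the finite-difference step sizes are illustrative choices.

```python
import math

LAM, C, OM = 2.0, 1.0, 1.0  # Stuart-Landau parameters; the cycle is the unit circle

def f(x, y):
    r2 = x * x + y * y
    return (LAM * x / 2 - (LAM * C / 2 + OM) * y - LAM * r2 * (x - C * y) / 2,
            (LAM * C / 2 + OM) * x + LAM * y / 2 - LAM * r2 * (C * x + y) / 2)

def jacobian(x, y, h=1e-6):
    # Df by central finite differences
    fxp, fxm = f(x + h, y), f(x - h, y)
    fyp, fym = f(x, y + h), f(x, y - h)
    return [[(fxp[0] - fxm[0]) / (2 * h), (fyp[0] - fym[0]) / (2 * h)],
            [(fxp[1] - fxm[1]) / (2 * h), (fyp[1] - fym[1]) / (2 * h)]]

def frame(theta):
    """Moving orthonormal frame on u(theta) = (cos(om*theta), sin(om*theta)):
    xi is the normalised tangent f/|f|; zeta is xi rotated by -90 degrees,
    which for this cycle is the outward normal."""
    u = (math.cos(OM * theta), math.sin(OM * theta))
    fx, fy = f(*u)
    nrm = math.hypot(fx, fy)
    xi = (fx / nrm, fy / nrm)
    zeta = (xi[1], -xi[0])
    return u, xi, zeta

def A(theta, dth=1e-5):
    """A(theta) = zeta^T (Df(u) zeta - dzeta/dtheta), dzeta/dtheta by central differences."""
    u, _, z = frame(theta)
    _, _, zp = frame(theta + dth)
    _, _, zm = frame(theta - dth)
    dz = ((zp[0] - zm[0]) / (2 * dth), (zp[1] - zm[1]) / (2 * dth))
    J = jacobian(*u)
    Jz = (J[0][0] * z[0] + J[0][1] * z[1], J[1][0] * z[0] + J[1][1] * z[1])
    return z[0] * (Jz[0] - dz[0]) + z[1] * (Jz[1] - dz[1])
```

For a model such as ML, u(θ) would instead come from a periodic boundary value solver, with the same frame and finite-difference constructions applied along the computed orbit.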
If we now consider the driven system

ẋ = f(x) + εg(x, t), (7)

where ε is not necessarily small, we may apply the same transformation as above to obtain the dynamics in (θ, ρ) coordinates, where θ ∈ [0, Δ) and ρ ∈ R^{n−1}, as

θ̇ = 1 + f₁(θ, ρ) + εh(θ, ρ) · g(u(θ) + ζ(θ)ρ, t), (8)
ρ̇ = A(θ)ρ + f₂(θ, ρ) + εB(θ, ρ)g(u(θ) + ζ(θ)ρ, t). (9)

Here, h and B describe the effect that the perturbations have in terms of θ and ρ; their explicit forms, which involve the n × n identity matrix Iₙ, together with the details of the derivation, are given in Appendix A. For planar models, B = I₂. To demonstrate the application of the above coordinate transformation, we now consider some popular single neuron models.

A 2D Conductance Based Model
The ML model was originally developed to describe the voltage dynamics of the barnacle giant muscle fibre [23], and is now a popular modelling choice in computational neuroscience [7]. It is written as a pair of coupled non-linear ODEs of the form

C v̇ = I(t) − g_L(v − V_L) − g_K w(v − V_K) − g_Ca m∞(v)(v − V_Ca), ẇ = φ(w∞(v) − w)/τ_w(v). (11)

Here, v is the membrane voltage, whilst w is a gating variable describing the fraction of membrane ion channels that are open at any time. The first equation expresses Kirchhoff's current law across the cell membrane, with I(t) representing a stimulus in the form of an injected current. The detailed form of the model is completed in Appendix B.1. The ML model has a very rich bifurcation structure. Roughly speaking, by varying a constant current I(t) ≡ I₀, one observes, in different parameter regions, dynamical regimes corresponding to sinks, limit cycles, and Hopf, saddle-node and homoclinic bifurcations, as well as combinations of the above. These scenarios are discussed in detail in [7] and [24]. As the ML model is planar, ρ is a scalar, as are the functions A and f₁,₂. This allows us to use the moving coordinate system to clearly visualise parts of phase space where trajectories are attracted towards the limit cycle, and parts in which they move away from it, as illustrated in Fig. 4. The functions f₁,₂ and A, evaluated at ρ = −0.1, are shown in Fig. 5. The evolution of θ proceeds at a mostly constant rate; however, we clearly observe portions of the trajectories where it is slowed, along which ρ̇ ≈ 0. In fact, this corresponds to where trajectories pass near to the saddle node and the dynamics stall. This occurs around θ = 17, and in Fig. 5 we see that both A(θ) and f₁(θ, ρ) are indeed close to 0 there. The reduced velocity of trajectories here highlights the importance of considering other phase space structures in forced systems, details that are missed in standard phase-only models. Forcing in the presence of such structures may give rise to complex and even chaotic behaviours, as we shall see in Sect. 3.
In the next example, we show how the same ideas carry over to higher dimensional models.

A 4D Conductance Based Model
The Connor-Stevens (CS) model [25] is built upon the Hodgkin-Huxley formalism and comprises a fast Na⁺ current, a delayed K⁺ current, a leak current and a transient K⁺ current, termed the A-current. The full CS model consists of 6 equations: the membrane potential, the original Hodgkin-Huxley gating variables, and an activating and an inactivating gating variable for the A-current.

Fig. 5 We clearly see the difference in the order of magnitude between f₁ and f₂ for small ρ. Note that, although the average of A over one period is negative, it is positive for a non-trivial interval of θ. This corresponds to movement close to the stable manifold of the saddle node. Parameter values as in Appendix B.1.

Using the method of equivalent potentials [26], we may reduce the dimensionality of the system to include only 4 variables; the details of the reduced CS model are given in Appendix B.2. The solutions to the reduced CS model under the coordinate transformation may be seen in Fig. 6, whilst in Fig. 7 we show how this solution looks in the original coordinates. As for the ML model, θ evolves at an approximately constant rate throughout, though this evolution is sped up close to θ = Δ. The trajectories of the vector ρ are more complicated, but note that there is regularity in the pattern exhibited, and that this occurs with approximately the same period as the underlying limit cycle. The damping of the amplitude of oscillations in ρ over successive periods represents the overall attraction to the limit cycle, whilst the regular behaviour of ρ represents the specific relaxation to cycle, as shown in Fig. 7. Additional file 1 shows a movie of the trajectory in (v, u, b) coordinates with the moving orthonormal system superimposed, as well as the solution in ρ for comparison.

Pulsatile Forcing of Phase-Amplitude Oscillators
We now consider a system with time-dependent forcing, given by (7) with

g(x, t) = (1, 0, …, 0)ᵀ ∑_{n∈Z} δ(t − nT), (14)

where δ is the Dirac δ-function. This describes T-periodic kicks to the voltage variable. Even such a simple forcing paradigm can give rise to rich dynamics [16]. For the periodically kicked ML model, shear forces can lead to chaotic dynamics as folds and horseshoes accumulate under the forcing. This means that the response of the neuron is extremely sensitive to the phase at which the kicks occur. In terms of neural response, this means that the neuron is unreliable [27]. The behaviour of oscillators under such periodic pulsatile forcing is the subject of a number of studies; see, e.g. [27][28][29][30]. Of particular relevance here is [27], in which a qualitative description of the mechanisms that bring about shear in such models is supplemented by direct numerical simulations to detect the presence of chaotic solutions. For the ML model in a parameter region close to the homoclinic regime, kicks can cause trajectories to pass near the saddle-node, and folds may occur as a result [16].
Here, we would like to compare full planar neural models to the simple model studied in [27]:

θ̇ = 1 + σρ, ρ̇ = −λρ + εP(θ) ∑_{n∈Z} δ(t − nT). (15)

This system exhibits dynamical shear which, under certain conditions, can lead to chaotic dynamics. The shear parameter σ dictates how much trajectories are 'sped up' or 'slowed down' depending on their distance from the limit cycle, whilst λ is the rate of attraction back to the limit cycle, which is independent of θ. Supposing that the function P is smooth but non-constant, trajectories will be taken a variable distance from the cycle upon the application of a kick. When kicks are repeated, this geometric mechanism can lead to repeated stretching and folding of phase space. It is clear that the larger σ is in (15), the more shear is present, and the more likely we are to observe the folding effect. In a similar way, smaller values of λ mean that the shear has longer to act upon trajectories, and again result in a greater likelihood of chaos. Finally, to observe a chaotic response, we must ensure that the shear forces have sufficient time to act, meaning that T, the time between kicks, must not be too small.

Fig. 8 Isochrons of the linear model (15) are simply straight lines with slope −λ/σ. The thin grey line at ρ = 0 represents the limit cycle, which is kicked at t = t₀ by P(θ) = sin(θ) with strength ε = 1 to the solid curve. After this, the orbits are allowed to evolve under the flow generated by the continuous part of the system. The dashed and dotted curves represent the image of the kicked solid curve under this flow, at times t₁ and t₂, respectively. The green marker shows how one point, x(t₀), evolves under the flow, first to x(t₁) and then to x(t₂), following the isochron as it relaxes back to the limit cycle. The effect of the shear forces and the subsequent folding, caricatured by the blue arrows, can clearly be seen.

This stretching and folding action can clearly lead to the formation of Smale horseshoes, which are well known to give rise to a type of chaotic behaviour.
However, horseshoes may co-exist with sinks, meaning the resulting chaotic dynamics would be transient. Wang and Young proved that, under appropriate conditions, there is a set of kick periods T of positive Lebesgue measure for which the system experiences a stronger form of sustained chaotic behaviour, characterised by a positive Lyapunov exponent for almost all initial conditions and the existence of a 'strange attractor'; see, e.g. [28][29][30].
To gain a deeper insight into the phenomenon of shear induced chaos, it is pertinent to study the isochrons of the limit cycle for the linear model (15), which are simply lines with slope −λ/σ. In Fig. 8, we depict the isochrons and the stretch-and-fold action of shear. The thin grey line at ρ = 0 is kicked, at t = t₀, to the bold solid curve, where P(θ) = sin(θ), as studied in [16], and this curve is allowed to evolve under the dynamics with no further kicks, through the dashed curve at t = t₁ and ultimately to the dotted curve at t = t₂; these may be considered as evolutions of the solid curve by integer multiples of the natural period of the oscillator. Each point of the kicked curve traverses the isochron it lands on, from t₀ until t₂. The green marker shows an example of one such point evolving along its associated isochron. The folding effect is clear in the figure, and is further illustrated in the video in Additional file 2. This simple model with a harmonic form for P(θ) provides insight into how strange attractors can be formed. Kicks along the isochrons, or ones that map isochrons to one another, will not produce strange attractors, but merely phase-shifts. What causes the stretching and folding is the variation in how far points are moved, as measured in the direction transverse to the isochrons. For the linear system (15), variation in this sense is generated by any non-constant P(θ); the larger the ratio σε/λ, the larger the variation (see [16] for a recent discussion).
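The claim that the isochrons of (15) are lines of slope −λ/σ is easy to verify numerically: the kick-free flow of (15) is available in closed form, so the asymptotic phase of any initial point can be computed by flowing forward and subtracting the elapsed on-cycle phase. The short Python sketch below (with illustrative values of σ and λ) recovers the closed-form phase Θ(θ, ρ) = θ + σρ/λ, whose level sets are precisely lines of slope dρ/dθ = −λ/σ.

```python
import math

SIGMA, LAM = 1.0, 0.5  # shear and contraction rates (illustrative values)

def flow(theta, rho, t):
    """Exact kick-free flow of (15): theta-dot = 1 + sigma*rho, rho-dot = -lam*rho."""
    decay = math.exp(-LAM * t)
    return theta + t + (SIGMA / LAM) * rho * (1.0 - decay), rho * decay

def asymptotic_phase(theta, rho):
    # Let the amplitude transient decay, then subtract the elapsed on-cycle phase.
    t = 50.0 / LAM
    theta_t, _ = flow(theta, rho, t)
    return theta_t - t

# The forward estimate matches the closed form Theta = theta + sigma*rho/lam,
# so two points on a line of slope -lam/sigma share the same asymptotic phase.
```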
The formation of chaos in the ML model is shown in Fig. 9. Here, we plot the response to periodic pulsatile forcing, given by (14), in the (v, w) coordinate system. This clearly illustrates a folding of phase space around the limit-cycle, and is further portrayed in the video in Additional file 3. We now show how this can be understood using phase-amplitude coordinates.
Compared to the phenomenological system (15), models written in phase-amplitude coordinates as (8)-(9) have two main differences. The intrinsic dynamics (without kicks) are non-linear, and the kick terms appear in the equations for both θ̇ and ρ̇ (not just ρ̇). Simulations of (8)-(9) for both the FitzHugh-Nagumo (FHN) and ML models, with ε = 0.1, show that replacing f₁(θ, ρ) by σρ, dropping f₂(θ, ρ) (which is quadratic in ρ), and setting A(θ) = −λ does not lead to any significant qualitative change in behaviour (for a wide range of σ, λ > 0). We therefore conclude that, at least when the kick amplitude ε is not too large, it is more important to focus on the form of the forcing in the phase-amplitude coordinates. In what follows, we are interested in discovering the effects of different functional forms of the forcing term multiplying the δ-function, keeping other factors fixed. As examples, we choose those forcing terms given by transforming the FHN and the ML models into phase-amplitude coordinates. To find these functions, we first find the attracting limit cycle solution of the ML model (11) and the FHN model (52) using a periodic boundary value problem solver, and set up the orthonormal coordinate system around this limit cycle. Once the coordinate system is established, we evaluate the functions h(θ, ρ) and B(θ, ρ) (that appear in Eqs. (8) and (9)). For planar systems, we have simply that B(θ, ρ) = I₂.
Using the forcing term (14), we are only considering perturbations to the voltage component of our system, and thus only the first component of h and the first column of B will make a non-trivial contribution to the dynamics. We define P₁ as the first component of h and P₂ as the first component of ζ. We wish to force each system at the same ratio of the natural frequency of the underlying periodic orbit. To ease comparison between the system with the ML forcing terms and the FHN forcing terms, we rescale θ → θ/Δ so that θ ∈ [0, 1) in what follows. Implementing the above choices leads to

θ̇ = 1 + σρ + εP₁(θ, ρ) ∑_{n∈Z} δ(t − nT), ρ̇ = −λρ + εP₂(θ) ∑_{n∈Z} δ(t − nT). (16)

It is important to emphasise that P₁,₂ are determined by the underlying single neuron model (unlike in the toy model (15)). As emphasised in [31], one must take care in the treatment of the state dependent 'jumps' caused by the δ-functions in (16), to accommodate the discontinuous nature of θ and ρ at the time of the kick. To solve (16), we approximate δ(t) with a normalised square pulse δ_τ(t) of the form

δ_τ(t) = 1/τ for 0 ≤ t ≤ τ, and δ_τ(t) = 0 otherwise, (17)

where τ ≪ 1. This means that for (n − 1)T + τ < t ≤ nT, n ∈ Z, the dynamics are governed by the linear system (θ̇, ρ̇) = (1 + σρ, −λρ). This can be integrated to obtain the state of the system just before the arrival of the next kick, (θₙ, ρₙ) ≡ (θ(nT), ρ(nT)), in the form

θₙ = θₙ₋₁⁺ + T + (σ/λ)(1 − e^{−λT})ρₙ₋₁⁺, (18)
ρₙ = e^{−λT} ρₙ₋₁⁺, (19)

where (θₙ₋₁⁺, ρₙ₋₁⁺) denotes the state just after the preceding kick. In the interval nT < t < nT + τ, and using (17), we now need to solve the system of non-linear ODEs

θ̇ = 1 + σρ + (ε/τ)P₁(θ, ρ), ρ̇ = −λρ + (ε/τ)P₂(θ).

Rescaling time as t = nT + τs, with 0 ≤ s ≤ 1, and writing the solution (θ, ρ) as a regular perturbation expansion in powers of τ as (θ(s), ρ(s)) = (θ₀(s) + τθ₁(s), ρ₀(s) + τρ₁(s)) + ⋯, we find, after collecting terms of leading order in τ, that the pair (θ₀(s), ρ₀(s)) is governed by

dθ₀/ds = εP₁(θ₀, ρ₀), (20)
dρ₀/ds = εP₂(θ₀), (21)

with initial conditions (θ₀(0), ρ₀(0)) = (θₙ, ρₙ).
The solution (θ₀(1), ρ₀(1)) ≡ (θₙ⁺, ρₙ⁺) (obtained numerically) can then be taken as initial data in (18)-(19) to form the stroboscopic map

θₙ₊₁ = θₙ⁺ + T + (σ/λ)(1 − e^{−λT})ρₙ⁺, (22)
ρₙ₊₁ = e^{−λT} ρₙ⁺. (23)

Note that this has been constructed using a matched asymptotic expansion, using (17), and is valid in the limit τ → 0. For weak forcing, where ε ≪ 1, P₁,₂ vary slowly through the kick and can be approximated by their values at (θₙ, ρₙ), so that to O(ε²)

θₙ₊₁ = θₙ + εP₁(θₙ, ρₙ) + T + (σ/λ)(1 − e^{−λT})[ρₙ + εP₂(θₙ)], (24)
ρₙ₊₁ = e^{−λT}[ρₙ + εP₂(θₙ)]. (25)

Although this explicit map is convenient for numerical simulations, we prefer to work with the full stroboscopic map (22)-(23), which is particularly useful for comparing and contrasting the behaviour of different planar single neuron models with arbitrary kick strength. As an indication of the presence of chaos in the dynamics resulting from this system, we evaluate the largest Lyapunov exponent of the map (22)-(23) by numerically evolving a tangent vector and computing its rate of growth (or contraction); see, e.g. [32] for details. In Fig. 10, we compare the functions P₁,₂ for both the FHN and the ML models. We note that P₂ for the FHN model is near 0 for a large set of θ, whilst the same is true of P₁ for the ML model. This means that kicks in the FHN model will tend primarily to cause phase shifts, whilst the same kicks in the ML model will primarily cause shifts in amplitude.
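To make the tangent-vector computation concrete, the following Python sketch evaluates the largest Lyapunov exponent for the stroboscopic map of the kicked linear model, using the toy kick profile P₁ = 0, P₂(θ) = sin(2πθ) with θ ∈ [0, 1) in place of the model-derived functions (an illustrative stand-in, not the P₁,₂ of the FHN or ML models; the parameter values below are likewise illustrative). One step of the map composes the kick with the closed-form inter-kick flow; the one-step Jacobian is applied to a tangent vector, which is renormalised at each iterate.

```python
import math

def lyap(sigma, lam, T, eps, n_iter=100_000, theta=0.1234, rho=0.0):
    """Largest Lyapunov exponent of the stroboscopic map: kick
    rho -> rho + eps*sin(2*pi*theta), then flow for time T under
    theta-dot = 1 + sigma*rho, rho-dot = -lam*rho (theta taken mod 1)."""
    E = math.exp(-lam * T)
    a = sigma * (1.0 - E) / lam
    v = (1.0, 0.0)  # tangent vector
    acc = 0.0
    for _ in range(n_iter):
        c = 2 * math.pi * eps * math.cos(2 * math.pi * theta)  # d(kick)/d(theta)
        rho_plus = rho + eps * math.sin(2 * math.pi * theta)
        theta = (theta + T + a * rho_plus) % 1.0
        rho = E * rho_plus
        # Jacobian of one full kick-plus-flow step applied to v
        v = (v[0] * (1 + a * c) + v[1] * a, v[0] * E * c + v[1] * E)
        nrm = math.hypot(*v)
        acc += math.log(nrm)
        v = (v[0] / nrm, v[1] / nrm)
    return acc / n_iter

le_weak = lyap(sigma=0.1, lam=1.0, T=1.0, eps=1.0)     # phase-locked regime
le_strong = lyap(sigma=3.83, lam=0.1, T=10.0, eps=1.0)  # strong shear, slow contraction
```

With weak shear and a near-resonant kick period the map settles onto a phase-locked sink and the exponent is negative, whilst strong shear with slow contraction yields a positive exponent, mirroring the FHN/ML dichotomy discussed in the text.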
We plot in the top row of Fig. 11 the pair (θₙ, θₙ₊₁) for (24)-(25), for the FHN and ML models. For the FHN model, we find that the system has a largest Lyapunov exponent of −0.0515 < 0. For the ML model, the largest Lyapunov exponent is 0.6738 > 0. This implies that differences in the functional forms of P₁,₂ can help to explain the generation of chaos. Now that we know the relative contribution of kicks in v to kicks in (θ, ρ), it is also useful to know where kicks actually occur in terms of θ, as this will determine the contribution of a train of kicks to the (θ, ρ) dynamics. In Figs. 11c and d, we plot the distribution of kicks as a function of θ. For the ML model, we observe that the kicks are distributed over all phases, while for the FHN model there is a grouping of kicks around the region where P₂ is roughly zero. This means that kicks will not be felt as strongly in the amplitude direction.

Fig. 11 After transients, we observe a 1:1 phase-locked state for the FHN model. For a phase-locked state, small perturbations will ultimately decay, as the perturbed trajectories also end up at the phase-locked state after some transient behaviour. This results in a negative largest Lyapunov exponent of −0.0515. We note the sharply peaked distribution of kick phases, which is to be expected for discrete-time systems possessing a negative largest Lyapunov exponent, since such systems tend to have sinks in this case. The phase-locked state here occurs where P₂ is small, suggesting that trajectories stay close to the limit cycle. Since kicks do not move trajectories away from cycle, there is no possibility of folding, and hence no chaotic behaviour. For the ML model, we observe chaotic dynamics around a strange attractor, where small perturbations can grow, leading to a positive largest Lyapunov exponent of 0.6738. This time, the kicks are distributed fairly uniformly across θ, and so some kicks will take trajectories away from the limit cycle, thus leading to shear-induced folding and chaotic behaviour.
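The connection between the sign of the Lyapunov exponent and the distribution of kick phases can also be reproduced in the toy setting of the kicked linear model (15), again with the illustrative profile P(θ) = sin(2πθ), θ ∈ [0, 1), rather than the model-derived P₁,₂: in a phase-locked regime the kick phases collapse onto a sharply peaked distribution, whereas in a sheared, chaotic regime they spread over essentially all of [0, 1).

```python
import math

def kick_phases(sigma, lam, T, eps, n=20_000, burn=1_000):
    """Phases just before each kick for the kicked linear model, iterated
    via the exact between-kick flow of theta-dot = 1 + sigma*rho,
    rho-dot = -lam*rho, with kicks rho -> rho + eps*sin(2*pi*theta)."""
    E = math.exp(-lam * T)
    a = sigma * (1.0 - E) / lam
    theta, rho = 0.1234, 0.0
    phases = []
    for i in range(n + burn):
        rho_plus = rho + eps * math.sin(2 * math.pi * theta)
        theta = (theta + T + a * rho_plus) % 1.0
        rho = E * rho_plus
        if i >= burn:  # discard transient iterates
            phases.append(theta)
    return phases

locked = kick_phases(sigma=0.1, lam=1.0, T=1.0, eps=1.0)    # sharply peaked (cf. FHN)
chaotic = kick_phases(sigma=3.83, lam=0.1, T=10.0, eps=1.0)  # spread out (cf. ML)
```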

Discussion
In this paper, we have used the notion of a moving orthonormal coordinate system around a limit cycle to study dynamics in its neighbourhood. This phase-amplitude coordinate system can be constructed for any given ODE system supporting a limit cycle. A clear advantage of the transformed description over the original one is that it allows us to gain insight into the effect of time dependent perturbations, using the notion of shear, as we have illustrated by performing case studies of popular neural models, in two and higher dimensions. Whilst this coordinate transformation does not result in any reduction of the dimensionality of the system, as is the case with classical phase reduction techniques, it opens up avenues for moving away from the weak coupling limit, where ε → 0. Importantly, it emphasises the role of the two functions P₁(θ, ρ) and P₂(θ) that provide more information about inputs to the system than the iPRC alone. It has been demonstrated that moderately small perturbations can exert remarkable influence on dynamics in the presence of other invariant structures [16], an effect that cannot be captured by a phase-only description. In addition, small perturbations can accumulate if the timescale of the perturbation is shorter than the timescale of attraction back to the limit cycle. This should be given particular consideration in the analysis of neural systems, where oscillators may be connected to thousands of other units, so that small inputs can quickly accumulate.
One natural extension of this work is to move beyond the theory of weakly coupled oscillators to develop a framework for describing neural systems as networks of phase-amplitude units. This has previously been considered for the case of weakly coupled, weakly dissipative networks of non-linear planar oscillators (modelled by small dissipative perturbations of a Hamiltonian oscillator) [33][34][35]. It would be interesting to develop these ideas and obtain network descriptions of the following type:

θ̇ᵢ = 1 + f₁(θᵢ, ρᵢ) + ε ∑ⱼ H₁(θᵢ, θⱼ, ρᵢ, ρⱼ),
ρ̇ᵢ = A(θᵢ)ρᵢ + f₂(θᵢ, ρᵢ) + ε ∑ⱼ H₂(θᵢ, θⱼ, ρᵢ, ρⱼ),

with an appropriate identification of the interaction functions H₁,₂ in terms of the biological interaction between neurons and the single neuron functions P₁,₂. Such phase-amplitude network models are ideally suited to describing the behaviour of the mean-field signal in networks of strongly gap-junction coupled ML neurons [36,37], which is known to vary because individual neurons make transitions between cycles of different amplitudes. Moreover, in the same network, weakly coupled oscillator theory fails to explain how the synchronous state can stabilise with increasing coupling strength (predicting that it is always unstable), as observed numerically. All of the above are topics of ongoing research and will be reported upon elsewhere.

B.3 FitzHugh-Nagumo Model
The FHN model is a phenomenological model of spike generation, comprising two variables. The first represents the membrane potential and includes a cubic nonlinearity, whilst the second is a gating variable, similar to w in the ML model, which may be thought of as a recovery variable. The system takes the standard FHN form

v̇ = v(v − a)(1 − v) − w + I, ẇ = μ(v − bw), (52)

where we use the following parameter values: μ = 0.05, a = 0.9, I = 1.1, and b = 0.5.