Numerical Bifurcation Theory for High-Dimensional Neural Models
Carlo R. Laing
https://doi.org/10.1186/2190-8567-4-13
© Laing; licensee Springer. 2014
Received: 10 April 2014
Accepted: 13 June 2014
Published: 25 July 2014
Abstract
Numerical bifurcation theory involves finding and then following certain types of solutions of differential equations as parameters are varied, and determining whether they undergo any bifurcations (qualitative changes in behaviour). The primary technique for doing this is numerical continuation, where the solution of interest satisfies a parametrised set of algebraic equations, and branches of solutions are followed as the parameter is varied. An effective way to do this is with pseudo-arclength continuation. We give an introduction to pseudo-arclength continuation and then demonstrate its use in investigating the behaviour of a number of models from the field of computational neuroscience. The models we consider are high dimensional, as they result from the discretisation of neural field models—nonlocal differential equations used to model macroscopic pattern formation in the cortex. We consider both stationary and moving patterns in one spatial dimension, and then translating patterns in two spatial dimensions. A variety of results from the literature are discussed, and a number of extensions of the technique are given.
1 Introduction
We consider models of the form$\frac{du}{dt}=g(u;\mu ),$(1)where u may be a finite-dimensional vector or a function (of space and time, for example), μ is a vector of parameters, and the derivative is with respect to time, t. An ambitious goal when studying a model of the form (1) is to completely understand the nature of all of its solutions, for all possible values of μ. Analytically solving (1) would give us this information, but for many functions g such a solution is impossible to find. Instead we must concentrate on finding qualitative information about the solutions of (1), and on how they change as the parameter μ is varied. Questions we would like to answer include (a) What do typical solutions do as $t\to \mathrm{\infty}$, i.e. what is the “steady state” behaviour of a neuron or neural system? (b) Are there “special” initial conditions or states that the system can be in for which the long time behaviour is different from that for a “typical” initial condition? (c) How do the answers to these questions depend on the values of μ? More particularly, can the dynamics of (1) change qualitatively if a parameter (for example, input current to a neuron) is changed? In order to try to answer these questions we often concentrate on certain types of solutions of (1). Examples include fixed points (i.e. values of u such that $g(u;\mu )=0$), periodic orbits (i.e. solutions which are periodic in time), and (in spatially dependent systems) patterned states such as plane and spiral waves. Solutions like these can be stable (i.e. all initial conditions in some neighbourhood of these solutions are attracted to them) or unstable (some nearby initial conditions leave a neighbourhood of them).
For many systems of interest, finding solutions of the type just mentioned, and their stability, can only be done numerically using a computer. Even the simplest case of finding all fixed points can be nontrivial, as there may be many or even an infinite number of them. When u is high dimensional, for example when (1) arises as the result of discretising a partial differential equation such as the cable equation on a dendrite or axon [1, 2], finding the linearisation about a fixed point, as needed to determine stability, may also be computationally challenging.
In the simplest case, that of finding fixed points of (1) and their dependence on μ, there are two main techniques. The first is integration in time of (1) from a given initial condition until a steady state is reached. Then μ is changed slightly, the process is repeated, and so on. This is conceptually simple, and many accurate integration algorithms exist, but it has several disadvantages:

There may be a long transient before a steady state is reached, requiring long time simulations.

Only stable fixed points can be found using this method.

For fixed μ there may be multiple stable fixed points, and which one is found depends on the value of the initial condition, in a way that may not be obvious.
The second technique for finding fixed points of (1) and their dependence on μ, which is the subject of this review, involves solving the algebraic equation $g(u;\mu )=0$ directly. The stability of a fixed point is then determined by the linearisation of g about it, involving partial derivatives of g, rather than by time integration of (1).
Numerical bifurcation theory involves (among other things) finding fixed points and periodic orbits of models such as (1) by solving a set of algebraic equations, determining the stability of these solutions, and following them as parameters are varied, on a computer. The field is well developed, with a number of books [3–6], book chapters [7], reports [8] and software packages available [9–14]. The point of this article is not to cover numerical bifurcation theory in general, but to demonstrate and review its use in the study of some models from the field of computational neuroscience, in particular those that are high dimensional, resulting from the discretisation of systems of nonlocal differential equations commonly employed as large-scale models of neural tissue, such as the Wilson–Cowan [15] and Amari [16] models. We start in Sect. 2 by explaining the pseudo-arclength continuation algorithm and show how it can be applied to a simple model. Section 3 considers both stationary and moving patterns in one spatial dimension, while Sect. 4 gives an example of a pattern in two spatial dimensions. We discuss a number of extensions in Sect. 5 and conclude in Sect. 6.
2 A Low-Dimensional Model
and the partial derivatives are evaluated at $({\mu}_{1}^{(i)},{u}_{1}^{(i)})$. We take ${N}_{N}$ Newton iterations and (assuming that (5) has converged) set $({\mu}_{1},{u}_{1})=({\mu}_{1}^{({N}_{N})},{u}_{1}^{({N}_{N})})$. As an initial condition we can take $({\mu}_{1}^{(0)},{u}_{1}^{(0)})=({\mu}_{0}+{\dot{\mu}}_{0}\mathrm{\Delta}s,{u}_{0}+{\dot{u}}_{0}\mathrm{\Delta}s)$, i.e. the point where the tangent line meets the dashed line in Fig. 2. This point can be regarded as the result of a linear prediction, and Newton’s method (5) regarded as a corrector of this prediction. The stability of the fixed point $({\mu}_{1},{u}_{1})$ depends on the sign of ${g}_{u}$ evaluated at this point, and this has already been calculated as the top left entry in the Jacobian J.
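The predictor-corrector loop just described can be written in a few lines. The following sketch is our own (it is not from the paper): it uses the assumed toy problem $g(u,\mu )=\mu -{u}^{2}$, whose fixed points lie on the parabola $\mu ={u}^{2}$ with a saddle-node at the origin:

```python
import numpy as np

def g(u, mu):
    # Toy problem with a saddle-node at (mu, u) = (0, 0):
    # fixed points lie on the parabola mu = u**2.
    return mu - u**2

def g_u(u, mu):
    return -2.0 * u

def g_mu(u, mu):
    return 1.0

def continue_branch(u, mu, du, dmu, ds=0.05, steps=80, newton_iters=8):
    """Pseudo-arclength continuation: a linear predictor along the unit
    tangent (du, dmu), then Newton corrections of the bordered system
    [g; arclength condition] = 0."""
    branch = [(mu, u)]
    for _ in range(steps):
        u1, mu1 = u + du * ds, mu + dmu * ds          # predictor
        for _ in range(newton_iters):                 # corrector
            F = np.array([g(u1, mu1),
                          (u1 - u) * du + (mu1 - mu) * dmu - ds])
            J = np.array([[g_u(u1, mu1), g_mu(u1, mu1)],
                          [du, dmu]])
            u1, mu1 = np.array([u1, mu1]) - np.linalg.solve(J, F)
        # New tangent: solving J t = (0, 1)^T makes t orthogonal to
        # (g_u, g_mu) and oriented along the previous tangent.
        t = np.linalg.solve(J, np.array([0.0, 1.0]))
        t /= np.linalg.norm(t)
        u, mu, du, dmu = u1, mu1, t[0], t[1]
        branch.append((mu, u))
    return branch
```

Starting on the branch $u=\sqrt{\mu}$ at $(\mu ,u)=(1,1)$ with the tangent pointing towards the fold, this iteration passes smoothly through the saddle-node and onto the branch $u=-\sqrt{\mu}$, which natural parameter continuation cannot do.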
This process can then be continued to find as many points on the curve as required. Note the following points:

Pseudo-arclength continuation follows a curve of solutions in parameter/state space and can follow such a curve through a saddle-node bifurcation, even though such bifurcations can be thought of as involving the annihilation of two solutions. This is its main advantage over natural parameter continuation, for example, which fails at such a point [4, 17].

Consider the structure of the equations being solved in (5). The first equation is $g(u,\mu )=0$, and the last is the pseudo-arclength condition. This structure will be repeated in following sections.

As presented, this method will find points in one direction along the curve given by $g(u,\mu )=0$. If points in the other direction are required, simply replace the tangent vector by its negative, i.e. $({\dot{u}}_{0},{\dot{\mu}}_{0})\mapsto (-{\dot{u}}_{0},-{\dot{\mu}}_{0})$ when calculating $({\mu}_{1},{u}_{1})$, and then continue as above.

A given problem may have fixed points that lie on a closed curve, as in Fig. 1, or on an unbounded curve.

There are a number of refinements that could be made to this algorithm to increase its efficiency. For example, one could terminate the Newton iterations once a solution has been found to within some accuracy, rather than after a fixed number of iterations [4]. One could also adapt the stepsize as the solution curve is traced out to avoid unnecessary iterations of Newton’s method [4, 17]. Another refinement is that if u and μ are of very different magnitudes it may be beneficial to scale one of them. One way to do this is to replace (3) by${\theta}^{2}({u}_{1}-{u}_{0}){\dot{u}}_{0}+({\mu}_{1}-{\mu}_{0}){\dot{\mu}}_{0}-\mathrm{\Delta}s=0,$(9)
where $0<\theta \ll 1$ if typical values of u are much larger than those of μ, and $1\ll \theta $ if the opposite is true.

The algorithm above involves two nested loops. The inner loop finds a point on the curve of solutions, and the outer one steps along the curve.
The interested reader is encouraged to reproduce Fig. 1 using the method outlined above, and then to explore further. (See software at http://www.massey.ac.nz/~crlaing/code.htm.)
3 One-Dimensional Models
We now consider several types of pattern that occur in neural field models in one spatial dimension. Such models are used to study macroscopic pattern formation in the cortex, and take the form of nonlocal differential equations. For more background on such models see [16, 18–22], and the recent review [23]. We first consider stationary patterns.
3.1 Stationary Patterns
is a sigmoidal function, where $\beta >0$ is a steepness parameter. The variable $u(x,t)$ is the neural field at position x and time t and represents the activity of a population of neurons at that point. The function $w(x-y)$ represents how neurons at position y affect those at position x, i.e. the network’s connectivity. Its evenness is a manifestation of the isotropy of the domain, i.e. that there is no preferred direction around the domain. The function f is referred to as the firing rate function, converting activity, u, to firing frequency, $f(u)$, and h is a firing threshold.
The first thing to note is that both (10) and (13) are invariant under translations, i.e. having found one solution, $u(x)$ of (13), any translate, $u(x+a)$, $a\in \mathbb{R}$, is also a solution [25]. We want only one member of this infinite family of solutions, so we need a way of selecting it. A simple way to do this is to consider only even functions, i.e. functions for which $u(x)=u(-x)$. Many steady states of equations like (10) are found to be even, but not all of them must be [26].
for $j=0,1,2,\dots ,N-1$. Note that if w is given exactly by a finite Fourier series, i.e. ${w}_{i}=0$ for $i>{N}_{F}$, then at steady state ${u}_{i}=0$ for $i>{N}_{F}$, and the truncation of (18) at $N-1={N}_{F}$ will not introduce any errors, as noted by a number of authors [27–29].
is the null vector of the $N\times (N+1)$ matrix $({F}_{\mathbf{v}}{F}_{h})$ where subscripts indicate partial derivatives (i.e. ${F}_{\mathbf{v}}$ is the $N\times N$ Jacobian of F with respect to v and ${F}_{h}$ is a column vector of derivatives with respect to h) and these derivatives are evaluated at $({\mathbf{v}}_{0},{h}_{0})$. Thus once the vector (21) has been found and normalised, it can be used in (20).
is the $(N+1)\times (N+1)$ Jacobian of the augmented system, and the partial derivatives are evaluated at $({\mathbf{v}}_{1}^{(i)},{h}_{1}^{(i)})$. As above, we take ${N}_{N}$ Newton iterations and (assuming that (22) has converged) set ${\mathbf{v}}_{1}={\mathbf{v}}_{1}^{({N}_{N})}$ and ${h}_{1}={h}_{1}^{({N}_{N})}$. As an initial condition we can take ${\mathbf{v}}_{1}^{(0)}={\mathbf{v}}_{0}+{\dot{\mathbf{v}}}_{0}\mathrm{\Delta}s$ and ${h}_{1}^{(0)}={h}_{0}+{\dot{h}}_{0}\mathrm{\Delta}s$. The stability of the fixed point $({\mathbf{v}}_{1},{h}_{1})$ depends on the eigenvalues of ${F}_{\mathbf{v}}$ evaluated at this point, and this matrix has already been calculated as the top left $N\times N$ block in the Jacobian J. We find the next solution $({\mathbf{v}}_{2},{h}_{2})$ in exactly the same way as in Sect. 2, and can also use the approximation (8) if desired.
Several points should be made to end this section:

An alternative to discretising (10) using Fourier series is to discretise the spatial domain $[-\pi ,\pi ]$ directly, using a uniform grid. The integral in (10) can then be evaluated using the trapezoidal rule. Alternatively, since the integral is a convolution, it can be evaluated efficiently using the fast Fourier transform and multiplication in frequency space. Using this type of discretisation it is still straightforward to restrict to even functions. Essentially, one works with the function defined on only half of the domain ($[-\pi ,0]$) and imposes evenness when necessary.

As presented we can only find even solutions, and only determine stability with respect to perturbations that are also even. We will thus not detect any bifurcations leading to solutions which are not even.

If we were interested in solutions that were not even, we could include sine terms in (14), substitute into (10), and derive the differential equations governing the evolution of their coefficients. We would then have to find some way of removing the translational invariance of solutions.

The relationship between the symmetry of the system (i.e. its invariance with respect to group actions) and methods for choosing one from a continuous family of related solutions is discussed in more detail in [25, 31].

Several other references dealing with continuation of highdimensional problems in different contexts are [32–35].
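To make the grid-based discretisation in the first point above concrete, the following sketch (our own, with an assumed cosine coupling function and grid size) checks that on a uniform periodic grid the trapezoidal-rule evaluation of the convolution agrees with the FFT-based one:

```python
import numpy as np

N = 256
x = -np.pi + 2.0 * np.pi * np.arange(N) / N    # uniform grid on [-pi, pi)
dx = 2.0 * np.pi / N

w = 0.4 + np.cos(x)                            # assumed even coupling w(x)
f = lambda u, h: 1.0 / (1.0 + np.exp(-20.0 * (u - h)))  # sigmoidal firing rate

u = np.cos(x)                                  # an even trial state
h = 0.3
rate = f(u, h)

# Trapezoidal rule on periodic data: sum_k w(x_j - x_k) f(u_k - h) dx.
conv_trap = np.array([np.sum(np.roll(w[::-1], j + 1) * rate) * dx
                      for j in range(N)])

# FFT: a circular convolution is a pointwise product in frequency space.
conv_fft = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(rate))) * dx
# The two evaluations agree to machine precision.
```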
We now consider moving patterns in one spatial dimension.
3.2 Moving Patterns
where $F:{\mathbb{R}}^{N+1}\times \mathbb{R}\to {\mathbb{R}}^{N+1}$. Once a pseudo-arclength condition like (20) has been appended, solutions of this set of $N+2$ equations can be followed just as in Sect. 3.1. To find the stability of a front found in this way at a particular value of c, we need to find all eigenvalues of the linearisation of (28) about the front. This linearisation appears as the top left $N\times N$ block in the Jacobian of the augmented system. Note that this linearisation has a zero eigenvalue with eigenvector equal to the (discretised) spatial derivative, $\partial u/\partial \xi $. Stability of the front is determined by eigenvalues other than this one. (The stability of a wave in the original, i.e. nondiscretised, system involves determining a continuous spectrum [38], and the discrete set of eigenvalues we find is an approximation to that.)
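The zero eigenvalue mentioned above is easy to verify numerically. The sketch below is our own and uses the bistable (Nagumo) equation ${u}_{t}={u}_{xx}+u(1-u)(u-a)$ as a stand-in for the neural field model, since its front $U(\xi )=1/(1+{e}^{\xi /\sqrt{2}})$, with speed $c=(1-2a)/\sqrt{2}$, is known in closed form; the discretised linearisation about the front should (approximately) annihilate the discretised $dU/d\xi $:

```python
import numpy as np

# Nagumo front, a stand-in for the neural field front (not from the text):
# U(xi) = 1/(1 + exp(xi/sqrt(2))), speed c = (1 - 2a)/sqrt(2).
a = 0.25
c = (1.0 - 2.0 * a) / np.sqrt(2.0)

h = 0.02
xi = np.arange(-20.0, 20.0 + h, h)
U = 1.0 / (1.0 + np.exp(xi / np.sqrt(2.0)))
Up = -U * (1.0 - U) / np.sqrt(2.0)     # exact spatial derivative dU/dxi

# f(u) = u(1-u)(u-a); f'(u) evaluated along the front:
fp = (1.0 - U) * (U - a) - U * (U - a) + U * (1.0 - U)

# Linearisation about the front, (Lv) = v'' + c v' + f'(U) v,
# discretised with central differences at the interior grid points.
v = Up
Lv = ((v[2:] - 2.0 * v[1:-1] + v[:-2]) / h**2
      + c * (v[2:] - v[:-2]) / (2.0 * h)
      + fp[1:-1] * v[1:-1])

residual = np.linalg.norm(Lv) / np.linalg.norm(Up)  # O(h^2) truncation error
```

The residual is at the level of the finite-difference truncation error, confirming that the spatial derivative of the wave is a null eigenvector of the linearisation.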
4 Two-Dimensional Models
Note the following points:

The convolution in (36) is evaluated using the two-dimensional Fast Fourier Transform (FFT), i.e. the FFT of (the discretisations of) both w and $f(u-h)$ are taken, they are multiplied together, and then the inverse FFT is taken. Partial derivatives are evaluated using finite difference approximations, but could also be implemented using the FFT [30].

Let us append a pseudo-arclength condition to (39) and define a vector $\mathbf{V}\in {\mathbb{R}}^{2{N}^{2}+2}$ by concatenating v and λ. The resulting set of equations can be written $G(\mathbf{V})=0$. Given a solution of (39), finding the next point along this curve amounts to solving $G(\mathbf{V})=0$ using Newton’s method, i.e. iterating${\mathbf{V}}_{n+1}={\mathbf{V}}_{n}-{J}^{-1}G({\mathbf{V}}_{n});\phantom{\rule{1em}{0ex}}n=0,1,2,\dots ,$(40)where J is the Jacobian of G, evaluated at ${\mathbf{V}}_{n}$. For a reasonable discretisation, a large value of N is needed and hence J will be too big to store, let alone invert, so instead we write (40) as$J{\mathbf{\Delta}}_{n}=G({\mathbf{V}}_{n}),$(41)where ${\mathbf{\Delta}}_{n}={\mathbf{V}}_{n}-{\mathbf{V}}_{n+1}$. This is a linear equation for the unknown ${\mathbf{\Delta}}_{n}$, but instead of solving it directly one can solve it iteratively, using for example the GMRES algorithm [41, 42]. Some implementations of the GMRES algorithm, e.g. that in Matlab, do not require the Jacobian J to be explicitly formed, only that one can evaluate the product of J with an arbitrary vector, ϕ. This can be done for a general problem in a matrix-free way with one extra evaluation of G, using the finite difference approximation$J\mathit{\varphi}\approx \frac{G({\mathbf{V}}_{n}+\epsilon \mathit{\varphi})-G({\mathbf{V}}_{n})}{\epsilon},\phantom{\rule{1em}{0ex}}0<\epsilon \ll 1.$(42)
Similarly, we need the eigenvalues of J, or at least a few with the largest real part, to determine stability. The Matlab function eigs does not need J, only its product with an arbitrary vector, which can be implemented as above.
Note that for the particular problem considered here, the product of J with an arbitrary vector can be calculated exactly without the need for the approximation (42), as explained by Rankin et al. [43]. These authors used GMRES to follow stationary solutions of neural field equations in two dimensions, but the results here may be the first for travelling solutions.

As well as travelling bumps [44], patterns that appear in two spatial dimensions include stationary groups of bumps [19, 37, 43], “breathing” bumps [45], rings and rotating groups of bumps [46], waves [47], spirals [48, 49] and target patterns [45, 50]. While stationary patterns and those that propagate at a constant velocity (either translational or rotational) can be dealt with using the ideas in this section, patterns such as breathing bumps and target waves are intrinsically periodic in time, and thus must be dealt with using slightly different techniques.

We have evaluated the double integral in (31) directly using fast Fourier transforms, but some early progress on two-dimensional neural fields was made using other Fourier techniques [19, 48], and see [51]. For example, suppose that the Fourier transform of w was$\frac{1}{{s}^{4}+{s}^{2}+1},$(43)where ${s}^{2}={k}_{x}^{2}+{k}_{y}^{2}$ and ${k}_{x}$ and ${k}_{y}$ are the two transform variables. Taking the two-dimensional Fourier transform of (31) we obtain$\frac{\partial \stackrel{\u02c6}{u}}{\partial t}+\stackrel{\u02c6}{u}+\stackrel{\u02c6}{a}=\frac{A\stackrel{\u02c6}{f}}{{s}^{4}+{s}^{2}+1},$(44)where the hat indicates the Fourier transform. Multiplying (44) by ${s}^{4}+{s}^{2}+1$ and taking the inverse Fourier transform, using the identity ${s}^{2}\stackrel{\u02c6}{u}\leftrightarrow -{\mathrm{\nabla}}^{2}u$, we obtain$({\mathrm{\nabla}}^{4}-{\mathrm{\nabla}}^{2}+1)(\frac{\partial u(x,y,t)}{\partial t}+u(x,y,t)+a(x,y,t))=Af(u(x,y,t)-h),$(45)which is formally equivalent to (31) but only involves derivatives. The advantage of this formulation is that solutions of (36)–(37) satisfy$({\mathrm{\nabla}}^{4}-{\mathrm{\nabla}}^{2}+1)(-c\frac{\partial u}{\partial \xi}+u+a)=Af(u-h),$(46)$-c\tau \frac{\partial a}{\partial \xi}=Bu-a,$(47)
which only involve derivatives. Finite difference approximations to these derivatives can then be implemented using sparse matrices, thus removing the need to store and manipulate large full matrices. This idea has subsequently been used by several other groups [47, 50].
Using this method, the coupling function is assumed to be a function of only distance in two dimensions and is given by$w(r)={\int}_{0}^{\mathrm{\infty}}s{J}_{0}(rs)\stackrel{\u02c6}{w}(s)\phantom{\rule{0.2em}{0ex}}ds,$(48)where ${J}_{0}$ is the Bessel function of the first kind of order zero and $\stackrel{\u02c6}{w}(s)$ is the Fourier transform of w (in the case above, $\stackrel{\u02c6}{w}(s)=1/({s}^{4}+{s}^{2}+1)$).
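Returning to the matrix-free Newton–GMRES iteration of (40)–(42), it can be sketched generically as follows (the wrapper name is our own; SciPy's `gmres` stands in for the Matlab implementation mentioned above):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov_step(G, V, eps=1e-7):
    """One matrix-free Newton step V -> V - J^{-1} G(V): the product
    J @ phi is approximated by the finite difference (42), so the
    Jacobian is never stored, and the linear system (41) is solved
    iteratively with GMRES."""
    GV = G(V)

    def Jv(phi):
        return (G(V + eps * phi) - GV) / eps   # (42)

    J = LinearOperator((V.size, V.size), matvec=Jv)
    delta, info = gmres(J, GV)                 # (41): J @ delta = G(V)
    assert info == 0                           # GMRES converged
    return V - delta
```

Repeating this step until $\parallel G(\mathbf{V})\parallel $ is small gives the corrector used at each continuation step; eigenvalues for stability can be obtained in the same matrix-free way, by passing the operator to an iterative eigensolver (eigs in Matlab, `scipy.sparse.linalg.eigs` in Python).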
We now discuss a number of extensions to the ideas presented here.
5 Extensions
5.1 Delays
We have considered differential equations where the derivatives depend on only the values of the variables at the present time. However, delays are ubiquitous in neural systems [52–56] (and elsewhere) so the study of delay differential equations naturally arises. Such systems can be numerically integrated using, for example, Matlab’s dde23, but following periodic orbits and determining the stability of fixed points is much more involved than for non-delayed systems, due to the infinite-dimensional nature of the problem, even for a scalar equation. The software package DDE-BIFTOOL [11] is useful for performing such calculations, and also see [57].
5.2 Global Bifurcations
i.e. both fixed points have a two-dimensional unstable manifold and one-dimensional stable manifold (for $c>0$). A heteroclinic connection between the fixed points occurs when the unstable manifold of one intersects the stable manifold of the other, which is a codimension-one event for this system, i.e. it will generically occur at isolated values of the parameter c. These values are those shown in Fig. 7. They can be found by “shooting”: numerically integrating (51)–(53) backwards using an initial condition on the stable manifold of one fixed point, and varying c until this trajectory intersects the unstable manifold of the other fixed point. If c is negative the dimensions of the stable and unstable manifolds are interchanged, but the argument above still applies.
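The shooting procedure can be illustrated on a system where the answer is known. The sketch below is our own: it uses the bistable equation ${u}_{t}={u}_{xx}+u(1-u)(u-a)$ rather than (51)–(53), integrates the travelling-wave ODE away from one saddle, classifies whether the trajectory overshoots or undershoots the other, and bisects on c; the exact front speed is $(1-2a)/\sqrt{2}$:

```python
import numpy as np

def f(u, a):
    return u * (1.0 - u) * (u - a)

def shoot(c, a, dt=0.01, xi_max=100.0):
    """Integrate u'' + c u' + f(u) = 0 (as u' = v, v' = -c v - f(u))
    forward from just off the saddle (u, v) = (1, 0) along its unstable
    eigenvector, and classify which side of the heteroclinic we land on."""
    lam = 0.5 * (-c + np.sqrt(c * c + 4.0 * (1.0 - a)))  # unstable eigenvalue
    u, v = 1.0 - 1e-6, -1e-6 * lam

    def rhs(u, v):
        return v, -c * v - f(u, a)

    for _ in range(int(xi_max / dt)):
        # one RK4 step
        k1u, k1v = rhs(u, v)
        k2u, k2v = rhs(u + 0.5 * dt * k1u, v + 0.5 * dt * k1v)
        k3u, k3v = rhs(u + 0.5 * dt * k2u, v + 0.5 * dt * k2v)
        k4u, k4v = rhs(u + dt * k3u, v + dt * k3v)
        u += dt * (k1u + 2.0 * k2u + 2.0 * k3u + k4u) / 6.0
        v += dt * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
        if u < -0.1:
            return 'fast'    # overshoots u = 0: c too small
        if v > 1e-9:
            return 'slow'    # turns around before reaching u = 0: c too large
    return 'slow'

def front_speed(a, c_lo=0.0, c_hi=1.0, tol=1e-5):
    # Bisect on c: the heteroclinic connection separates the two behaviours.
    while c_hi - c_lo > tol:
        c_mid = 0.5 * (c_lo + c_hi)
        if shoot(c_mid, a) == 'fast':
            c_lo = c_mid
        else:
            c_hi = c_mid
    return 0.5 * (c_lo + c_hi)
```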
Notes:

In the same way that a front can be viewed as a heteroclinic connection between two fixed points, a spatially localised pulse can be viewed as a homoclinic orbit to a fixed point. This applies whether the pulse is stationary [18, 61, 62] or moving [39] (in which case the speed appears as a parameter, as above).

Software for the continuation of homoclinic and heteroclinic orbits exists [10, 63].

The conversion of an integral equation like (28) to a differential equation via Fourier transform has been used by a number of authors [18, 61, 62, 64]. For this technique to work, the Fourier transform of the coupling function should be a rational function of the square of the transform variable.

The resulting differential equations sometimes have additional structure (they are Hamiltonian, for example) and this can be exploited in their analysis [18, 61].

Solutions which are periodic in space may also be of interest, and it may be easier to find them by considering periodic solutions of a differential equation of the form (50) rather than the equivalent integral equation (28).
5.3 Following Bifurcations
Although we have not shown any here, Hopf bifurcations can also occur in neural field models, leading to oscillatory behaviour [20, 22, 65, 66]. They are characterised by a pair of complex conjugate eigenvalues of the Jacobian passing through the imaginary axis. Curves of Hopf bifurcations can be followed as two parameters are varied in a similar way to that explained above for saddle-node bifurcations, the main difference being that in the simplest formulation there are $\mathcal{O}(3N)$ equations to solve rather than $\mathcal{O}(2N)$, as both the real and the imaginary parts of the corresponding equations have to be solved [8, 67]. Note that more sophisticated algorithms can reduce the number of equations to be solved when following both saddle-node and Hopf bifurcations [4].
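In the simplest formulation a Hopf point is a root of an extended system: the fixed-point equations, the real and imaginary parts of $Jq=i\omega q$, and two scalar conditions fixing the eigenvector. A sketch (our own names; the assumed test problem is the Hopf normal form, which bifurcates at $\mu =0$ with $\omega =1$):

```python
import numpy as np
from scipy.optimize import fsolve

def hopf_residual(z, rhs, jac):
    """Extended system whose root is a Hopf point. Unknowns: the state v,
    the parameter mu, the eigenvector q = qr + i*qi and the frequency om."""
    v, mu = z[0:2], z[2]
    qr, qi, om = z[3:5], z[5:7], z[7]
    J = jac(v, mu)
    return np.concatenate([
        rhs(v, mu),                     # v is a fixed point
        J @ qr + om * qi,               # real part of J q = i om q
        J @ qi - om * qr,               # imaginary part
        [qr @ qr + qi @ qi - 1.0,       # normalise the eigenvector
         qi[0]],                        # fix its complex phase
    ])

def rhs(v, mu):                         # Hopf normal form
    x, y = v
    r2 = x * x + y * y
    return np.array([mu * x - y - x * r2, x + mu * y - y * r2])

def jac(v, mu):
    x, y = v
    return np.array([[mu - 3.0 * x * x - y * y, -1.0 - 2.0 * x * y],
                     [1.0 - 2.0 * x * y, mu - x * x - 3.0 * y * y]])

z0 = np.array([0.01, 0.01, 0.1, 0.7, 0.0, 0.0, -0.7, 0.9])
z = fsolve(lambda z: hopf_residual(z, rhs, jac), z0)
```

Following a curve of Hopf points in two parameters then amounts to applying pseudo-arclength continuation to this residual, with a second parameter freed.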
5.4 Maps
Pseudo-arclength continuation is a method for following solutions of algebraic equations as a parameter is varied. In this paper we have used algebraic equations which define stationary solutions of differential equations, in either a stationary or uniformly travelling coordinate frame. However, for some dynamical systems we may be interested in fixed points of a discrete-time map which may correspond to, for example, periodic orbits of an underlying system [68–70] (and see below). The equations defining these fixed points are also algebraic and can thus be treated using the same methods as discussed above. Note that the criterion for stability of a fixed point of a differential equation (eigenvalues of the linearisation lying in the left half plane) differs from that for a fixed point of a map (eigenvalues lying inside the unit circle) [58–60].
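For instance, a fixed point of the logistic map $x\mapsto \mu x(1-x)$ (an assumed toy example, not from the text) satisfies the algebraic equation $f(x;\mu )-x=0$, to which the same Newton and continuation machinery applies; stability now requires the multiplier ${f}^{\prime}({x}^{\ast})$ to lie inside the unit circle:

```python
def logistic(x, mu):
    return mu * x * (1.0 - x)

def fixed_point(mu, x0=0.5, iters=30):
    # Newton's method on g(x) = f(x; mu) - x.
    x = x0
    for _ in range(iters):
        g = logistic(x, mu) - x
        dg = mu * (1.0 - 2.0 * x) - 1.0
        x -= g / dg
    return x

mu = 2.5
xs = fixed_point(mu)                  # nontrivial fixed point x* = 1 - 1/mu
multiplier = mu * (1.0 - 2.0 * xs)    # f'(x*) = 2 - mu for this map
stable = abs(multiplier) < 1.0        # map criterion: inside the unit circle
```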
5.5 Periodic Orbits
We have only considered stationary solutions of differential equations, but many differential equations have solutions which are periodic in time, arising from, for example, a Hopf bifurcation [58, 59]. Periodic forcing, in either time or space [71, 72], can also generate solutions which are periodic in time or space, respectively. Finding and following periodic orbits can be done using pseudo-arclength continuation. The main idea is as before: construct a set of algebraic equations which are satisfied by the periodic orbit. Such an orbit is represented in a finite-dimensional way using, for example, a finite Fourier series expansion, or more commonly and efficiently, a piecewise polynomial function [59, 73] for each variable. This representation of the orbit is then substituted into the governing differential equations, giving a set of algebraic equations that must be satisfied. For autonomous differential equations (i.e. ones which do not explicitly depend on time) there is an invariance with respect to time shifts, in the same way that we saw invariance with respect to spatial shifts in Sect. 3. This invariance can be removed in the same way, by using a scalar “phase” condition to select one from a continuous family [73].
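A minimal version of this procedure, using a global Fourier representation rather than piecewise polynomials, can be sketched on an assumed toy problem (the Hopf normal form, whose orbit at $\mu =1$ is the unit circle traversed with frequency 1): the unknowns are the state at the collocation points together with the frequency, and a scalar phase condition removes the time-shift invariance.

```python
import numpy as np
from scipy.optimize import fsolve

N = 16
theta = 2.0 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers

def d_dtheta(u):
    # spectral differentiation of a 2*pi-periodic function
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

def residual(z, mu):
    # unknowns: x, y at the N collocation points, and the frequency om
    x, y, om = z[:N], z[N:2 * N], z[2 * N]
    r2 = x**2 + y**2
    return np.concatenate([
        om * d_dtheta(x) - (mu * x - y - x * r2),
        om * d_dtheta(y) - (x + mu * y - y * r2),
        [y[0]],                            # phase condition: fix the time shift
    ])

mu = 1.0
z0 = np.concatenate([1.1 * np.cos(theta), 1.1 * np.sin(theta), [0.9]])
z = fsolve(lambda z: residual(z, mu), z0)
x, om = z[:N], z[2 * N]
```

Appending a pseudo-arclength condition and freeing μ lets the orbit be continued exactly as for fixed points.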
This technique, of using numerical integration of the underlying system over short time intervals to find both stable and unstable objects, is an example of bifurcation analysis using time-steppers [76]. It forms the basis of the “equation-free” approach, which we now discuss.
5.6 Equation-Free
When studying systems such as (1) it is often assumed that one can explicitly and quickly evaluate $g(u;\mu )$ using a subroutine or by calling a Matlab “function file,” for example. However, there is no requirement that $g(u;\mu )$ be explicitly specified and as long as, given u and μ, a reasonably accurate estimate of $g(u;\mu )$ can be obtained, one can use the ideas presented above. This idea forms part of the “equation-free” approach to studying complex multiscale systems [77–79]. We now briefly summarise some of the relevant ideas.
where $\mathbf{V}\in {\mathbb{R}}^{m}$ and $m\ll n$. Determining that this is the case—and what the variable V actually is—is a complex topic that we will not go into here, but at its simplest, a particular component of V could be the average of some components of v, for example. By “effectively described” we mean that a bifurcation analysis, or numerical integration, of (71) would give similar results as performing these operations on (70), with any differences being easily explained and unimportant. We refer to (71) as the “macroscopic” or “coarse” description of our model. We would like to perform bifurcation analysis of (71), but we do not have an explicit expression for Φ. To evaluate $\Phi (\mathbf{V};\mu )$ for particular V and μ we need two operators which map between v and V. The first is a lifting operator $L:{\mathbb{R}}^{m}\to {\mathbb{R}}^{n}$ such that $L(\mathbf{V})=\mathbf{v}$. We also need a restricting operator $R:{\mathbb{R}}^{n}\to {\mathbb{R}}^{m}$ such that $R(\mathbf{v})=\mathbf{V}$. These operators must satisfy the consistency condition $R(L(\mathbf{V}))=\mathbf{V}$. Since L maps from a low-dimensional space to a high-dimensional one it is often not unique.
In other words, we estimate $\Phi (\mathbf{V};\mu )$ by running a short “burst” of the microscopic system, suitably initialised, and restricting the result. Since there are normally many different ${\mathbf{v}}_{0}$ consistent with a particular ${\mathbf{V}}_{0}$, running many bursts with the different ${\mathbf{v}}_{0}$ and then averaging often results in a better estimate of $\Phi ({\mathbf{V}}_{0};{\mu}_{0})$—this is ideally done in parallel on multiple processors. Thus in principle we can evaluate $\Phi (\mathbf{V};\mu )$ for any V and μ, and this is all we need to perform bifurcation analysis (or numerical integration) of (71). Note that V can be approximately stationary even though v is not (if V is average firing rate of a network, and v contains voltages of neurons in the network, for example) so when finding “fixed points” of Φ, one may need to relax the criteria for determining when $\Phi =0$.
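The lift-run-restrict estimate can be sketched generically as follows (all function names here are placeholders for problem-specific choices, not taken from the references):

```python
import numpy as np

def coarse_rhs(V, mu, lift, micro_step, restrict, T=0.1, bursts=10, rng=None):
    """Equation-free estimate of Phi(V; mu): lift the coarse state V to
    several consistent microscopic states, run each for a short burst of
    length T, restrict back, average, and take a difference quotient."""
    if rng is None:
        rng = np.random.default_rng(0)
    results = []
    for _ in range(bursts):
        v = lift(V, rng)              # one of many v with R(v) = V
        v = micro_step(v, mu, T)      # short burst of the microscopic model
        results.append(restrict(v))
    V_T = np.mean(results, axis=0)
    return (V_T - V) / T              # finite-difference estimate of dV/dt
```

With this in hand, the Newton/GMRES continuation machinery described earlier can be applied to Φ just as if an explicit formula were available.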
Often in the equation-free approach “the devil is in the details,” and a number of applications in computational neuroscience are demonstrated in [80–82]. As shown in these papers, the choice of V can often be done semi-automatically using techniques from data mining. (A long simulation of (70) is done, and the results mined to find whether there is a low-dimensional parametrisation of the data set. If so, these parameters form the components of V.) Note that while we have written the microscopic model (70) as a deterministic differential equation, it could equally well be a stochastic differential equation, or even a dynamical system in which both time and state are discrete [79, 83]. An interesting generalisation of the method presented here was shown in [84], where the analogue of the function Φ was evaluated experimentally in real time, rather than by running a computer simulation.
6 Conclusion
Numerical continuation is a powerful technique, allowing one to follow solutions of sets of algebraic equations as a parameter is varied. We have given an introduction to this technique, discussed its use in the investigation of various models arising in computational neuroscience, and demonstrated its use with a number of examples. The technique is general, but we have concentrated on high-dimensional systems which arise as discretisations of neural field models. By suitable modification the technique can be used to follow bifurcations as two parameters are varied, and to follow solutions which are periodic in time, or uniformly translating or rotating. Its use in high-dimensional systems has been restricted in the past by issues of memory and computational time, but with cheaper memory and faster processors continuing to be produced, we expect the technique to continue to be useful for the investigation of complex, high-dimensional dynamical systems.
Declarations
Acknowledgements
I thank Stephen Coombes, Kyle Wedgwood, Daniele Avitabile and the referees for useful comments.
References
 Bressloff PC: Waves in Neural Media. Springer, Berlin; 2014.MATHView ArticleGoogle Scholar
 Ermentrout GB, Terman DH 64. In Mathematical Foundations of Neuroscience. Springer, Berlin; 2010.View ArticleGoogle Scholar
 Seydel R 5. In Practical Bifurcation and Stability Analysis. Springer, Berlin; 2010.View ArticleGoogle Scholar
 Govaerts WJ: Numerical Methods for Bifurcations of Dynamical Equilibria. SIAM, Philadelphia; 2000.
 Krauskopf B, Osinga HM, Galán-Vioque J: Numerical Continuation Methods for Dynamical Systems: Path Following and Boundary Value Problems. Understanding Complex Systems. Springer, Berlin; 2007.
 Allgower EL, Georg K: Introduction to Numerical Continuation Methods. SIAM, Philadelphia; 2003.
 Beyn WJ, Champneys A, Doedel E, Govaerts W, Kuznetsov YA, Sandstede B: Numerical continuation, and computation of normal forms. In Handbook of Dynamical Systems, Vol. 2. Edited by: Fiedler B. Elsevier, Amsterdam; 2002:149–219.
 Salinger AG, Bou-Rabee NM, Pawlowski RP, Wilkes ED, Burroughs EA, Lehoucq RB, Romero LA: LOCA 1.0 library of continuation algorithms: theory and implementation manual. Sandia National Laboratories, SAND2002-0396; 2002.
 Doedel EJ, Champneys AR, Fairgrieve TF, Kuznetsov YA, Sandstede B, Wang X: AUTO97: continuation and bifurcation software for ordinary differential equations; 1998.
 Dhooge A, Govaerts W, Kuznetsov YA: MATCONT: a MATLAB package for numerical bifurcation analysis of ODEs. ACM Trans. Math. Softw. 2003, 29(2):141–164.
 Engelborghs K, Luzyanina T, Roose D: Numerical bifurcation analysis of delay differential equations using DDE-BIFTOOL. ACM Trans. Math. Softw. 2002, 28(1):1–21.
 Ermentrout B: Simulating, Analyzing, and Animating Dynamical Systems: A Guide to XPPAUT for Researchers and Students. SIAM, Philadelphia; 2002.
 Uecker H, Wetzel D, Rademacher JDM: pde2path – a Matlab package for continuation and bifurcation in 2D elliptic systems. arXiv:1208.3112; 2012.
 Dankowicz H, Schilder F: Recipes for Continuation. SIAM, Philadelphia; 2013.
 Wilson HR, Cowan JD: A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik 1973, 13(2):55–80.
 Amari S: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern. 1977, 27(2):77–87.
 Doedel E, Keller HB, Kernevez JP: Numerical analysis and control of bifurcation problems (I): bifurcation in finite dimensions. Int. J. Bifurc. Chaos 1991, 1(3):493–520.
 Laing C, Troy W, Gutkin B, Ermentrout G: Multiple bumps in a neuronal model of working memory. SIAM J. Appl. Math. 2002, 63(1):62–97.
 Laing C, Troy W: PDE methods for nonlocal models. SIAM J. Appl. Dyn. Syst. 2003, 2(3):487–516.
 Pinto DJ, Ermentrout GB: Spatially structured activity in synaptically coupled neuronal networks: II. Lateral inhibition and standing pulses. SIAM J. Appl. Math. 2001, 62(1):226–243.
 Bressloff P: Spatiotemporal dynamics of continuum neural fields. J. Phys. A, Math. Theor. 2012, 45(3): Article ID 033001.
 Coombes S: Waves, bumps, and patterns in neural field theories. Biol. Cybern. 2005, 93(2):91–108.
 Coombes S, beim Graben P, Potthast R, Wright J (Eds): Neural Fields: Theory and Applications. Springer, Berlin; 2014.
 Wimmer K, Nykamp DQ, Constantinidis C, Compte A: Bump attractor dynamics in prefrontal cortex explains behavioral precision in spatial working memory. Nat. Neurosci. 2014, 17:431–439.
 Beyn WJ, Thümmler V: Freezing solutions of equivariant evolution equations. SIAM J. Appl. Dyn. Syst. 2004, 3(2):85–116.
 Elvin AJ: Pattern formation in a neural field model. PhD thesis. Massey University, Auckland, New Zealand; 2008.
 Laing C, Longtin A: Noise-induced stabilization of bumps in systems with long-range spatial coupling. Phys. D, Nonlinear Phenom. 2001, 160(3):149–172.
 Ermentrout B, Folias SE, Kilpatrick ZP: Spatiotemporal pattern formation in neural fields with linear adaptation. In Neural Fields: Theory and Applications. Edited by: Coombes S, beim Graben P, Potthast R, Wright J. Springer, Berlin; 2014.
 Kilpatrick ZP: Coupling layers regularizes wave propagation in stochastic neural fields. Phys. Rev. E 2014, 89(2): Article ID 022706.
 Trefethen LN: Spectral Methods in MATLAB. SIAM, Philadelphia; 2000.
 Rowley CW, Kevrekidis IG, Marsden JE, Lust K: Reduction and reconstruction for self-similar dynamical systems. Nonlinearity 2003, 16(4):1257.
 Cliffe KA, Spence A, Tavener SJ: The numerical analysis of bifurcation problems with application to fluid mechanics. Acta Numer. 2000, 9:39–131.
 Dijkstra HA, Wubs FW, Cliffe KA, Doedel E, Dragomirescu IF, Eckhardt B, Gelfgat AY, Hazel A, Lucarini V, Salinger AG, Phipps ET, Sanchez-Umbria J, Schuttelaars H, Tuckerman LS, Thiele U: Numerical bifurcation methods and their application to fluid dynamics: analysis beyond simulation. Commun. Comput. Phys. 2014, 15(1):1–45.
 Bär M, Bangia AK, Kevrekidis IG: Bifurcation and stability analysis of rotating chemical spirals in circular domains: boundary-induced meandering and stabilization. Phys. Rev. E 2003, 67(5): Article ID 056126.
 Schneider TM, Gibson JF, Burke J: Snakes and ladders: localized solutions of plane Couette flow. Phys. Rev. Lett. 2010, 104: Article ID 104501.
 Lord G, Thümmler V: Computing stochastic traveling waves. SIAM J. Sci. Comput. 2012, 34(1):B24–B43.
 Coombes S, Schmidt H, Laing C, Svanstedt N, Wyller J: Waves in random neural media. Discrete Contin. Dyn. Syst. 2012, 32:2951–2970.
 Coombes S, Owen MR: Evans functions for integral neural field equations with Heaviside firing rate function. SIAM J. Appl. Dyn. Syst. 2004, 3(4):574–600.
 Pinto D, Ermentrout G: Spatially structured activity in synaptically coupled neuronal networks: I. Traveling fronts and pulses. SIAM J. Appl. Math. 2001, 62(1):206–225.
 Curtu R, Ermentrout B: Pattern formation in a network of excitatory and inhibitory cells with adaptation. SIAM J. Appl. Dyn. Syst. 2004, 3(3):191–231.
 Quarteroni A, Sacco R, Saleri F: Numerical Mathematics. Texts in Applied Mathematics. Springer, Berlin; 2007.
 Saad Y, Schultz MH: GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1986, 7(3):856–869.
 Rankin J, Avitabile D, Baladron J, Faye G, Lloyd D: Continuation of localized coherent structures in nonlocal neural field equations. SIAM J. Sci. Comput. 2014, 36(1):B70–B93.
 Bressloff PC, Kilpatrick ZP: Two-dimensional bumps in piecewise smooth neural fields with synaptic depression. SIAM J. Appl. Math. 2011, 71(2):379–408.
 Folias SE, Bressloff PC: Breathing pulses in an excitatory neural network. SIAM J. Appl. Dyn. Syst. 2004, 3(3):378–407.
 Owen M, Laing C, Coombes S: Bumps and rings in a two-dimensional neural field: splitting and rotational instabilities. New J. Phys. 2007, 9:378.
 Coombes S, Venkov N, Shiau L, Bojak I, Liley D, Laing C: Modeling electrocortical activity through improved local approximations of integral neural field equations. Phys. Rev. E 2007, 76(5): Article ID 051901.
 Laing CR: Spiral waves in nonlocal equations. SIAM J. Appl. Dyn. Syst. 2005, 4(3):588–606.
 Huang X, Troy WC, Yang Q, Ma H, Laing CR, Schiff SJ, Wu JY: Spiral waves in disinhibited mammalian neocortex. J. Neurosci. 2004, 24(44):9897–9902.
 Kilpatrick ZP, Bressloff PC: Spatially structured oscillations in a two-dimensional excitatory neuronal network with synaptic depression. J. Comput. Neurosci. 2010, 28(2):193–209.
 Laing CR: PDE methods for two-dimensional neural fields. In Neural Fields: Theory and Applications. Edited by: Coombes S, beim Graben P, Potthast R, Wright J. Springer, Berlin; 2014.
 Coombes S, Laing C: Delays in activity-based neural networks. Philos. Trans. R. Soc. A, Math. Phys. Eng. Sci. 2009, 367(1891):1117–1129.
 Laing CR, Longtin A: Dynamics of deterministic and stochastic paired excitatory-inhibitory delayed feedback. Neural Comput. 2003, 15(12):2779–2822.
 Meijer H, Coombes S: Travelling waves in models of neural tissue: from localised structures to periodic waves. EPJ Nonlinear Biomed. Phys. 2014, 2(1):3.
 Meijer HG, Coombes S: Travelling waves in a neural field model with refractoriness. J. Math. Biol. 2014, 68(5):1249–1268.
 Faye G, Faugeras O: Some theoretical and numerical results for delayed neural field equations. Phys. D, Nonlinear Phenom. 2010, 239(9):561–578.
 Szalai R: Knut: a continuation and bifurcation software for delay-differential equations. [http://gitorious.org/knut/pages/Home]
 Guckenheimer J, Holmes P: Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer, New York; 1983.
 Kuznetsov YA: Elements of Applied Bifurcation Theory. Springer, Berlin; 1998.
 Wiggins S: Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer, Berlin; 1990.
 Elvin A, Laing C, McLachlan R, Roberts M: Exploiting the Hamiltonian structure of a neural field model. Phys. D, Nonlinear Phenom. 2010, 239(9):537–546.
 Coombes S, Lord GJ, Owen MR: Waves and bumps in neuronal networks with axo-dendritic synaptic interactions. Phys. D, Nonlinear Phenom. 2003, 178(3):219–241.
 Champneys AR, Kuznetsov YA, Sandstede B: A numerical toolbox for homoclinic bifurcation analysis. Int. J. Bifurc. Chaos 1996, 6(5):867–887.
 Guo Y, Chow CC: Existence and stability of standing pulses in neural networks: I. Existence. SIAM J. Appl. Dyn. Syst. 2005, 4(2):217–248.
 Blomquist P, Wyller J, Einevoll GT: Localized activity patterns in two-population neuronal networks. Phys. D, Nonlinear Phenom. 2005, 206(3):180–212.
 Coombes S, Schmidt H, Avitabile D: Spots: breathing, drifting and scattering in a neural field model. In Neural Fields: Theory and Applications. Edited by: Coombes S, beim Graben P, Potthast R, Wright J. Springer, Berlin; 2014.
 Griewank A, Reddien G: The calculation of Hopf points by a direct method. IMA J. Numer. Anal. 1983, 3(3):295–303.
 Wasylenko TM, Cisternas JE, Laing CR, Kevrekidis IG: Bifurcations of lurching waves in a thalamic neuronal network. Biol. Cybern. 2010, 103(6):447–462.
 Shiau L, Laing CR: Periodically forced piecewise-linear adaptive exponential integrate-and-fire neuron. Int. J. Bifurc. Chaos 2013, 23(10): Article ID 1350171.
 Laing CR, Coombes S: Mode locking in a periodically forced “ghost-bursting” neuron model. Int. J. Bifurc. Chaos 2005, 15(4):1433–1444.
 Coombes S, Laing C: Pulsating fronts in periodically modulated neural field models. Phys. Rev. E 2011, 83(1): Article ID 011912.
 Schmidt H, Hutt A, Schimansky-Geier L: Wave fronts in inhomogeneous neural field models. Phys. D, Nonlinear Phenom. 2009, 238(14):1101–1112.
 Doedel E, Keller HB, Kernevez JP: Numerical analysis and control of bifurcation problems (II): bifurcation in infinite dimensions. Int. J. Bifurc. Chaos 1991, 1(4):745–772.
 Hastings SP: Single and multiple pulse waves for the FitzHugh–Nagumo. SIAM J. Appl. Math. 1982, 42(2):247–260.
 Sánchez J, Net M: On the multiple shooting continuation of periodic orbits by Newton–Krylov methods. Int. J. Bifurc. Chaos 2010, 20(1):43–61.
 Tuckerman LS, Barkley D: Bifurcation analysis for timesteppers. In Numerical Methods for Bifurcation Problems and Large-Scale Dynamical Systems. The IMA Volumes in Mathematics and Its Applications 119. Edited by: Doedel E, Tuckerman LS. Springer, New York; 2000:453–466.
 Kevrekidis IG, Samaey G: Equation-free multiscale computation: algorithms and applications. Annu. Rev. Phys. Chem. 2009, 60(1):321–344.
 Kevrekidis Y, Samaey G: Equation-free modeling. Scholarpedia 2010, 5(9):4847.
 Theodoropoulos C, Qian YH, Kevrekidis IG: “Coarse” stability and bifurcation analysis using timesteppers: a reaction-diffusion example. Proc. Natl. Acad. Sci. USA 2000, 97(18):9840–9843.
 Laing CR: On the application of “equation-free modelling” to neural systems. J. Comput. Neurosci. 2006, 20(1):5–23.
 Laing C, Frewen T, Kevrekidis I: Coarse-grained dynamics of an activity bump in a neural field model. Nonlinearity 2007, 20(9):2127.
 Laing CR, Frewen T, Kevrekidis IG: Reduced models for binocular rivalry. J. Comput. Neurosci. 2010, 28(3):459–476.
 Zou Y, Fonoberov VA, Fonoberova M, Mezic I, Kevrekidis IG: Model reduction for agent-based social simulation: coarse-graining a civil violence model. Phys. Rev. E 2012, 85(6): Article ID 066106.
 Sieber J, Gonzalez-Buelga A, Neild SA, Wagg DJ, Krauskopf B: Experimental continuation of periodic orbits through a fold. Phys. Rev. Lett. 2008, 100: Article ID 244101.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.