Numerical Bifurcation Theory for High-Dimensional Neural Models
© Laing; licensee Springer. 2014
Received: 10 April 2014
Accepted: 13 June 2014
Published: 25 July 2014
Numerical bifurcation theory involves finding and then following certain types of solutions of differential equations as parameters are varied, and determining whether they undergo any bifurcations (qualitative changes in behaviour). The primary technique for doing this is numerical continuation, where the solution of interest satisfies a parametrised set of algebraic equations, and branches of solutions are followed as the parameter is varied. An effective way to do this is with pseudo-arclength continuation. We give an introduction to pseudo-arclength continuation and then demonstrate its use in investigating the behaviour of a number of models from the field of computational neuroscience. The models we consider are high dimensional, as they result from the discretisation of neural field models—nonlocal differential equations used to model macroscopic pattern formation in the cortex. We consider both stationary and moving patterns in one spatial dimension, and then translating patterns in two spatial dimensions. A variety of results from the literature are discussed, and a number of extensions of the technique are given.
Keywords: Pseudo-arclength continuation · Bifurcation · Neural field
where u may be a finite-dimensional vector or a function (of space and time, for example), μ is a vector of parameters, and the derivative is with respect to time, t. An ambitious goal when studying a model of the form (1) is to completely understand the nature of all of its solutions, for all possible values of μ. Analytically solving (1) would give us this information, but for many functions g such a solution is impossible to find. Instead we must concentrate on finding qualitative information about the solutions of (1), and on how they change as the parameter μ is varied. Questions we would like to answer include (a) What do typical solutions do as t → ∞, i.e. what is the “steady state” behaviour of a neuron or neural system? (b) Are there “special” initial conditions or states that the system can be in for which the long time behaviour is different from that for a “typical” initial condition? (c) How do the answers to these questions depend on the values of μ? More particularly, can the dynamics of (1) change qualitatively if a parameter (for example, input current to a neuron) is changed? In order to try to answer these questions we often concentrate on certain types of solutions of (1). Examples include fixed points (i.e. values of u for which du/dt = 0), periodic orbits (i.e. solutions which are periodic in time), and (in spatially dependent systems) patterned states such as plane and spiral waves. Solutions like these can be stable (i.e. all initial conditions in some neighbourhood of these solutions are attracted to them) or unstable (some nearby initial conditions leave a neighbourhood of them).
For many systems of interest, finding solutions of the type just mentioned, and their stability, can only be done numerically using a computer. Even the simplest case of finding all fixed points can be non-trivial, as there may be many or even an infinite number of them. When u is high dimensional, for example when (1) arises as the result of discretising a partial differential equation such as the cable equation on a dendrite or axon [1, 2], finding the linearisation about a fixed point, as needed to determine stability, may also be computationally challenging.
In the simplest case, that of finding fixed points of (1) and their dependence on μ, there are two main techniques. The first is integration in time of (1) from a given initial condition until a steady state is reached. Then μ is changed slightly, the process is repeated, and so on. This is conceptually simple, and many accurate integration algorithms exist, but it has several disadvantages:
There may be a long transient before a steady state is reached, requiring long time simulations.
Only stable fixed points can be found using this method.
For fixed μ there may be multiple stable fixed points, and which one is found depends on the value of the initial condition, in a way that may not be obvious.
The second technique for finding fixed points of (1) and their dependence on μ, which is the subject of this review, involves solving the algebraic equation g(u, μ) = 0 directly. The stability of a fixed point is then determined by the linearisation of g about it, involving partial derivatives of g, rather than by time integration of (1).
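As a concrete sketch of this second approach (a hypothetical toy model, not one from this article): the fixed points of a damped pendulum, written as a first-order system, are found by solving g(u) = 0 with a Newton-type solver, and stability is read from the eigenvalues of a finite-difference linearisation.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical stand-in model: a damped pendulum written as a first-order
# system, u = (theta, v), du/dt = g(u). Fixed points are (k*pi, 0).
mu = 0.5  # damping parameter

def g(u):
    theta, v = u
    return np.array([v, -mu*v - np.sin(theta)])

def jac_fd(u, eps=1e-6):
    """Finite-difference approximation to the Jacobian of g at u."""
    n = len(u)
    J = np.zeros((n, n))
    g0 = g(u)
    for j in range(n):
        du = np.zeros(n)
        du[j] = eps
        J[:, j] = (g(u + du) - g0)/eps
    return J

# Solve g(u) = 0 directly from two different initial guesses
u_stable = fsolve(g, [0.3, 0.0])    # converges to the hanging state (0, 0)
u_unstable = fsolve(g, [3.0, 0.0])  # converges to the inverted state (pi, 0)

# Stability from the eigenvalues of the linearisation about each fixed point
lam_s = np.linalg.eigvals(jac_fd(u_stable))
lam_u = np.linalg.eigvals(jac_fd(u_unstable))
```

Note that both the stable and the unstable fixed point are found this way, in contrast to integration in time.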
Numerical bifurcation theory involves (among other things) finding fixed points and periodic orbits of models such as (1) by solving a set of algebraic equations, determining the stability of these solutions, and following them as parameters are varied, on a computer. The field is well developed, with a number of books [3–6], book chapters, reports, and software packages available [9–14]. The point of this article is not to cover numerical bifurcation theory in general, but to demonstrate and review its use in the study of some models from the field of computational neuroscience, in particular those that are high dimensional, resulting from the discretisation of systems of nonlocal differential equations commonly employed as large-scale models of neural tissue, such as the Wilson–Cowan and Amari models. We start in Sect. 2 by explaining the pseudo-arclength continuation algorithm and show how it can be applied to a simple model. Section 3 considers both stationary and moving patterns in one spatial dimension, while Sect. 4 gives an example of a pattern in two spatial dimensions. We discuss a number of extensions in Sect. 5 and conclude in Sect. 6.
2 A Low-Dimensional Model
and the partial derivatives are evaluated at the current iterate. We take a fixed number of Newton iterations and (assuming that (5) has converged) take the result as the next point on the solution curve. As an initial guess we can take the point where the tangent line meets the dashed line in Fig. 2. This point can be regarded as the result of a linear prediction, and Newton’s method (5) regarded as a corrector of this prediction. The stability of the fixed point depends on the sign of ∂g/∂u evaluated at this point, and this has already been calculated as the top left entry in the Jacobian J.
This process can then be continued to find as many points on the curve as required. Note the following points:
Pseudo-arclength continuation follows a curve of solutions in parameter/state space and can follow such a curve through a saddle-node bifurcation, even though such bifurcations can be thought of as involving the annihilation of two solutions. This is its main advantage over natural parameter continuation, for example, which fails at such a point [4, 17].
Consider the structure of the equations being solved in (5). The first equation is the fixed point condition g = 0, and the last is the pseudo-arclength condition. This structure will be repeated in following sections.
As presented, this method will find points in one direction along the curve given by g = 0. If points in the other direction are required, simply replace the tangent vector by its negative when forming the linear prediction, and then continue as above.
A given problem may have fixed points that lie on a closed curve, as in Fig. 1, or on an unbounded curve.
There are a number of refinements that could be made to this algorithm to increase its efficiency. For example, one could terminate the Newton iterations once a solution has been found to within some accuracy, rather than after a fixed number of iterations. One could also adapt the stepsize as the solution curve is traced out, to avoid unnecessary iterations of Newton’s method [4, 17]. Another refinement is that if u and μ are of very different magnitudes it may be beneficial to scale one of them. One way to do this is to replace (3) by (9),
where the weighting factor is chosen to be small if typical values of u are much larger than those of μ, and large if the opposite is true.
The algorithm above involves two nested loops. The inner loop finds a point on the curve of solutions, and the outer one steps along the curve.
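The two nested loops can be sketched in a few lines of Python. The scalar problem g(u, μ) = μ − u², which has a saddle-node at the origin, is a hypothetical stand-in for the equations of this section; the outer loop makes a linear prediction along the tangent, and the inner loop is the Newton corrector on the system augmented with the pseudo-arclength condition.

```python
import numpy as np

def g(u, mu):            # toy scalar right-hand side with a fold at (u, mu) = (0, 0)
    return mu - u**2

def g_u(u, mu):          # partial derivative with respect to u
    return -2.0*u

def g_mu(u, mu):         # partial derivative with respect to mu
    return 1.0

def continue_branch(u0, mu0, ds=0.2, n_steps=30, n_newton=8):
    """Pseudo-arclength continuation of g(u, mu) = 0, starting from (u0, mu0)."""
    u, mu = u0, mu0
    # initial tangent: null vector of the 1x2 Jacobian [g_u, g_mu], normalised
    t = np.array([-g_mu(u, mu), g_u(u, mu)])
    t /= np.linalg.norm(t)
    pts = [(u, mu)]
    for _ in range(n_steps):
        up, mup = u + ds*t[0], mu + ds*t[1]          # linear predictor
        for _ in range(n_newton):                    # Newton corrector
            F = np.array([g(up, mup),
                          t[0]*(up - u) + t[1]*(mup - mu) - ds])
            J = np.array([[g_u(up, mup), g_mu(up, mup)],
                          [t[0], t[1]]])
            dz = np.linalg.solve(J, -F)
            up, mup = up + dz[0], mup + dz[1]
        t = np.array([up - u, mup - mu])             # secant approximation
        t /= np.linalg.norm(t)                       # to the new tangent
        u, mu = up, mup
        pts.append((u, mu))
    return np.array(pts)

branch = continue_branch(1.0, 1.0)   # (1, 1) lies on the curve, since g(1, 1) = 0
```

Starting on the upper branch, the computed curve passes smoothly through the fold at (0, 0) and onto the lower branch, which natural parameter continuation in μ cannot do.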
3 One-Dimensional Models
We now consider several types of pattern that occur in neural field models in one spatial dimension. Such models are used to study macroscopic pattern formation in the cortex, and take the form of nonlocal differential equations. For more background on such models see [16, 18–22] and the recent review. We first consider stationary patterns.
3.1 Stationary Patterns
is a sigmoidal function with a steepness parameter. The variable u(x, t) is the neural field at position x and time t, and represents the activity of a population of neurons at that point. The function w(x − y) represents how neurons at position y affect those at position x, i.e. the network’s connectivity. Its evenness is a manifestation of the isotropy of the domain, i.e. that there is no preferred direction around the domain. The function f is referred to as the firing rate function, converting activity, u, to firing frequency, f(u), and h is a firing threshold.
The first thing to note is that both (10) and (13) are invariant under translations, i.e. having found one solution of (13), any translate of it is also a solution. We want only one from this infinite family of solutions, so we need a way to select only one. A simple way to do this is to consider only even functions, i.e. functions for which u(−x) = u(x). Many steady states of equations like (10) are found to be even, but not all of them must be.
Note that if w is given exactly by a finite Fourier series, then the Fourier coefficients of u beyond that truncation vanish at steady state, and truncating (18) at the same number of modes will not introduce any errors, as noted by a number of authors [27–29].
is the null vector of the matrix, where subscripts indicate partial derivatives (i.e. F_v is the Jacobian of F with respect to v and F_h is a column vector of partial derivatives with respect to h) and these derivatives are evaluated at the current solution. Thus once the vector (21) has been found and normalised, it can be used in (20).
is the Jacobian of the augmented system, and the partial derivatives are evaluated at the current iterate. As above, we take a fixed number of Newton iterations and (assuming that (22) has converged) take the result as the next point on the curve. As an initial guess we can use the linear prediction along the tangent, as in Sect. 2. The stability of the fixed point depends on the eigenvalues of F_v evaluated at this point, and this matrix has already been calculated as the top left block in the Jacobian J. We find the next solution in exactly the same way as in Sect. 2, and can also use the approximation (8) if desired.
Several points should be made to end this section:
An alternative to discretising (10) using Fourier series is to discretise the spatial domain directly, using a uniform grid. The integral in (10) can then be evaluated using the trapezoidal rule. Alternatively, since the integral is a convolution, it can be evaluated efficiently using the fast Fourier transform and multiplication in frequency space. Using this type of discretisation it is still straightforward to restrict to even functions. Essentially, one works with the function defined on only half of the domain and imposes evenness when necessary.
As presented we can only find even solutions, and only determine stability with respect to perturbations that are also even. We will thus not detect any bifurcations leading to solutions which are not even.
If we were interested in solutions that were not even, we could include sine terms in (14), substitute into (10), and derive the differential equations governing the evolution of their coefficients. We would then have to find some way of removing the translational invariance of solutions.
The relationship between the symmetry of the system (i.e. its invariance with respect to group actions) and methods for choosing one from a continuous family of related solutions is discussed in more detail in [25, 31].
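The FFT evaluation of the convolution mentioned above can be checked against the direct trapezoidal sum; the kernel and field below are arbitrary smooth periodic test functions, not ones from this article. On a uniform periodic grid the two evaluations agree to machine precision, with the FFT version costing O(N log N) rather than O(N²).

```python
import numpy as np

N = 64
L = 2*np.pi
x = np.arange(N)*L/N
dx = L/N
w = np.exp(np.cos(x))            # hypothetical even, periodic coupling kernel
u = np.sin(x) + 0.3*np.cos(2*x)  # hypothetical field

# Direct evaluation of (w * u)(x_i) = integral of w(x_i - y) u(y) dy,
# trapezoidal rule on a periodic grid (all weights equal to dx)
direct = np.array([dx*np.sum(w[(i - np.arange(N)) % N]*u) for i in range(N)])

# The same convolution via the FFT: multiply transforms, invert
fft_conv = dx*np.real(np.fft.ifft(np.fft.fft(w)*np.fft.fft(u)))
```

On a periodic domain the trapezoidal rule reduces to an equally weighted sum, which is exactly the circular convolution computed by the FFT.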
We now consider moving patterns in one spatial dimension.
3.2 Moving Patterns
Once a pseudo-arclength condition like (20) has been appended, solutions of this set of equations can be followed just as in Sect. 3.1. To find the stability of a front found in this way at a particular value of c, we need to find all eigenvalues of the linearisation of (28) about the front. This linearisation appears as the top left block in the Jacobian of the augmented system. Note that this linearisation has a zero eigenvalue whose eigenvector is the (discretised) spatial derivative of the front, a consequence of translational invariance. Stability of the front is determined by the eigenvalues other than this one. (The stability of a wave in the original, i.e. non-discretised, system involves determining a continuous spectrum, and the discrete set of eigenvalues we find is an approximation to that.)
4 Two-Dimensional Models
Note the following points:
The convolution in (36) is evaluated using the two-dimensional fast Fourier transform (FFT), i.e. the FFTs of (the discretisations of) both w and the firing rate term are taken, they are multiplied together, and then the inverse FFT is taken. Partial derivatives are evaluated using finite difference approximations, but could also be implemented using the FFT.
Let us append a pseudo-arclength condition to (39) and define a vector by concatenating v and λ. The resulting set of equations can be written G = 0. Given a solution of (39), finding the next point along this curve amounts to solving G = 0 using Newton’s method, i.e. iterating (40), where J is the Jacobian of G, evaluated at the current iterate. For a reasonable discretisation a large value of N is needed, and hence J will be too big to store, let alone invert, so instead we write (40) as (41), a linear equation for the Newton update. Instead of solving it directly, one can solve it iteratively, using for example the GMRES algorithm [41, 42]. Some implementations of the GMRES algorithm, e.g. that in Matlab, do not require the Jacobian J to be explicitly formed, only that one can evaluate the product of J with an arbitrary vector, ϕ. This can be done for a general problem in a matrix-free way, with one extra evaluation of G, using the finite difference approximation (42).
Similarly, we need the eigenvalues of J, or at least a few with the largest real part, to determine stability. The Matlab function eigs does not need J, only its product with an arbitrary vector, which can be implemented as above.
Note that for the particular problem considered here, the product of J with an arbitrary vector can be calculated exactly without the need for the approximation (42), as explained by Rankin et al. These authors used GMRES to follow stationary solutions of neural field equations in two dimensions, but the results here may be the first for travelling solutions.
As well as travelling bumps, patterns that appear in two spatial dimensions include stationary groups of bumps [19, 37, 43], “breathing” bumps, rings and rotating groups of bumps, waves, spirals [48, 49] and target patterns [45, 50]. While stationary patterns and those that propagate at a constant velocity (either translational or rotational) can be dealt with using the ideas in this section, patterns such as breathing bumps and target waves are intrinsically periodic in time, and thus must be dealt with using slightly different techniques.
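In Python, the matrix-free approach described above can be sketched with SciPy’s analogues of the Matlab routines mentioned in the text; the residual G below is a hypothetical stand-in, not a discretised neural field. The Jacobian-vector product uses a finite difference approximation in the style of (42), GMRES then supplies a Newton step, and eigs extracts a few eigenvalues of largest real part from an operator that is never stored as a matrix.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs, gmres

rng = np.random.default_rng(0)
n = 200
# Hypothetical nonlinear residual G whose Jacobian we never form explicitly
A = 3.0*np.eye(n) + 0.1*rng.standard_normal((n, n))
b = rng.standard_normal(n)

def G(x):
    return A @ x + 0.1*np.tanh(x) - b

x = np.zeros(n)
eps = 1e-7

def Jv(phi):
    """Matrix-free Jacobian-vector product: one extra evaluation of G."""
    return (G(x + eps*phi) - G(x))/eps

J = LinearOperator((n, n), matvec=Jv, dtype=np.float64)

# One Newton step, solving J*dx = -G(x) iteratively with GMRES
dx, info = gmres(J, -G(x), atol=1e-10)
x_new = x + dx

# Stability: the few eigenvalues of largest real part, again matrix-free.
# Here a toy diagonal linearisation with eigenvalues -1, -2, ..., -m.
m = 100
diag = -np.arange(1.0, m + 1)
lin_op = LinearOperator((m, m), matvec=lambda phi: diag*phi, dtype=np.float64)
vals = eigs(lin_op, k=3, which='LR', return_eigenvectors=False, tol=1e-10)
leading = np.sort(vals.real)[::-1]
```

Only matrix-vector products are ever required, so the memory cost scales with the number of unknowns rather than its square.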
We have evaluated the double integral in (31) directly using fast Fourier transforms, but some early progress on two-dimensional neural fields was made using other Fourier techniques [19, 48]. For example, suppose that the Fourier transform of w is given by (43), a rational function of the two transform variables. Taking the two-dimensional Fourier transform of (31) we obtain (44), where the hat indicates the Fourier transform. Multiplying (44) through by the denominator of (43) and taking the inverse Fourier transform, using a standard Fourier transform identity, we obtain (45), which is formally equivalent to (31) but involves only derivatives. The advantage of this formulation is that solutions of (36)–(37) satisfy (46)–(47),
which only involve derivatives. Finite difference approximations to these derivatives can then be implemented using sparse matrices, thus removing the need to store and manipulate large full matrices. This idea has subsequently been used by several other groups [47, 50].
Using this method, the coupling function is assumed to be a function of only distance in two dimensions and is given by (48),
where J₀ is the Bessel function of the first kind of order zero and ŵ is the Fourier transform of w.
We now discuss a number of extensions to the ideas presented here.
5.1 Delays
We have considered differential equations where the derivatives depend on only the values of the variables at the present time. However, delays are ubiquitous in neural systems [52–56] (and elsewhere), so the study of delay differential equations naturally arises. Such systems can be numerically integrated using, for example, Matlab’s dde23, but following periodic orbits and determining the stability of fixed points is much more involved than for non-delayed systems, due to the infinite-dimensional nature of the problem, even for a scalar equation. The software package DDE-BIFTOOL is useful for performing such calculations.
5.2 Global Bifurcations
i.e. both fixed points have a two-dimensional unstable manifold and one-dimensional stable manifold (for c > 0). A heteroclinic connection between the fixed points occurs when the unstable manifold of one intersects the stable manifold of the other, which is a codimension-one event for this system, i.e. it will generically occur at isolated values of the parameter c. These values are those shown in Fig. 7. They can be found by “shooting”: numerically integrating (51)–(53) backwards using an initial condition on the stable manifold of one fixed point, and varying c until this trajectory intersects the unstable manifold of the other fixed point. If c is negative the dimensions of the stable and unstable manifolds are interchanged, but the argument above still applies.
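The shooting procedure can be illustrated on a problem with a known answer; the example below is the Nagumo equation, not a model from this article. Travelling fronts U(ξ) of u_t = u_xx + u(1 − u)(u − a) satisfy U'' + cU' + U(1 − U)(U − a) = 0, a heteroclinic connection from (U, U') = (1, 0) to (0, 0), and the front speed is known in closed form, c = (1 − 2a)/√2. We shoot from the unstable manifold of (1, 0) and bisect on c, classifying each trial as an overshoot (U crosses zero) or an undershoot (U' turns positive first).

```python
import numpy as np
from scipy.integrate import solve_ivp

a = 0.3  # bistability parameter of the cubic nonlinearity

def rhs(xi, y, c):
    U, V = y
    return [V, -c*V - U*(1.0 - U)*(U - a)]

def overshoots(c, eps=1e-6, t_max=400.0):
    """Shoot from the unstable manifold of (1, 0); True if U crosses zero."""
    lam = (-c + np.sqrt(c*c + 4.0*(1.0 - a)))/2.0  # unstable eigenvalue at (1, 0)
    y0 = [1.0 - eps, -eps*lam]                     # step along the unstable eigenvector
    hit_zero = lambda t, y, c: y[0]                # U crosses zero: overshoot
    hit_zero.terminal, hit_zero.direction = True, -1.0
    turn = lambda t, y, c: y[1]                    # V turns positive: undershoot
    turn.terminal, turn.direction = True, 1.0
    sol = solve_ivp(rhs, (0.0, t_max), y0, args=(c,), events=[hit_zero, turn],
                    rtol=1e-10, atol=1e-12)
    return sol.t_events[0].size > 0

c_lo, c_hi = 0.01, 1.0        # overshoots at c_lo, undershoots at c_hi
for _ in range(30):           # bisection on the wave speed
    c_mid = 0.5*(c_lo + c_hi)
    if overshoots(c_mid):
        c_lo = c_mid
    else:
        c_hi = c_mid
c_star = 0.5*(c_lo + c_hi)

c_exact = (1.0 - 2.0*a)/np.sqrt(2.0)   # known front speed for the cubic nonlinearity
```

The connection exists only at the isolated speed c*, exactly the codimension-one picture described above.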
In the same way that a front can be viewed as a heteroclinic connection between two fixed points, a spatially localised pulse can be viewed as a homoclinic orbit to a fixed point. This applies whether the pulse is stationary [18, 61, 62] or moving (in which case the speed appears as a parameter, as above).
The conversion of an integral equation like (28) to a differential equation via Fourier transform has been used by a number of authors [18, 61, 62, 64]. For this technique to work, the Fourier transform of the coupling function should be a rational function of the square of the transform variable.
Solutions which are periodic in space may also be of interest, and it may be easier to find them by considering periodic solutions of a differential equation of the form (50) rather than the equivalent integral equation (28).
5.3 Following Bifurcations
Although we have not shown any here, Hopf bifurcations can also occur in neural field models, leading to oscillatory behaviour [20, 22, 65, 66]. They are characterised by a pair of complex conjugate eigenvalues of the Jacobian passing through the imaginary axis. Curves of Hopf bifurcations can be followed as two parameters are varied in a similar way to that explained above for saddle-node bifurcations, the main difference being that in the simplest formulation the augmented system is larger, as both the real and the imaginary parts of the corresponding eigenvector equations have to be solved, together with an extra unknown, the frequency at onset [8, 67]. Note that more sophisticated algorithms can reduce the number of equations to be solved when following both saddle-node and Hopf bifurcations.
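For illustration, the following sketch locates a Hopf point of the planar normal form ẋ = μx − y − x(x² + y²), ẏ = x + μy − y(x² + y²) (a hypothetical example, not a neural field model) by solving the augmented system for the fixed point u, the critical eigenvector q, the frequency ω and the parameter μ together. The real and imaginary parts of Jq = iωq are solved simultaneously, with a normalisation and a phase condition appended to pin down q.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical planar system: a Hopf bifurcation of the origin at mu = 0, omega = 1
def g(u, mu):
    x, y = u
    r2 = x*x + y*y
    return np.array([mu*x - y - x*r2, x + mu*y - y*r2])

def jac(u, mu):
    x, y = u
    return np.array([[mu - 3*x*x - y*y, -1.0 - 2*x*y],
                     [1.0 - 2*x*y, mu - x*x - 3*y*y]])

def hopf_system(z):
    """Augmented system: g = 0 and J q = i*omega*q, split into real and
    imaginary parts, plus a normalisation and a phase condition on q."""
    u, qr, qi = z[0:2], z[2:4], z[4:6]
    om, mu = z[6], z[7]
    J = jac(u, mu)
    return np.concatenate([g(u, mu),
                           J @ qr + om*qi,            # real part of (J - i*omega)q = 0
                           J @ qi - om*qr,            # imaginary part
                           [qr @ qr + qi @ qi - 1.0,  # normalise q
                            qi[0]]])                  # fix the phase of q

# rough guess: fixed point near the origin, omega near 0.9, mu near 0.1
z0 = np.array([0.0, 0.0, 0.7, 0.0, 0.0, -0.7, 0.9, 0.1])
z_hopf, infodict, ier, msg = fsolve(hopf_system, z0, full_output=True)
omega_h, mu_h = z_hopf[6], z_hopf[7]
```

Once such a point is found, a second parameter can be freed and the curve of Hopf points followed with pseudo-arclength continuation, exactly as for solution branches.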
5.4 Maps
Pseudo-arclength continuation is a method for following solutions of algebraic equations as a parameter is varied. In this paper we have used algebraic equations which define stationary solutions of differential equations, in either a stationary or a uniformly travelling coordinate frame. However, for some dynamical systems we may be interested in fixed points of a discrete-time map, which may correspond to, for example, periodic orbits of an underlying system [68–70] (and see below). The equations defining these fixed points are also algebraic and can thus be treated using the same methods as discussed above. Note that the criterion for stability of a fixed point of a differential equation (eigenvalues in the left half-plane) differs from that for a fixed point of a map (eigenvalues inside the unit circle) [58–60].
5.5 Periodic Orbits
We have only considered stationary solutions of differential equations, but many differential equations have solutions which are periodic in time, arising from, for example, a Hopf bifurcation [58, 59]. Periodic forcing, in either time or space [71, 72], can also generate solutions which are periodic in time or space, respectively. Finding and following periodic orbits can be done using pseudo-arclength continuation. The main idea is as before: construct a set of algebraic equations which are satisfied by the periodic orbit. Such an orbit is represented in a finite-dimensional way using, for example, a finite Fourier series expansion, or more commonly and efficiently, a piecewise polynomial function [59, 73] for each variable. This representation of the orbit is then substituted into the governing differential equations, giving a set of algebraic equations that must be satisfied. For autonomous differential equations (i.e. ones which do not explicitly depend on time) there is an invariance with respect to time shifts, in the same way that we saw invariance with respect to spatial shifts in Sect. 3. This invariance can be removed in the same way, by using a scalar “phase” condition to select one from a continuous family.
This technique, of using numerical integration of the underlying system over short time intervals to find both stable and unstable objects, is an example of bifurcation analysis using timesteppers. It forms the basis of the “equation-free” approach, which we now discuss.
When studying systems such as (1) it is often assumed that one can explicitly and quickly evaluate g(u, μ), using a subroutine or by calling a Matlab “function file,” for example. However, there is no requirement that g be explicitly specified: as long as, given u and μ, a reasonably accurate estimate of g(u, μ) can be obtained, one can use the ideas presented above. This idea forms part of the “equation-free” approach to studying complex multiscale systems [77–79]. We now briefly summarise some of the relevant ideas.
where V is of much lower dimension than v. Determining that this is the case, and what the variable V actually is, is a complex topic that we will not go into here; at its simplest, a particular component of V could be the average of some components of v, for example. By “effectively described” we mean that a bifurcation analysis, or numerical integration, of (71) would give similar results to performing these operations on (70), with any differences being easily explained and unimportant. We refer to (71) as the “macroscopic” or “coarse” description of our model. We would like to perform bifurcation analysis of (71), but we do not have an explicit expression for Φ. To evaluate Φ for particular V and μ we need two operators which map between v and V. The first is a lifting operator L such that v = L(V). We also need a restricting operator R such that V = R(v). These operators must satisfy the consistency condition R(L(V)) = V. Since L maps from a low-dimensional space to a high-dimensional one it is often not unique.
In other words, we estimate Φ(V, μ) by running a short “burst” of the microscopic system, suitably initialised, and restricting the result. Since there are normally many different microscopic states v consistent with a particular V, running many bursts with different liftings and then averaging often results in a better estimate of Φ; this is ideally done in parallel on multiple processors. Thus in principle we can evaluate Φ for any V and μ, and this is all we need to perform bifurcation analysis (or numerical integration) of (71). Note that V can be approximately stationary even though v is not (if V is the average firing rate of a network and v contains the voltages of neurons in the network, for example), so when finding “fixed points” of Φ one may need to relax the criterion for determining when Φ = 0.
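A minimal coarse time-stepper, with every model detail invented for illustration: the microscopic system is a population of linear units with frozen zero-mean heterogeneity, the restriction R is the population mean, the lifting L adds a zero-mean perturbation (so R(L(V)) = V holds exactly), and Φ(V) is estimated from a short burst.

```python
import numpy as np

rng = np.random.default_rng(1)
N, mu, dt, delta = 1000, 0.7, 1e-3, 0.05
a = rng.standard_normal(N)
a -= a.mean()                      # frozen zero-mean heterogeneity

def micro_burst(v, t):
    """Euler integration of the microscopic system dv_i/dt = mu - v_i + a_i."""
    for _ in range(int(round(t/dt))):
        v = v + dt*(mu - v + a)
    return v

def lift(V):
    """One of many microscopic states consistent with the coarse state V."""
    pert = rng.standard_normal(N)
    return V + 0.1*(pert - pert.mean())

def restrict(v):
    """Coarse observable: the population mean."""
    return v.mean()

def coarse_rhs(V):
    """Estimate Phi(V) = dV/dt from a short burst of the microscopic model."""
    return (restrict(micro_burst(lift(V), delta)) - V)/delta

V0 = 0.0
phi_est = coarse_rhs(V0)   # the true coarse derivative here is mu - V0 = 0.7
```

The routine coarse_rhs can now be handed to a Newton solver or a continuation code in place of an explicit right-hand side, which is the essence of the equation-free approach.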
Often in the equation-free approach “the devil is in the details,” and a number of applications in computational neuroscience are demonstrated in [80–82]. As shown in these papers, the choice of V can often be made semi-automatically using techniques from data mining. (A long simulation of (70) is done, and the results are mined to determine whether there is a low-dimensional parametrisation of the data set. If so, these parameters form the components of V.) Note that while we have written the microscopic model (70) as a deterministic differential equation, it could equally well be a stochastic differential equation, or even a dynamical system in which both time and state are discrete [79, 83]. An interesting generalisation of the method presented here is one in which the analogue of the function Φ was evaluated experimentally in real time, rather than by running a computer simulation.
6 Conclusion
Numerical continuation is a powerful technique, allowing one to follow solutions of sets of algebraic equations as a parameter is varied. We have given an introduction to this technique, discussed its use in the investigation of various models arising in computational neuroscience, and demonstrated its use with a number of examples. The technique is general, but we have concentrated on high-dimensional systems which arise as discretisations of neural field models. By suitable modification the technique can be used to follow bifurcations as two parameters are varied, and to follow solutions which are periodic in time, or uniformly translating or rotating. Its use in high-dimensional systems has been restricted in the past by issues of memory and computational time, but with cheaper memory and faster processors continuing to be produced, we expect the technique to continue to be useful for the investigation of complex, high-dimensional dynamical systems.
I thank Stephen Coombes, Kyle Wedgwood, Daniele Avitabile and the referees for useful comments.
- Bressloff PC: Waves in Neural Media. Springer, Berlin; 2014.MATHView ArticleGoogle Scholar
- Ermentrout GB, Terman DH 64. In Mathematical Foundations of Neuroscience. Springer, Berlin; 2010.View ArticleGoogle Scholar
- Seydel R 5. In Practical Bifurcation and Stability Analysis. Springer, Berlin; 2010.View ArticleGoogle Scholar
- Govaerts WJ: Numerical Methods for Bifurcations of Dynamical Equilibria. SIAM, Philadelphia; 2000.MATHView ArticleGoogle Scholar
- Krauskopf B, Osinga HM, Galán-Vioque J: Numerical Continuation Methods for Dynamical Systems: Path Following and Boundary Value Problems. Understanding Complex Systems. Springer, Berlin; 2007.View ArticleGoogle Scholar
- Allgower EL, Georg K: Introduction to Numerical Continuation Methods. SIAM, Philadelphia; 2003.MATHView ArticleGoogle Scholar
- Beyn W-J, Champneys A, Doedel E, Govaerts W, Kuznetsov YA, Sandstede B: Numerical continuation, and computation of normal forms. Handbook of Dynamical Systems 2. In Handbook of Dynamical Systems. Edited by: Fiedler B. Elsevier, Amsterdam; 2002:149–219.Google Scholar
- Salinger AG, Bou-Rabee NM, Pawlowski RP, Wilkes ED, Burroughs EA, Lehoucq RB, Romero LA: Loca 1.0 library of continuation algorithms: theory and implementation manual. Sandia National Laboratories, SAND2002–0396; 2002. Salinger AG, Bou-Rabee NM, Pawlowski RP, Wilkes ED, Burroughs EA, Lehoucq RB, Romero LA: Loca 1.0 library of continuation algorithms: theory and implementation manual. Sandia National Laboratories, SAND2002-0396; 2002.Google Scholar
- Doedel EJ, Champneys AR, Fairgrieve TF, Kuznetsov YA, Sandstede B, Wang X: “Auto97,” Continuation and bifurcation software for ordinary differential equations; 1998. Doedel EJ, Champneys AR, Fairgrieve TF, Kuznetsov YA, Sandstede B, Wang X: “Auto97,” Continuation and bifurcation software for ordinary differential equations; 1998.Google Scholar
- Dhooge A, Govaerts W, Kuznetsov YA: Matcont: a Matlab package for numerical bifurcation analysis of odes. ACM Trans. Math. Softw. 2003, 29(2):141–164.MATHMathSciNetView ArticleGoogle Scholar
- Engelborghs K, Luzyanina T, Roose D: Numerical bifurcation analysis of delay differential equations using dde-biftool. ACM Trans. Math. Softw. 2002, 28(1):1–21.MATHMathSciNetView ArticleGoogle Scholar
- Ermentrout B 14. In Simulating, Analyzing, and Animating Dynamical Systems: a Guide to XPPAUT for Researchers and Students. SIAM, Philadelphia; 2002.View ArticleGoogle Scholar
- Uecker H, Wetzel D, Rademacher JDM: pde2path - A Matlab package for continuation and bifurcation in 2D elliptic systems. arXiv:1208.3112; 2012.Google Scholar
- Dankowicz H, Schilder F: Recipes for Continuation. SIAM, Philadelphia; 2013.MATHView ArticleGoogle Scholar
- Wilson HR, Cowan JD: A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik 1973, 13(2):55–80.MATHView ArticleGoogle Scholar
- Amari S: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern. 1977, 27(2):77–87.MATHMathSciNetView ArticleGoogle Scholar
- Doedel E, Keller HB, Kernevez JP: Numerical analysis and control of bifurcation problems (i): bifurcation in finite dimensions. Int. J. Bifurc. Chaos 1991, 1(03):493–520.MATHMathSciNetView ArticleGoogle Scholar
- Laing C, Troy W, Gutkin B, Ermentrout G: Multiple bumps in a neuronal model of working memory. SIAM J. Appl. Math. 2002, 63: 62.MATHMathSciNetView ArticleGoogle Scholar
- Laing C, Troy W: PDE methods for nonlocal models. SIAM J. Appl. Dyn. Syst. 2003, 2(3):487–516.MATHMathSciNetView ArticleGoogle Scholar
- Pinto DJ, Ermentrout GB: Spatially structured activity in synaptically coupled neuronal networks: II. lateral inhibition and standing pulses. SIAM J. Appl. Math. 2001, 62(1):226–243.MATHMathSciNetView ArticleGoogle Scholar
- Bressloff P: Spatiotemporal dynamics of continuum neural fields. J. Phys. A, Math. Theor. 2012., 45(3): Article ID 033001 Article ID 033001Google Scholar
- Coombes S: Waves, bumps, and patterns in neural field theories. Biol. Cybern. 2005, 93(2):91–108.MATHMathSciNetView ArticleGoogle Scholar
- Coombes S, beim Graben P, Potthast R, Wright J (Eds): Neural Fields: Theory and Applications. Springer, Berlin; 2014.Google Scholar
- Wimmer K, Nykamp DQ, Constantinidis C, Compte A: Bump attractor dynamics in prefrontal cortex explains behavioral precision in spatial working memory. Nat. Neurosci. 2014, 17: 431–439.View ArticleGoogle Scholar
- Beyn W-J, Thümmler V: Freezing solutions of equivariant evolution equations. SIAM J. Appl. Dyn. Syst. 2004, 3(2):85–116.MATHMathSciNetView ArticleGoogle Scholar
- Elvin AJ: Pattern formation in a neural field model. PhD thesis. Auckland, New Zealand, Massey University; 2008 Elvin AJ: Pattern formation in a neural field model. PhD thesis. Auckland, New Zealand, Massey University; 2008Google Scholar
- Laing C, Longtin A: Noise-induced stabilization of bumps in systems with long-range spatial coupling. Phys. D, Nonlinear Phenom. 2001, 160(3):149–172.MATHMathSciNetView ArticleGoogle Scholar
- Ermentrout B, Folias SE, Kilpatrick ZP: Spatiotemporal pattern formation in neural fields with linear adaptation. In Neural Fields: Theory and Applications. Edited by: Coombes S, beim Graben P, Potthast R, Wright J. Springer, Berlin; 2014.Google Scholar
- Kilpatrick ZP: Coupling layers regularizes wave propagation in stochastic neural fields. Phys. Rev. E 2014., 89(2): Article ID 022706 Article ID 022706Google Scholar
- Trefethen LN: Spectral Methods in MATLAB. SIAM, Philadelphia; 2000.
- Rowley CW, Kevrekidis IG, Marsden JE, Lust K: Reduction and reconstruction for self-similar dynamical systems. Nonlinearity 2003, 16(4):1257.
- Cliffe KA, Spence A, Tavener SJ: The numerical analysis of bifurcation problems with application to fluid mechanics. Acta Numer. 2000, 9:39–131.
- Dijkstra HA, Wubs FW, Cliffe KA, Doedel E, Dragomirescu IF, Eckhardt B, Gelfgat AY, Hazel A, Lucarini V, Salinger AG, Phipps ET, Sanchez-Umbria J, Schuttelaars H, Tuckerman LS, Thiele U: Numerical bifurcation methods and their application to fluid dynamics: analysis beyond simulation. Commun. Comput. Phys. 2014, 15(1):1–45.
- Bär M, Bangia AK, Kevrekidis IG: Bifurcation and stability analysis of rotating chemical spirals in circular domains: boundary-induced meandering and stabilization. Phys. Rev. E 2003, 67(5): Article ID 056126.
- Schneider TM, Gibson JF, Burke J: Snakes and ladders: localized solutions of plane Couette flow. Phys. Rev. Lett. 2010, 104: Article ID 104501.
- Lord G, Thümmler V: Computing stochastic traveling waves. SIAM J. Sci. Comput. 2012, 34(1):B24–B43.
- Coombes S, Schmidt H, Laing C, Svanstedt N, Wyller J: Waves in random neural media. Discrete Contin. Dyn. Syst. 2012, 32:2951–2970.
- Coombes S, Owen MR: Evans functions for integral neural field equations with Heaviside firing rate function. SIAM J. Appl. Dyn. Syst. 2004, 3(4):574–600.
- Pinto D, Ermentrout G: Spatially structured activity in synaptically coupled neuronal networks: I. Traveling fronts and pulses. SIAM J. Appl. Math. 2001, 62(1):206–225.
- Curtu R, Ermentrout B: Pattern formation in a network of excitatory and inhibitory cells with adaptation. SIAM J. Appl. Dyn. Syst. 2004, 3(3):191–231.
- Quarteroni A, Sacco R, Saleri F: Numerical Mathematics. Texts in Applied Mathematics. Springer, Berlin; 2007.
- Saad Y, Schultz MH: GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1986, 7(3):856–869.
- Rankin J, Avitabile D, Baladron J, Faye G, Lloyd D: Continuation of localized coherent structures in nonlocal neural field equations. SIAM J. Sci. Comput. 2014, 36(1):B70–B93.
- Bressloff PC, Kilpatrick ZP: Two-dimensional bumps in piecewise smooth neural fields with synaptic depression. SIAM J. Appl. Math. 2011, 71(2):379–408.
- Folias SE, Bressloff PC: Breathing pulses in an excitatory neural network. SIAM J. Appl. Dyn. Syst. 2004, 3(3):378–407.
- Owen M, Laing C, Coombes S: Bumps and rings in a two-dimensional neural field: splitting and rotational instabilities. New J. Phys. 2007, 9:378.
- Coombes S, Venkov N, Shiau L, Bojak I, Liley D, Laing C: Modeling electrocortical activity through improved local approximations of integral neural field equations. Phys. Rev. E 2007, 76(5): Article ID 051901.
- Laing CR: Spiral waves in nonlocal equations. SIAM J. Appl. Dyn. Syst. 2005, 4(3):588–606.
- Huang X, Troy WC, Yang Q, Ma H, Laing CR, Schiff SJ, Wu J-Y: Spiral waves in disinhibited mammalian neocortex. J. Neurosci. 2004, 24(44):9897–9902.
- Kilpatrick ZP, Bressloff PC: Spatially structured oscillations in a two-dimensional excitatory neuronal network with synaptic depression. J. Comput. Neurosci. 2010, 28(2):193–209.
- Laing CR: PDE methods for two-dimensional neural fields. In Neural Fields: Theory and Applications. Edited by: Coombes S, beim Graben P, Potthast R, Wright J. Springer, Berlin; 2014.
- Coombes S, Laing C: Delays in activity-based neural networks. Philos. Trans. R. Soc., Math. Phys. Eng. Sci. 2009, 367(1891):1117–1129.
- Laing CR, Longtin A: Dynamics of deterministic and stochastic paired excitatory-inhibitory delayed feedback. Neural Comput. 2003, 15(12):2779–2822.
- Meijer H, Coombes S: Travelling waves in models of neural tissue: from localised structures to periodic waves. EPJ Nonlinear Biomed. Phys. 2014, 2(1):3.
- Meijer HG, Coombes S: Travelling waves in a neural field model with refractoriness. J. Math. Biol. 2014, 68(5):1249–1268.
- Faye G, Faugeras O: Some theoretical and numerical results for delayed neural field equations. Phys. D, Nonlinear Phenom. 2010, 239(9):561–578.
- Szalai R: Knut: a continuation and bifurcation software for delay-differential equations. [http://gitorious.org/knut/pages/Home]
- Guckenheimer J, Holmes P: Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer, New York; 1983.
- Kuznetsov YA: Elements of Applied Bifurcation Theory. Springer, Berlin; 1998.
- Wiggins S: Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer, Berlin; 1990.
- Elvin A, Laing C, McLachlan R, Roberts M: Exploiting the Hamiltonian structure of a neural field model. Phys. D, Nonlinear Phenom. 2010, 239(9):537–546.
- Coombes S, Lord GJ, Owen MR: Waves and bumps in neuronal networks with axo-dendritic synaptic interactions. Phys. D, Nonlinear Phenom. 2003, 178(3):219–241.
- Champneys AR, Kuznetsov YA, Sandstede B: A numerical toolbox for homoclinic bifurcation analysis. Int. J. Bifurc. Chaos 1996, 6(5):867–887.
- Guo Y, Chow CC: Existence and stability of standing pulses in neural networks: I. Existence. SIAM J. Appl. Dyn. Syst. 2005, 4(2):217–248.
- Blomquist P, Wyller J, Einevoll GT: Localized activity patterns in two-population neuronal networks. Phys. D, Nonlinear Phenom. 2005, 206(3):180–212.
- Coombes S, Schmidt H, Avitabile D: Spots: breathing, drifting and scattering in a neural field model. In Neural Fields: Theory and Applications. Edited by: Coombes S, beim Graben P, Potthast R, Wright J. Springer, Berlin; 2014.
- Griewank A, Reddien G: The calculation of Hopf points by a direct method. IMA J. Numer. Anal. 1983, 3(3):295–303.
- Wasylenko TM, Cisternas JE, Laing CR, Kevrekidis IG: Bifurcations of lurching waves in a thalamic neuronal network. Biol. Cybern. 2010, 103(6):447–462.
- Shiau L, Laing CR: Periodically forced piecewise-linear adaptive exponential integrate-and-fire neuron. Int. J. Bifurc. Chaos 2013, 23(10): Article ID 1350171.
- Laing CR, Coombes S: Mode locking in a periodically forced “ghostbursting” neuron model. Int. J. Bifurc. Chaos 2005, 15(4):1433–1444.
- Coombes S, Laing C: Pulsating fronts in periodically modulated neural field models. Phys. Rev. E 2011, 83(1): Article ID 011912.
- Schmidt H, Hutt A, Schimansky-Geier L: Wave fronts in inhomogeneous neural field models. Phys. D, Nonlinear Phenom. 2009, 238(14):1101–1112.
- Doedel E, Keller HB, Kernevez JP: Numerical analysis and control of bifurcation problems (II): bifurcation in infinite dimensions. Int. J. Bifurc. Chaos 1991, 1(4):745–772.
- Hastings SP: Single and multiple pulse waves for the FitzHugh–Nagumo equations. SIAM J. Appl. Math. 1982, 42(2):247–260.
- Sánchez J, Net M: On the multiple shooting continuation of periodic orbits by Newton–Krylov methods. Int. J. Bifurc. Chaos 2010, 20(1):43–61.
- Tuckerman LS, Barkley D: Bifurcation analysis for timesteppers. In Numerical Methods for Bifurcation Problems and Large-Scale Dynamical Systems. The IMA Volumes in Mathematics and Its Applications 119. Edited by: Doedel E, Tuckerman LS. Springer, New York; 2000:453–466.
- Kevrekidis IG, Samaey G: Equation-free multiscale computation: algorithms and applications. Annu. Rev. Phys. Chem. 2009, 60(1):321–344.
- Kevrekidis Y, Samaey G: Equation-free modeling. Scholarpedia 2010, 5(9):4847.
- Theodoropoulos C, Qian Y-H, Kevrekidis IG: “Coarse” stability and bifurcation analysis using time-steppers: a reaction-diffusion example. Proc. Natl. Acad. Sci. USA 2000, 97(18):9840–9843.
- Laing CR: On the application of “equation-free modelling” to neural systems. J. Comput. Neurosci. 2006, 20(1):5–23.
- Laing C, Frewen T, Kevrekidis I: Coarse-grained dynamics of an activity bump in a neural field model. Nonlinearity 2007, 20(9):2127.
- Laing CR, Frewen T, Kevrekidis IG: Reduced models for binocular rivalry. J. Comput. Neurosci. 2010, 28(3):459–476.
- Zou Y, Fonoberov VA, Fonoberova M, Mezic I, Kevrekidis IG: Model reduction for agent-based social simulation: coarse-graining a civil violence model. Phys. Rev. E 2012, 85(6): Article ID 066106.
- Sieber J, Gonzalez-Buelga A, Neild SA, Wagg DJ, Krauskopf B: Experimental continuation of periodic orbits through a fold. Phys. Rev. Lett. 2008, 100: Article ID 244101.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.