Open Access

Interface dynamics in planar neural field models

The Journal of Mathematical Neuroscience 2012, 2:9

https://doi.org/10.1186/2190-8567-2-9

Received: 21 November 2011

Accepted: 13 February 2012

Published: 2 May 2012

Abstract

Neural field models describe the coarse-grained activity of populations of interacting neurons. Because of the laminar structure of real cortical tissue they are often studied in two spatial dimensions, where they are well known to generate rich patterns of spatiotemporal activity. Such patterns have been interpreted in a variety of contexts ranging from the understanding of visual hallucinations to the generation of electroencephalographic signals. Typical patterns include localized solutions in the form of traveling spots, as well as intricate labyrinthine structures. These patterns are naturally defined by the interface between low and high states of neural activity. Here we derive the equations of motion for such interfaces and show, for a Heaviside firing rate, that the normal velocity of an interface is given in terms of a non-local Biot-Savart type interaction over the boundaries of the high activity regions. This exact, but dimensionally reduced, system of equations is solved numerically and shown to be in excellent agreement with the full nonlinear integral equation defining the neural field. We develop a linear stability analysis for the interface dynamics that allows us to understand the mechanisms of pattern formation that arise from instabilities of spots, rings, stripes and fronts. We further show how to analyze neural field models with linear adaptation currents, and determine the conditions for the dynamic instability of spots that can give rise to breathers and traveling waves.

1 Introduction

The functional organization of cortex appears to be roughly columnar, with the laminar sub-structure of each column organizing its micro-circuitry. These columns tessellate the two-dimensional cortical sheet with high density, e.g., 2,000 cm$^2$ of human cortex contain $10^5$ to $10^6$ macrocolumns, comprising about $10^5$ neurons each. Neural field models describe the mean activity of such columns by approximating the cortical sheet as a continuous excitable medium. They can generate rich patterns of emergent spatiotemporal activity and have been used to understand visual hallucinations, mechanisms for short term working memory, motion perception, the generation of electroencephalographic signals and many other neural phenomena. We refer the reader to [1, 2] for recent discussions of neural field models and their uses, and in particular to the work of Bressloff and colleagues [3-5] and Owen et al. [6] for results on planar systems. A minimal two-dimensional neural field model can be written as an integro-differential equation of the form
u_t(\mathbf{x},t) = -u(\mathbf{x},t) + \int_{\mathbb{R}^2} w(\mathbf{x}-\mathbf{x}')\, H(u(\mathbf{x}',t)-h)\, d\mathbf{x}',
(1)
where $\mathbf{x} \in \mathbb{R}^2$ and $t \in \mathbb{R}^+$. Here the variable u represents synaptic activity and the kernel w represents anatomical connectivity. The nonlinear function H represents the firing rate of the tissue and will be taken to be a Heaviside so that the parameter h is interpreted as a firing threshold. For the case of a symmetric synaptic kernel $w(\mathbf{x}) = w(|\mathbf{x}|)$, the model also has a Liapunov function [6, 7] given by
E_{\mathrm{Liap.}}[u] = -\frac{1}{2}\int d\mathbf{x} \int d\mathbf{x}'\, w(|\mathbf{x}-\mathbf{x}'|)\, H(u(\mathbf{x},t)-h)\, H(u(\mathbf{x}',t)-h) + h \int d\mathbf{x}\, H(u(\mathbf{x},t)-h),
(2)

which can be useful in determining the stability of equilibrium solutions.

Neural field models support traveling waves that underlie EEG signals, but also spots of localized high firing activity, which have been linked to models of working memory. These spots can become unstable and can pattern cortex with intricate structures. In Figure 1A we show results of a direct numerical simulation with a classic Mexican-hat choice for w. For further details see the discussion around Equation (20) and Section A.1 in the Appendix (for the numerical scheme). Here Equation (1) describes a single population model with short-range excitation and long-range inhibition. This minimal example nicely illustrates the ability of neural field models to generate intricate spreading labyrinthine patterns. We do not expect to find labyrinthine patterns as such in real brain activity. However, they provide a convenient (and visually striking) proxy for the generation of complex patterns of activity, that emerge spontaneously and/or can be evoked, for example in visual cortex [8]. Labyrinthine patterns are also seen when the Heaviside firing rate function is replaced by a steep sigmoid, as will be discussed later. Visual inspection suggests that much of the behavior of such patterns can be described simply by tracking the boundary between high and low states of activity. Indeed this appears to resonate with neuroscientific practice, where changes of brain activity are often of greater interest than the current brain state per se [9]. Hence it is of interest whether the dynamics of (1) can be replaced by a lower dimensional description that evolves the boundary between high and low states of activity. This programme has already been developed by Amari in his seminal article on one-dimensional models [10], where this interface reduces naturally to a point (or a set of points). However, in two spatial dimensions the interface is more naturally a closed curve (or a set of closed curves).
Fig. 1

Labyrinthine structure emerging from (1) and (20) with parameters $\beta = 0.5$, $\gamma = 4$ and Heaviside threshold $h = 0.115$. The initial spot of radius $R = 12$ has a mode four instability, cf. Figure 4. This is primed by perturbing R with $0.5\cos(4\theta)$. Rows A show u and the colorbar below indicates its values. Rows B illustrate the evolution of the interface ($u = h$, golden outline) due to the normal velocity of the boundary (green arrows, to scale but enlarged by a factor 50). The Liapunov function $E_{\mathrm{Liap.}}$ of (2) is noted at all eight time steps. See also the video in Additional file 1.

The main topic of this article is the development of an equivalent interface description for neural field models of the type exemplified by (1). We show that activity patterns can be described by dynamical equations of reduced dimension, and that these depend only on the shape of the interface (requiring no knowledge of activity away from the interface). Not only is this description amenable to fast numerical simulation strategies, it allows for the construction of localized states and an analysis of their linear stability. Given the computational overheads in simulating the full neural field model this enhances our ability to study pattern formation and suggests more generally that modeling the interfaces of patterns, rather than the patterns themselves, may lead to novel, efficient descriptions of brain activity. Indeed the use of interface dynamics to analyze patterns that arise in partial differential equation models of chemical and physical systems has a strong history [11], and it is natural to translate some of the ideas and technologies from these studies to non-local neural field models. The work by Goldstein [12, 13] and Muratov [14] on pattern formation in two-dimensional excitable reaction-diffusion systems is especially relevant in this context, as both authors have developed effective descriptions of interface dynamics in terms of non-local interactions. See also the book by Desai and Kapral [15] for a recent overview.

It is worth pointing out that whether computing interface dynamics can compete with other numerical schemes will depend on the problem at hand. In general, boundaries that remain relatively short and do not pinch guarantee a speed advantage. In practice, we expect this approach to be especially relevant for (semi-) analytical work aiming at qualitative understanding, as illustrated by some of the examples presented in this article.

In Section 2 we present some of the key ideas behind an interface dynamics in the setting of a one-dimensional neural field model. This is particularly useful for introducing the definition of normal velocity from a level-set condition, as well as establishing what it means for an interface to be linearly stable. The extension of these ideas to two-dimensional systems is presented in Section 3. By writing the synaptic connectivity in terms of a linear combination of Bessel functions, we show that dynamics for the interface can be constructed in terms of line-integrals along the interface, and that the normal velocity of the interface is driven by Biot-Savart-style interactions. Thus we obtain a reduced description for the evolution of a pattern boundary solely in terms of quantities on the boundary itself. Numerical simulations of the interface dynamics are shown to be in direct correspondence with those of the full neural field model. The notion of linear stability of stationary solutions in the interface framework is fleshed out in a series of examples (for spots, rings, stripes and fronts) in Sections 4 and 5, and allows us to understand some of the mechanisms for pattern formation. In Section 6 we add linear adaptation to (1) and extend our analysis to cover this important neural phenomenon. This can introduce dynamic instabilities of stationary structures, and we calculate where breathing and drift instabilities for localized spots occur. Moreover, we use a perturbation argument to determine the shape of traveling spots that emerge beyond a drift instability and show that spots contract in the direction of propagation and widen in the orthogonal direction. Finally, in Section 7 we discuss extensions of the work in this article.

2 A one-dimensional primer

Before we develop the machinery for describing the evolution of interfaces in two-dimensional neural field models, it is informative to begin with a discussion in one dimension. In this case a minimal model can be written in the form
u_t = -u + \psi, \qquad \psi(x,t) = \int_{\mathbb{R}} w(x-y)\, H(u(y,t)-h)\, dy,
(3)
where $u = u(x,t)$ and $x \in \mathbb{R}$, $t \in \mathbb{R}^+$. For a symmetric choice of synaptic kernel, $w(x) = w(|x|)$, which decays exponentially, the one-dimensional model (3) is known to support a traveling front solution [16, 17] that connects a high activity state to a low activity state. In this case it is natural to define a pattern boundary as the interface between these two states. Thus we can define a moving interface (level set) according to
u(x_0(t), t) = \text{const}.
(4)
Here we are assuming that there is only one point on the interface, though in principle we could consider a set of points. The function $x_0 = x_0(t)$ gives the evolution of the interface. Since the high and low activity states in the neural field model are naturally distinguished by determining whether u is above or below the firing threshold, we shall take the constant on the right hand side of (4) to be h (though other choices are also possible). Differentiation of (4) gives an exact expression for the velocity of the interface in the form
\dot{x}_0 = -\frac{u_t}{u_x}\bigg|_{x = x_0(t)}.
(5)
We can now describe the properties of a front solution solely in terms of the behavior at the front edge which separates high activity from low. To see this, let us assume that the front is such that $u(x,t) > h$ for $x < x_0(t)$ and $u(x,t) \leq h$ for $x \geq x_0(t)$. Then (3) reduces to
u_t(x,t) = -u(x,t) + \int_{x - x_0(t)}^{\infty} w(y)\, dy.
(6)
Introducing $z = u_x$ and differentiating (6) with respect to x gives
z_t(x,t) = -z(x,t) - w(x - x_0(t)).
(7)
Integrating (7) from −∞ to t (and dropping transients) yields
z(x,t) = -e^{-t} \int_{-\infty}^{t} e^{s}\, w(x - x_0(s))\, ds.
(8)
We may now use the interface dynamics defined by (5) to study the speed $c > 0$ of a front, defined by $\dot{x}_0 = c$. In this case $x_0(t) = ct$, where without loss of generality we set $x_0(0) = 0$, and from (6) and (8) we have that
u_t\big|_{x = x_0(t)} = -h + \tilde{w}(0), \qquad u_x\big|_{x = x_0(t)} = -\tilde{w}(1/c)/c,
(9)
where
\tilde{w}(\lambda) = \int_0^{\infty} e^{-\lambda s}\, w(s)\, ds.
(10)
Hence from (5) the speed of the front is given implicitly by the equation
h = \tilde{w}(0) - \tilde{w}(1/c).
(11)
To determine stability of the traveling wave we consider a perturbation of the interface and an associated perturbation of u. Introducing the notation ˆ to denote perturbed quantities, to a first approximation we will set $\hat{u}_x|_{x = \hat{x}_0(t)} = u_x|_{x = ct}$, and write $\hat{x}_0(t) = ct + \delta x_0(t)$. The perturbation in u can be related to the perturbation in the interface by noting that both the perturbed and unperturbed boundaries are defined by the level set condition, so that $u(x_0, t) = h = \hat{u}(\hat{x}_0, t)$. Introducing $\delta u(t) = u|_{x = ct} - \hat{u}|_{x = \hat{x}_0(t)}$, we thus have the condition that $\delta u(t) = 0$ for all t. Integrating (6) and dropping transients gives
u(x,t) = e^{-t} \int_{-\infty}^{t} ds\, e^{s} \int_{x - x_0(s)}^{\infty} dy\, w(y),
(12)
and $\hat{u}$ is obtained from (12) by simply replacing $x_0$ by $\hat{x}_0$. Using the above we find that δu is given (to first order in $\delta x_0$) by
\delta u(t) = \frac{1}{c} \int_0^{\infty} ds\, e^{-s/c}\, w(s) \left[ \delta x_0(t) - \delta x_0(t - s/c) \right] = 0.
(13)
This has solutions of the form $\delta x_0(t) = e^{\lambda t}$, where λ is defined by $E(\lambda) = 0$, with
E(\lambda) = 1 - \frac{\tilde{w}((1+\lambda)/c)}{\tilde{w}(1/c)}.
(14)

A front is stable if $\operatorname{Re}\lambda < 0$.

As an example consider the choice $w(x) = \exp(-|x|/\sigma)/(2\sigma)$, for which $\tilde{w}(\lambda) = (\lambda + 1/\sigma)^{-1}/(2\sigma)$. In this case the speed of the wave is given from (11) as
c = \sigma\, \frac{1 - 2h}{2h},
(15)
and
E(\lambda) = \frac{\lambda}{1 + c/\sigma + \lambda}.
(16)

The equation $E(\lambda) = 0$ only has the solution $\lambda = 0$. We also have that $E'(\lambda) > 0$, showing that $\lambda = 0$ is a simple eigenvalue. Hence, the traveling wave front for this example is neutrally stable.
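To make the one-dimensional machinery concrete, the following self-contained Python sketch (our illustration, not code from the article; function names are ours, and quadrature stands in for a closed-form Laplace transform) evaluates $\tilde{w}$ of (10) numerically and solves the implicit speed equation (11) by bisection. For the exponential kernel it should recover the closed-form speed (15).

```python
import math

def wtilde(lam, sigma, T=60.0, n=3000):
    # \tilde{w}(\lambda) = \int_0^\infty e^{-\lambda s} w(s) ds  for  w(s) = e^{-s/\sigma}/(2\sigma),
    # approximated with the trapezoidal rule on [0, T]
    h = T / n
    tot = 0.0
    for k in range(n + 1):
        s = k * h
        f = math.exp(-lam * s - s / sigma) / (2.0 * sigma)
        tot += f if 0 < k < n else 0.5 * f
    return tot * h

def front_speed(h, sigma):
    # solve the implicit speed equation (11), h = wtilde(0) - wtilde(1/c), by bisection
    f = lambda c: wtilde(0.0, sigma) - wtilde(1.0 / c, sigma) - h
    lo, hi = 1e-9, 1.0e3   # f(lo) > 0 and f(hi) < 0 for 0 < h < 1/2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# compare with the closed form (15), c = sigma (1 - 2h)/(2h)
for h, sigma in [(0.25, 1.0), (0.3, 2.0)]:
    print(h, sigma, front_speed(h, sigma), sigma * (1.0 - 2.0 * h) / (2.0 * h))
```

The agreement between the bisection result and (15) is a useful regression test when experimenting with other exponentially decaying kernels, for which (11) must be solved numerically anyway.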

Given this preliminary exposition of interface dynamics we are now ready to describe the extension to two dimensions and to address the additional challenges that working in the plane gives rise to.

3 Interface dynamics in two dimensions

As in the one-dimensional case we will define pattern boundaries as the interface between low and high states of neural activity. To be more precise we introduce the notation $B(t)$ to denote the (compact) area of activity where $u \geq h$. The boundary, or interface, $\partial B(t)$ is defined by the threshold crossing condition $u(\mathbf{x},t) = h$. In this case the model defined by (1) takes the form
u_t(\mathbf{x},t) = -u(\mathbf{x},t) + \psi(\mathbf{x},t), \qquad \psi(\mathbf{x},t) = \int_{B(t)} w(|\mathbf{x}-\mathbf{x}'|)\, d\mathbf{x}',
(17)
and the Liapunov function can be written simply as
E_{\mathrm{Liap.}}[u] = -\frac{1}{2}\int_{B(t)} d\mathbf{x} \int_{B(t)} d\mathbf{x}'\, w(|\mathbf{x}-\mathbf{x}'|) + h \int_{B(t)} d\mathbf{x}.
(18)
Note that B ( t ) does not have to be simply connected and can describe a union of many disjoint active regions. However, for clarity of exposition we shall focus on describing the evolution of an interface that is a single closed curve, as depicted in Figure 2A. The extension to multiple closed curves is straight-forward.
Fig. 2

Interface parametrization and Mexican-hat shape for synaptic kernel. A: Compact area B and boundary $\partial B$. Two points on the boundary $\mathbf{r}$, parametrized by s and s′, are shown with their normal $\mathbf{n}$ and tangent $\mathbf{t}$ vectors. B: Mexican hat (20) with parameters $\beta = 0.3$ and $\gamma = 3$. White contours indicate values $< 0$, black ones $\geq 0$, for a step size of 0.005.

It is well known that the two-dimensional model (17) can support localized states such as spots and rings [18, 19] for a Mexican-hat synaptic connectivity. Recent work in [6] has shown how to determine the stability of such solutions to angular perturbations using an Evans function approach [20]. An analogous numerical study for smooth sigmoidal firing rates can be found in [21]. These studies have highlighted, as compared to the one-dimensional model, that an extra spatial dimension can lead to azimuthal instabilities, whereby localized states can deform (or even split) into patterns with a reduced symmetry predicted by the shape of the most unstable eigenmode. Direct numerical simulations beyond such instability points have further shown the emergence of intricate spreading labyrinthine patterns like those in Figure 1, that leave behind a stable patterned state in their wake. It is our intention here to recover the Evans function results for stability, albeit using a purely interface description of dynamics, as well as to determine the nonlinear equations of motion that govern the evolution of labyrinthine (and other) structures. Moreover, by employing a representation of the synaptic connectivity in terms of a linear combination of Bessel functions, we can obtain an exact, though spatially reduced, dynamical system to describe the interface that depends solely on the shape of the interface itself. In the following, we consider kernels of the form [3]
w(r) = \sum_{i=1}^{N} A_i K_0(\alpha_i r), \qquad A_i \in \mathbb{R}, \quad \alpha_i > 0,
(19)
where K 0 is the zeroth order modified Bessel function of the second kind. In particular we will employ the Mexican-hat shape obtained from
w(r) = \frac{2}{3\pi}\left( K_0(r) - K_0(2r) - \frac{1}{\gamma}\left( K_0(\beta r) - K_0(2\beta r) \right) \right), \qquad \beta, \gamma > 0,
(20)

which is shown for β = 0.3 and γ = 3 in Figure 2B.
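As a concrete illustration, the short Python sketch below (ours, not from the article; in practice one would call scipy.special.k0 rather than the hand-rolled quadrature) evaluates (20) directly from the integral representation $K_0(x) = \int_0^\infty e^{-x\cosh t}\,dt$ and exhibits the Mexican-hat sign structure of short-range excitation and long-range inhibition.

```python
import math

def k0(x, T=12.0, n=1000):
    # K_0(x) = \int_0^\infty exp(-x cosh t) dt, trapezoidal rule (adequate for x >~ 0.01)
    h = T / n
    tot = 0.0
    for k in range(n + 1):
        f = math.exp(-x * math.cosh(k * h))
        tot += f if 0 < k < n else 0.5 * f
    return tot * h

def w(r, beta=0.3, gamma=3.0):
    # Mexican-hat kernel, Equation (20)
    return (2.0 / (3.0 * math.pi)) * (
        k0(r) - k0(2.0 * r) - (k0(beta * r) - k0(2.0 * beta * r)) / gamma)

print(w(0.5), w(5.0))   # positive near the origin, negative at longer range
```

Note that the logarithmic singularities of the four $K_0$ terms cancel as $r \to 0$ (the amplitudes in (20) sum to zero), so w remains bounded at the origin.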

In an identical fashion to the way we derived an interface dynamics in one dimension in Section 2, we differentiate $u(\mathbf{x},t) = h$ along the contour $\partial B(t)$ to obtain
\nabla_{\mathbf{x}} u \cdot \frac{d\mathbf{r}}{dt} + u_t = 0,
(21)
where $\mathbf{r}$ is a point on the domain boundary $\partial B$ and $u_t$ and $\nabla_{\mathbf{x}} u$ are evaluated on the boundary. Introducing the normal vector along the contour $\partial B$ as $\mathbf{n} = -\nabla_{\mathbf{x}} u / |\nabla_{\mathbf{x}} u|$ allows us to obtain the normal velocity along the contour:
\mathbf{n} \cdot \frac{d\mathbf{r}}{dt} = \frac{u_t}{|\mathbf{z}|},
(22)
where $\mathbf{z} \equiv \nabla_{\mathbf{x}} u(\mathbf{x},t)|_{\mathbf{x}=\mathbf{r}}$. Using (17) we see that u and $\mathbf{z}$ satisfy
u_t = -h + \int_B d\mathbf{x}'\, w(|\mathbf{r} - \mathbf{x}'|),
(23)
\mathbf{z}_t = -\mathbf{z} + \nabla_{\mathbf{x}} \int_B d\mathbf{x}'\, w(|\mathbf{x} - \mathbf{x}'|) \Big|_{\mathbf{x} = \mathbf{r}}.
(24)

From the form of (22), (23), and (24), we see that the evolution of the interface does not require any knowledge of the neural field away from the contour, and rather just depends on the shape of the sets where the field is above threshold. We now exploit the choice of $K_0$ as basis function for constructing the synaptic kernel to show how the double integrals in (23) and (24) can be reduced to line integrals. This yields an elegant description of the interface dynamics that emphasizes how the geometry of $\partial B$ drives the evolution of spatiotemporal patterns. The key step in this reformulation is the use of Green’s identity. For a two-dimensional vector field $\mathbf{F}$ this identity is the two-dimensional version of the divergence theorem, which we write symbolically as $\int_B \nabla\cdot\mathbf{F} = \oint_{\partial B} \mathbf{F}\cdot\mathbf{n}$. Using this first identity we may generate a second for a scalar field $\Psi$ as $\int_B \nabla\Psi = \oint_{\partial B} \mathbf{n}\,\Psi$.

To evaluate the right hand sides of (23) and (24) it is enough to calculate $\int_B d\mathbf{x}'\, K_0(\alpha|\mathbf{x}-\mathbf{x}'|)$ and its gradient. In fact, this latter term can easily be rewritten as a line integral, using the second Green’s identity, for any choice of synaptic kernel:
\int_B d\mathbf{x}'\, \nabla_{\mathbf{x}} w(|\mathbf{x}-\mathbf{x}'|) = -\int_B d\mathbf{x}'\, \nabla_{\mathbf{x}'} w(|\mathbf{x}-\mathbf{x}'|) = -\oint_{\partial B} ds\, \mathbf{n}(s)\, w(|\mathbf{x}-\mathbf{r}(s)|).
(25)
Using the fact that $K_0(\alpha|\mathbf{x}|)$ satisfies the identity $K_0(\alpha|\mathbf{x}|) = \alpha^{-2}\nabla^2 K_0(\alpha|\mathbf{x}|) + 2\pi\delta(\alpha\mathbf{x})$, as well as $\nabla_{\mathbf{x}} w(|\mathbf{x}|) = w'(|\mathbf{x}|)\,\mathbf{x}/|\mathbf{x}|$ and $K_0' = -K_1$, an application of Green’s first identity shows that
\int_B d\mathbf{x}'\, K_0(\alpha|\mathbf{x}-\mathbf{x}'|) = \frac{1}{\alpha^2}\int_B d\mathbf{x}'\, \nabla^2_{\mathbf{x}'} K_0(\alpha|\mathbf{x}-\mathbf{x}'|) + 2\pi\int_B d\mathbf{x}'\, \delta(\alpha(\mathbf{x}-\mathbf{x}')) = \frac{1}{\alpha}\oint_{\partial B} ds\, \mathbf{n}(s) \cdot \frac{\mathbf{x}-\mathbf{r}(s)}{|\mathbf{x}-\mathbf{r}(s)|}\, K_1(\alpha|\mathbf{x}-\mathbf{r}(s)|) + C\, \frac{2\pi}{\alpha^2}.
(26)
Here C = 1 if x is within B and C = 0 if x is outside B . If x is on the boundary of B then C = 1 / 2 . Hence, for points on the boundary parametrized by s one finds
u_t(s) = -h + \sum_{i=1}^{N} A_i \left\{ \oint_{\partial B} ds'\, \mathbf{n}(s') \cdot \mathbf{R}_i(s,s') + \frac{\pi}{\alpha_i^2} \right\},
(27)
\mathbf{z}_t(s) = -\mathbf{z}(s) - \oint_{\partial B} ds'\, \mathbf{n}(s')\, w(|\mathbf{r}(s) - \mathbf{r}(s')|),
(28)
where
\mathbf{R}_i(s,s') = \frac{1}{\alpha_i}\, \frac{\mathbf{r}(s) - \mathbf{r}(s')}{|\mathbf{r}(s) - \mathbf{r}(s')|}\, K_1(\alpha_i |\mathbf{r}(s) - \mathbf{r}(s')|).
(29)

Note that the choice of K 0 as a basis for w is merely a convenience to allow explicit calculations. As long as we can write the connectivity function w as the divergence of a vector field then we can exploit Green’s first identity to turn the right hand side of (23) into a line integral.
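The reduction from area to line integrals can be checked numerically. In the sketch below (our code, with quadrature stand-ins for scipy.special.k0/k1; names and tolerances are ours) we take B to be a disc of radius R and evaluate both sides of (26) at its centre, where $C = 1$ and the dot product $\mathbf{n}(s)\cdot(\mathbf{x}-\mathbf{r}(s))/|\mathbf{x}-\mathbf{r}(s)|$ collapses to $-1$; the area integral of $K_0$ and the $K_1$ boundary term should then agree.

```python
import math

def k0(x, T=12.0, n=600):
    # K_0(x) = \int_0^\infty exp(-x cosh t) dt
    h = T / n; tot = 0.0
    for k in range(n + 1):
        f = math.exp(-x * math.cosh(k * h))
        tot += f if 0 < k < n else 0.5 * f
    return tot * h

def k1(x, T=12.0, n=600):
    # K_1(x) = \int_0^\infty exp(-x cosh t) cosh t dt
    h = T / n; tot = 0.0
    for k in range(n + 1):
        t = k * h
        f = math.exp(-x * math.cosh(t)) * math.cosh(t)
        tot += f if 0 < k < n else 0.5 * f
    return tot * h

def area_side(alpha, R, n=1000):
    # \int_B K_0(alpha |x - x'|) dx' with x at the centre of the disc:
    # radial reduction 2 pi \int_0^R K_0(alpha rho) rho d rho (midpoint rule)
    h = R / n
    tot = sum(k0(alpha * (k + 0.5) * h) * (k + 0.5) * h for k in range(n))
    return 2.0 * math.pi * h * tot

def boundary_side(alpha, R):
    # right hand side of (26) at the centre (C = 1); the dot product is -1 on the circle,
    # so the line integral collapses to -(2 pi R / alpha) K_1(alpha R)
    return 2.0 * math.pi / alpha**2 - (2.0 * math.pi * R / alpha) * k1(alpha * R)

print(area_side(1.0, 2.0), boundary_side(1.0, 2.0))
print(area_side(0.5, 3.0), boundary_side(0.5, 3.0))
```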

From the Biot-Savart form of (29) we see that for every part i of the synaptic kernel there is an effective repulsion between two arc length positions with anti-parallel tangent vectors, although the combined effect when including all N terms will depend on the choice of the amplitudes $A_i$. Now with (22), (27), and (28) the normal velocity on the interface can be written solely in terms of certain line-integrals around the interface. From a computational perspective this leads to a substantial advantage in that one no longer needs to solve the full non-local neural field model (17) across the entire plane, and can instead simply evolve the interface in time by discretizing the boundary and translating the points with the normal velocity from (22) in the direction of $\mathbf{n}$. One practical disadvantage of this is the need to monitor for self-intersections of the evolving boundary, for splitting (where a connected region pinches off into two or more disconnected regions), or indeed for the creation of new boundaries where none existed before. However, numerical schemes for coping with similar situations in fluid models are well developed in the literature and it is natural to turn to these for more refined numerical schemes and ones that can automate the process of contour surgery [22, 23]. In Figure 1B we illustrate the simple numerical implementation of the interface dynamics described in Section A.2 in the Appendix, showing the effectiveness of the dimensionally reduced system at capturing the spatiotemporal pattern formation of the full model shown in Figure 1A.
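To illustrate what such a boundary discretization looks like in practice, the sketch below (our code; scipy.special would replace the quadrature Bessel helpers, and all names are ours) evaluates $u_t$ from the Biot-Savart line integral (27) and the steady state of (28) on a uniformly discretized circular interface. For a circle these quantities have closed forms that follow from (41) and (42) of Section 4, namely $u_t = -h + 2\pi R \sum_i (A_i/\alpha_i) I_1(\alpha_i R) K_0(\alpha_i R)$ and $|\mathbf{z}| = 2\pi R\,|\sum_i A_i I_1(\alpha_i R) K_1(\alpha_i R)|$, which serve as a cross-check.

```python
import math

def km(m, x, T=12.0, n=400):
    # K_m(x) = \int_0^\infty exp(-x cosh t) cosh(m t) dt
    h = T / n; tot = 0.0
    for k in range(n + 1):
        t = k * h
        f = math.exp(-x * math.cosh(t)) * math.cosh(m * t)
        tot += f if 0 < k < n else 0.5 * f
    return tot * h

def im(m, x, n=400):
    # I_m(x) = (1/pi) \int_0^pi exp(x cos q) cos(m q) dq
    h = math.pi / n; tot = 0.0
    for k in range(n + 1):
        q = k * h
        f = math.exp(x * math.cos(q)) * math.cos(m * q)
        tot += f if 0 < k < n else 0.5 * f
    return tot * h / math.pi

def interface_quantities(R, h_thr, A, alpha, n=600):
    # u_t from (27) and the steady state of (28) at one node of a discretized
    # circular interface; by rotational symmetry a single node suffices
    xs, ys = R, 0.0                          # evaluation point r(s)
    ut = -h_thr + sum(a * math.pi / al**2 for a, al in zip(A, alpha))
    zx = zy = 0.0
    ds = 2.0 * math.pi * R / n
    for k in range(n):
        th = 2.0 * math.pi * (k + 0.5) / n   # staggered nodes avoid r(s') = r(s)
        rx, ry = R * math.cos(th), R * math.sin(th)
        nx, ny = math.cos(th), math.sin(th)  # outward normal n(s')
        dx, dy = xs - rx, ys - ry
        d = math.hypot(dx, dy)
        dot = (nx * dx + ny * dy) / d
        wd = 0.0
        for a, al in zip(A, alpha):
            ut += a * (dot * km(1, al * d) / al) * ds   # Biot-Savart term, (27) and (29)
            wd += a * km(0, al * d)
        zx -= nx * wd * ds                              # steady state of (28)
        zy -= ny * wd * ds
    return ut, math.hypot(zx, zy)

# Mexican hat (20) written as a sum of K_0 components, beta = 0.5, gamma = 4
beta, gamma = 0.5, 4.0
c0 = 2.0 / (3.0 * math.pi)
A = [c0, -c0, -c0 / gamma, c0 / gamma]
alpha = [1.0, 2.0, beta, 2.0 * beta]

R, h_thr = 3.0, 0.1
ut_num, z_num = interface_quantities(R, h_thr, A, alpha)
ut_cl = -h_thr + 2.0 * math.pi * R * sum(
    a / al * im(1, al * R) * km(0, al * R) for a, al in zip(A, alpha))
z_cl = 2.0 * math.pi * R * abs(sum(
    a * im(1, al * R) * km(1, al * R) for a, al in zip(A, alpha)))
print(ut_num, ut_cl, z_num, z_cl)   # the normal velocity of (22) is ut/|z|
```

An evolving simulation then simply moves each node by $(u_t/|\mathbf{z}|)\,\mathbf{n}\,\Delta t$, with $\mathbf{z}$ stepped according to (28) rather than set to its steady state.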

Furthermore, in our calculations we have found that the key assumption of a Heaviside firing rate $H(u-h)$ can be relaxed to a degree without fundamentally changing the results. This is illustrated in Figure 3, where we show the evolution in time of the $u = h$ interface and the corresponding Liapunov function. The evolution with a Heaviside firing rate $H(u-h)$ is shown in red, and compared with simulations of the full neural field model using more biologically realistic sigmoids $1/\{1 + \exp[-(u-h)/\sigma]\}$, with $\sigma = 0.01$ in green and $\sigma = 0.02$ in blue. Here σ reflects the expected width of the distribution of firing thresholds around a mean h in the neural population, with the Heaviside case corresponding to $\sigma = 0$. Figure 3 demonstrates that for these steep sigmoids very similar labyrinthine shapes arise, and closer inspection reveals that the main differences occur at the rapidly developing rim of the structure, whereas the settled interior is nearly identical. Thus a simple adjustment of the time constant α will in this case provide a near perfect match of the emerging structures. In Figure 3 we demonstrate this with the dashed and dotted red lines, which represent the Heaviside Liapunov function computed over longer time scales (up to $\alpha t = 569.9$ and 626.8, respectively) and then scaled down to $\alpha t \approx 550$ by adjusting α. A very close match to the sigmoidal Liapunov curves (green and blue lines) is then obtained. However, for broader sigmoids we find labyrinths still resembling the Heaviside one, but with more obvious spatial changes. The video in Additional file 3 shows the $\sigma = 0.03$ case as an example. It would seem that mild deviations in the shape of the firing rate from Heaviside (to a steep sigmoidal form) are reflected more in temporal speed than in spatial shape changes.
Fig. 3

Time dependence of the u = h interface and Liapunov function. Red curves are for the Heaviside model of Figure 1, green and blue ones use a sigmoid instead. Circular labels indicate the times of the snapshots. Dashed and dotted curves scale the Heaviside one by adjusting α. See also the video in Additional file 2.

The Liapunov function can also be written in terms of line integrals: $E_{\mathrm{Liap.}} = \frac{1}{2} \sum_{i=1}^{N} A_i F_i + h\Gamma$, with
F_i = \frac{1}{\alpha_i^2} \oint_{\partial B} ds \oint_{\partial B} ds'\, \mathbf{t}(s) \cdot \mathbf{t}(s')\, K_0(\alpha_i |\mathbf{r}(s) - \mathbf{r}(s')|) - \frac{2\pi}{\alpha_i^2} \Gamma,
(30)
where $\Gamma = \int_B d\mathbf{x}$ is the area of the domain above threshold and $\mathbf{t}(s) = d\mathbf{r}(s)/ds$ is the tangent vector, which can also be constructed from $\mathbf{n}$ by an anti-clockwise rotation of $\pi/2$, so that
\mathbf{n} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \mathbf{t}.
(31)
To obtain (30) we have used the fact that
\int_B d\mathbf{x} \int_B d\mathbf{x}'\, K_0(\alpha|\mathbf{x}-\mathbf{x}'|) - \frac{2\pi}{\alpha^2}\Gamma = \frac{1}{\alpha^2}\int_B d\mathbf{x}\, \nabla_{\mathbf{x}} \cdot \int_B d\mathbf{x}'\, \nabla_{\mathbf{x}} K_0(\alpha|\mathbf{x}-\mathbf{x}'|) = -\frac{1}{\alpha^2}\int_B d\mathbf{x}\, \nabla_{\mathbf{x}} \cdot \oint_{\partial B} ds'\, \mathbf{n}(s')\, K_0(\alpha|\mathbf{x}-\mathbf{r}(s')|) = -\frac{1}{\alpha^2}\oint_{\partial B} ds \oint_{\partial B} ds'\, \mathbf{n}(s)\cdot\mathbf{n}(s')\, K_0(\alpha|\mathbf{r}(s)-\mathbf{r}(s')|),
(32)

and the observation that $\mathbf{n}(s) \cdot \mathbf{n}(s') = \mathbf{t}(s) \cdot \mathbf{t}(s')$.

As well as providing a computationally useful framework for studying pattern formation, the interface dynamics including its Liapunov function is also amenable to a direct linear stability analysis. This is especially useful for understanding how the instability of localized stationary states can seed interesting structures, like the labyrinths of Figures 1 and 3. Stationarity of a solution means that the normal velocity is zero all along the boundary of the active area. This is equivalent to demanding $u_t = 0$ on the boundary. In this case (23) reduces to
h = \int_{B_0} d\mathbf{x}'\, w(|\mathbf{r}-\mathbf{x}'|) = \sum_{i=1}^{N} A_i \left\{ \oint_{\partial B_0} ds'\, \mathbf{n}(s') \cdot \mathbf{R}_i(s,s') + \frac{\pi}{\alpha_i^2} \right\},
(33)
where r is on the boundary parametrized by s. We use the notation B 0 to denote a stationary active region. Given the stationary interface, we can also calculate the stationary field u everywhere (away from the interface) using (17) as
u(\mathbf{x}) = \int_{B_0} d\mathbf{x}'\, w(|\mathbf{x} - \mathbf{x}'|),
(34)

which can also be evaluated as a line integral. In order to analyze the stability of stationary solutions in the original neural field formalism defined by (1) one would perturb the field variable u and linearize to derive an eigenvalue equation or Evans function [20]. Here we determine stability using the interface dynamics, generalizing the approach described in Section 2.

Using the notation ˆ again to denote perturbed quantities, we consider small perturbations to the contour shape and denote the new interface by $\partial\hat{B}$. The relationship between the perturbed interface and the perturbed field is, as in one dimension, determined by the condition $\delta u(t) = 0$, where
\delta u(t) = \hat{u}\big|_{\mathbf{x} \in \partial\hat{B}} - u\big|_{\mathbf{x} \in \partial B_0}.
(35)

The dynamics for $\hat{u}$ is given by (23) with B replaced by $\hat{B}$. The perturbation affects the normal vector $\mathbf{n}(s)$ as well as the displacement vector $\mathbf{r}(s) - \mathbf{r}(s')$ that occurs in (27). Thus to evaluate (35) it is necessary to linearize $K_1$ about the unperturbed contour. In the case of interfaces without curvature the linear contribution to $K_1$ is zero. In contrast, for curved interfaces an addition theorem for Bessel functions shows that there is a non-zero contribution. To clarify this statement and show how the above machinery is used in practice, we now give some explicit examples of localized solutions and their stability.

4 Localized states: spots

We consider spots to be circular stationary solutions. They are the equivalent of the bumps known in one spatial dimension [10]. For radially symmetric kernels we expect stationary circular solutions. Yet in two spatial dimensions they can undergo azimuthal instabilities, as already found in [6]. In order to obtain circular solutions we use the standard parametrization of a circle for the contour and write
\mathbf{r}(\theta) = R \begin{bmatrix} \sin\theta \\ 1 - \cos\theta \end{bmatrix}, \qquad \mathbf{n}(\theta) = \begin{bmatrix} \sin\theta \\ -\cos\theta \end{bmatrix}, \qquad \theta \in [0, 2\pi).
(36)
Hence the right hand side of (33) can be calculated using
\oint_{\partial B} ds'\, \mathbf{n}(s') \cdot \mathbf{R}_i(s,s') = -\frac{1}{\alpha_i}\int_0^{2\pi} d\theta\, K_1\!\left[\alpha_i R(\theta)\right] \frac{R^2(1-\cos\theta)}{R(\theta)},
(37)
where $R(\theta) = R\sqrt{2(1-\cos\theta)}$. Using Graf’s formula [24] to perform the integration in (37) we obtain an implicit equation for the spot radius R in the form
h = 2\pi \sum_{i=1}^{N} A_i \left\{ \frac{1}{\alpha_i^2} - \frac{R}{\alpha_i} K_1(\alpha_i R)\, I_0(\alpha_i R) \right\},
(38)
where $I_\nu(x)$ is the modified Bessel function of the first kind of order ν. A plot of the spot radius R as a function of threshold h is shown in Figure 4.
Fig. 4

Stationary radial solutions for the Mexican hat kernel (20) with β = 0.5 and various values of γ. The dotted branches are circular solutions unstable to uniform changes of size. Solid branches are stable. Dashed branches indicate azimuthal instabilities of different modes m which deform the circular solution.
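The passage from (37) to (38) is easy to verify numerically. The sketch below (our code; scipy.special would replace the quadrature helpers, and the parameter choices are illustrative) compares the angular line integral of (37) for a single $K_0$ component with its Graf-reduced closed form $\pi/\alpha^2 - (2\pi R/\alpha) K_1(\alpha R) I_0(\alpha R)$, and tabulates the existence curve $h(R)$ of (38) for the Mexican hat (20) with $\beta = 0.5$ and $\gamma = 4$.

```python
import math

def k1(x, T=12.0, n=500):
    # K_1(x) = \int_0^\infty exp(-x cosh t) cosh t dt
    h = T / n; tot = 0.0
    for k in range(n + 1):
        t = k * h
        f = math.exp(-x * math.cosh(t)) * math.cosh(t)
        tot += f if 0 < k < n else 0.5 * f
    return tot * h

def i0(x, n=500):
    # I_0(x) = (1/pi) \int_0^pi exp(x cos q) dq
    h = math.pi / n; tot = 0.0
    for k in range(n + 1):
        f = math.exp(x * math.cos(k * h))
        tot += f if 0 < k < n else 0.5 * f
    return tot * h / math.pi

def ring_integral(al, R, n=2000):
    # the line integral in (37) for one K_0 component, with R(theta) = R sqrt(2(1 - cos theta));
    # staggered nodes keep the quadrature away from theta = 0, where the integrand limit is finite
    tot = 0.0
    for k in range(n):
        th = 2.0 * math.pi * (k + 0.5) / n
        rho = R * math.sqrt(2.0 * (1.0 - math.cos(th)))
        tot += k1(al * rho) * R * R * (1.0 - math.cos(th)) / rho
    return -(1.0 / al) * tot * (2.0 * math.pi / n)

def h_of_R(R, A, alpha):
    # spot existence condition, Equation (38)
    return 2.0 * math.pi * sum(
        a * (1.0 / al**2 - (R / al) * k1(al * R) * i0(al * R)) for a, al in zip(A, alpha))

# Graf reduction: the line integral equals pi/al^2 - (2 pi R/al) K_1(al R) I_0(al R)
al, R = 1.0, 3.0
print(ring_integral(al, R),
      math.pi / al**2 - (2.0 * math.pi * R / al) * k1(al * R) * i0(al * R))

beta, gamma = 0.5, 4.0
c0 = 2.0 / (3.0 * math.pi)
A = [c0, -c0, -c0 / gamma, c0 / gamma]
alpha = [1.0, 2.0, beta, 2.0 * beta]
print([round(h_of_R(r, A, alpha), 4) for r in (1.0, 2.0, 3.0, 6.0, 12.0)])
```

Roots of $h(R) = h$ for a given threshold, found for instance by bisection on this curve, give the stationary spot radii of Figure 4.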

To determine the relationship between a perturbed and unperturbed spot we need to examine the condition δ u ( t ) = 0 . The general solution for u (dropping transients) can be written as
u(\mathbf{x},t) = e^{-t} \int_{-\infty}^{t} ds\, e^{s}\, \psi(\mathbf{x}, s).
(39)
For a circular solution of radius R, ψ is conveniently written as
\psi(\mathbf{r}) = \int_0^{2\pi}\!\!\int_0^{R} w(|\mathbf{r}-\mathbf{r}'|)\, r'\, dr'\, d\theta', \qquad \mathbf{r}' = (r', \theta').
(40)
Here ψ may be constructed explicitly (off the boundary), using similar line integral calculations to those for existence (above), and is given by
\psi(r) = 2\pi R \sum_{i=1}^{N} A_i L_i(r),
(41)
where
L_i(r) = \begin{cases} \dfrac{1}{\alpha_i} I_1(\alpha_i R)\, K_0(\alpha_i r), & r \geq R, \\[2mm] \dfrac{1}{\alpha_i^2 R} - \dfrac{1}{\alpha_i} I_0(\alpha_i r)\, K_1(\alpha_i R), & r < R. \end{cases}
(42)
For perturbations in the radius of the form $\hat{R} = R + \delta R(\theta, t)$ one finds
\delta u(t) = \int_0^{\infty} ds\, e^{-s} \int_0^{2\pi} d\theta' \left\{ \int_0^{\hat{R}(\theta', t-s)} w(|\mathbf{r}-\mathbf{r}'|)\Big|_{\mathbf{r} = (\hat{R}(\theta,t), \theta)} r'\, dr' - \int_0^{R} w(|\mathbf{r}-\mathbf{r}'|)\Big|_{\mathbf{r} = (R, \theta)} r'\, dr' \right\} = \int_0^{\infty} ds\, e^{-s} \left\{ \int_0^{2\pi} d\theta'\, \delta R(\theta-\theta', t-s)\, R\, w(R(\theta')) + \psi'(R)\, \delta R(\theta, t) \right\}.
(43)
Using the above we see that $\delta u(t) = 0$ has solutions of the form $\delta R(\theta,t) = \cos(m\theta)\, e^{\lambda_m t}$, where
\lambda_m = -1 + W_m,
(44)
and
W_m = \frac{R}{|\psi'(R)|} \int_0^{2\pi} d\theta\, \cos(m\theta)\, w(R(\theta)) = \frac{\sum_{i=1}^{N} A_i K_m(\alpha_i R)\, I_m(\alpha_i R)}{\sum_{i=1}^{N} A_i K_1(\alpha_i R)\, I_1(\alpha_i R)}.
(45)
Note that since $W_m$ is real, $\lambda_m \in \mathbb{R}$. A mode-m instability will occur if $\lambda_m > 0$, which recovers the result in [6] obtained using an Evans function approach. The possibility of such azimuthal instabilities is indicated on the solution branches shown in Figure 4 (and we would expect the emergence of solution branches with $D_m$ symmetry from the points marked by m). Interestingly we can see from (44) and (45) that the mode with $m = 1$ is neutrally stable. For a perturbation to a circular boundary of the form $\delta R(\theta,t) = \epsilon_m(t)\cos(m\theta)$, with $\epsilon_m = \epsilon e^{\lambda_m t}$ and $\epsilon \ll 1$, the perturbation of the normal velocity $v_n$ is
v_n = \epsilon_m \cos(m\theta)\, (-1 + W_m).
(46)
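The spectrum (44)-(45) is straightforward to evaluate numerically. The sketch below (ours; function names and parameter choices are illustrative, and scipy.special.iv/kv would replace the quadrature helpers) tabulates $\lambda_m$ for a spot of radius $R = 3$ with the Mexican hat (20), $\beta = 0.5$, $\gamma = 4$, and also checks the Graf-type identity $\int_0^{2\pi}\cos(m\theta)\, K_0(2z\sin(\theta/2))\,d\theta = 2\pi K_m(z) I_m(z)$ that converts the integral form of $W_m$ into the Bessel-product form. Note that $W_1 = 1$ identically, so $\lambda_1 = 0$: the translation mode is neutral.

```python
import math

def km(m, x, T=12.0, n=500):
    # K_m(x) = \int_0^\infty exp(-x cosh t) cosh(m t) dt
    h = T / n; tot = 0.0
    for k in range(n + 1):
        t = k * h
        f = math.exp(-x * math.cosh(t)) * math.cosh(m * t)
        tot += f if 0 < k < n else 0.5 * f
    return tot * h

def im(m, x, n=500):
    # I_m(x) = (1/pi) \int_0^pi exp(x cos q) cos(m q) dq
    h = math.pi / n; tot = 0.0
    for k in range(n + 1):
        q = k * h
        f = math.exp(x * math.cos(q)) * math.cos(m * q)
        tot += f if 0 < k < n else 0.5 * f
    return tot * h / math.pi

def lam(m, R, A, alpha):
    # Equations (44) and (45)
    num = sum(a * km(m, al * R) * im(m, al * R) for a, al in zip(A, alpha))
    den = sum(a * km(1, al * R) * im(1, al * R) for a, al in zip(A, alpha))
    return -1.0 + num / den

def graf_check(m, z, n=4000):
    # \int_0^{2 pi} cos(m theta) K_0(2 z sin(theta/2)) d theta, staggered midpoint rule
    tot = 0.0
    for k in range(n):
        th = 2.0 * math.pi * (k + 0.5) / n
        tot += math.cos(m * th) * km(0, 2.0 * z * math.sin(0.5 * th))
    return tot * 2.0 * math.pi / n

beta, gamma = 0.5, 4.0
c0 = 2.0 / (3.0 * math.pi)
A = [c0, -c0, -c0 / gamma, c0 / gamma]
alpha = [1.0, 2.0, beta, 2.0 * beta]
print([round(lam(m, 3.0, A, alpha), 4) for m in range(6)])
print(graf_check(3, 2.0), 2.0 * math.pi * km(3, 2.0) * im(3, 2.0))
```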
To calculate the Liapunov function for an unperturbed spot we evaluate (30) using
\frac{1}{\alpha_i^2} \int_0^{2\pi} d\theta \int_0^{2\pi} d\theta'\, R^2 \cos(\theta-\theta')\, K_0(\alpha_i R(\theta-\theta')) = \frac{4\pi^2}{\alpha_i^2} R^2 K_1(\alpha_i R)\, I_1(\alpha_i R) \equiv G_i.
(47)
Hence
E_{\mathrm{Liap.}} = \frac{1}{2} \sum_{i=1}^{N} A_i \left( G_i - \frac{2\pi}{\alpha_i^2} \pi R^2 \right) + h \pi R^2.
(48)

The zeros of the first derivative of E L i a p . with respect to R give the stationary circular solutions, including the trivial case R = 0 , as expected.
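This connection between (38) and (48) can be confirmed directly: choosing h so that a given R solves (38), a numerical difference of (48) should vanish at that radius and not elsewhere. The sketch below (our code, with quadrature Bessel stand-ins for scipy.special; the step sizes and test radii are our choices) does exactly this.

```python
import math

def k1(x, T=12.0, n=500):
    # K_1(x) = \int_0^\infty exp(-x cosh t) cosh t dt
    h = T / n; tot = 0.0
    for k in range(n + 1):
        t = k * h
        f = math.exp(-x * math.cosh(t)) * math.cosh(t)
        tot += f if 0 < k < n else 0.5 * f
    return tot * h

def ibessel(m, x, n=500):
    # I_m(x) = (1/pi) \int_0^pi exp(x cos q) cos(m q) dq
    h = math.pi / n; tot = 0.0
    for k in range(n + 1):
        q = k * h
        f = math.exp(x * math.cos(q)) * math.cos(m * q)
        tot += f if 0 < k < n else 0.5 * f
    return tot * h / math.pi

def h_of_R(R, A, alpha):
    # Equation (38)
    return 2.0 * math.pi * sum(
        a * (1.0 / al**2 - (R / al) * k1(al * R) * ibessel(0, al * R))
        for a, al in zip(A, alpha))

def E_liap(R, h_thr, A, alpha):
    # Equations (47) and (48)
    tot = 0.0
    for a, al in zip(A, alpha):
        Gi = (4.0 * math.pi**2 / al**2) * R * R * k1(al * R) * ibessel(1, al * R)
        tot += a * (Gi - (2.0 * math.pi / al**2) * math.pi * R * R)
    return 0.5 * tot + h_thr * math.pi * R * R

beta, gamma = 0.5, 4.0
c0 = 2.0 / (3.0 * math.pi)
A = [c0, -c0, -c0 / gamma, c0 / gamma]
alpha = [1.0, 2.0, beta, 2.0 * beta]

R0 = 3.0
h0 = h_of_R(R0, A, alpha)     # pick the threshold that makes R0 stationary
d = 0.01
dE0 = (E_liap(R0 + d, h0, A, alpha) - E_liap(R0 - d, h0, A, alpha)) / (2.0 * d)
dE4 = (E_liap(4.0 + d, h0, A, alpha) - E_liap(4.0 - d, h0, A, alpha)) / (2.0 * d)
print(h0, dE0, dE4)           # dE0 ~ 0 at the stationary radius; dE4 is not small
```

The cancellation at $R_0$ relies on the Wronskian $I_0(x)K_1(x) + I_1(x)K_0(x) = 1/x$, which links the derivative of (48) back to the existence condition (38).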

5 Rings, fronts and stripes

In this section we show how to treat other simple interface shapes, namely rings, fronts and stripes, and determine their stability. We recover previous results in [6] for rings (obtained with an Evans function method), whilst calculations for the other structures are shown to be straight-forward using the interface dynamics approach.

5.1 Rings

Rings can be considered as the difference of two spots, one with radius $R_1$ and the other with radius $R_2 > R_1$. Introducing $\psi(r,R) = 2\pi R \sum_i A_i L_i(r,R)$, where $L_i(r,R)$ is given by the right hand side of (42), we have that $u(r) = \psi(r,R_2) - \psi(r,R_1)$. Enforcing the threshold conditions $u(R_2) = h = u(R_1)$ gives a pair of equations that determine $(R_1, R_2)$. To establish stability the outer contour is perturbed exactly as in the previous section: $R_2(\theta) = R_2 + a e^{\lambda t}\cos(m\theta)$, for some small amplitude a. For the inner contour we similarly write $R_1(\theta) = R_1 + b e^{\lambda t}\cos(m\theta)$. We now generate $\delta u(t)$ on each of the two boundaries and equate these to zero to generate two equations for the pair of unknown amplitudes $(a,b)$. Demanding that this pair of equations has a non-trivial solution generates an equation for λ in the form $E_m(\lambda) = 0$ where $E_m(\lambda) = |(1+\lambda)I_2 - A_m(\lambda)|$ and
[A_m(\lambda)]_{\mu\nu} = \frac{R_\nu}{|u'(R_\nu)|} \int_0^{2\pi} d\theta\, \cos(m\theta)\, w\!\left( \sqrt{R_\mu^2 + R_\nu^2 - 2 R_\mu R_\nu \cos\theta} \right) = \frac{2\pi R_\nu}{|u'(R_\nu)|} \sum_{i=1}^{N} A_i \left[ K_m(\alpha_i R_\mu)\, I_m(\alpha_i R_\nu)\, H(R_\mu - R_\nu) + K_m(\alpha_i R_\nu)\, I_m(\alpha_i R_\mu)\, H(R_\nu - R_\mu) \right],
(49)

for $\mu,\nu=1,2$.
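The Bessel-function reduction in (49) rests on Graf's addition theorem, $\int_0^{2\pi}\cos(m\theta)K_0(\alpha\sqrt{R_\mu^2+R_\nu^2-2R_\mu R_\nu\cos\theta})\,\mathrm{d}\theta=2\pi K_m(\alpha R_>)I_m(\alpha R_<)$. A quick numerical check of this identity, using the standard integral representations of $I_m$ and $K_m$ (a sketch; the values m = 2, α = 1, R_μ = 2, R_ν = 1 are arbitrary choices):

```python
import math

def bessel_i(m, x, N=1000):
    # I_m(x) = (1/pi) * integral_0^pi exp(x cos t) cos(m t) dt (trapezoidal rule)
    h = math.pi / N
    s = 0.5 * (math.exp(x) + math.exp(-x) * math.cos(m * math.pi))
    s += sum(math.exp(x * math.cos(j * h)) * math.cos(m * j * h) for j in range(1, N))
    return s * h / math.pi

def bessel_k(m, x, T=15.0, N=1200):
    # K_m(x) = integral_0^inf exp(-x cosh t) cosh(m t) dt, truncated at t = T
    h = T / N
    s = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(T)) * math.cosh(m * T))
    s += sum(math.exp(-x * math.cosh(j * h)) * math.cosh(m * j * h) for j in range(1, N))
    return s * h

def ring_integral(m, alpha, R_mu, R_nu, N=400):
    # left hand side of the identity behind (49), for a single K0 kernel
    # (periodic trapezoidal rule; no singularity since R_mu != R_nu)
    h = 2 * math.pi / N
    s = 0.0
    for j in range(N):
        th = j * h
        d = math.sqrt(R_mu**2 + R_nu**2 - 2 * R_mu * R_nu * math.cos(th))
        s += math.cos(m * th) * bessel_k(0, alpha * d)
    return s * h

m, alpha, R_mu, R_nu = 2, 1.0, 2.0, 1.0
lhs = ring_integral(m, alpha, R_mu, R_nu)
rhs = 2 * math.pi * bessel_k(m, alpha * max(R_mu, R_nu)) * bessel_i(m, alpha * min(R_mu, R_nu))
```

The two sides agree to quadrature accuracy, which is what allows the matrix $A_m$ to be assembled without performing any contour integrals.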

At a bifurcation point defined by $\operatorname{Re}\lambda_m=0$ we expect a ring to split into m spots. In Figure 5 we plot solution branches for ring solutions as a function of h for the Mexican-hat model defined by (20), and flag the types of instability that can occur. Of the two solution branches the lower one is unstable with respect to radial perturbations, whereas the upper branch is subject to azimuthal destabilizations. In Figure 6 we show a two-dimensional plot of an unstable ring solution, and the emergent structure of five bumps seen beyond instability, consistent with the predictions of our linear stability analysis.
Fig. 5

Existence and stability of ring solutions with inner radius $R_1$, with the indicated γ and β = 0.5 in (20). The lower branch is always unstable (dotted lines). On the upper branch the stable ring (solid lines) can lose stability with dominant mode $m=2$ (solid circles), $m=3$ (triangles), $m=4$ (squares), etc. for decreasing h.

Fig. 6

Direct numerical simulation of u for a ring solution ($R_1=7$, $R_2=8.629$) perturbed with a linear combination of modes $0,1,2,\ldots,8$. For the given parameters β = 0.5, γ = 3 and Heaviside threshold h = 0.0549, mode $m=5$ is most unstable, cf. Figure 5. See also the video in Additional file 4.

5.2 Fronts

Calculating stationary planar fronts is straightforward, since the normal vector $\mathbf{n}(s)$ is orthogonal to the displacement vector $\mathbf{r}(s)-\mathbf{r}(s')$ and the line integral on the right hand side of (33) is zero. Hence we have the existence condition $h=\sum_iA_i\pi/\alpha_i^2$, and we note that h lies exactly halfway between the two possible steady states of u. To investigate the properties of a planar traveling front of speed c it is informative to treat the simple case $w(x)=K_0(x)/(2\pi)$. We then have that $u_t=-h+1/2$ and
$$|\nabla u|=\int_0^\infty\mathrm{d}s\,e^{-s}\int_{-\infty}^\infty\mathrm{d}y\,\frac{1}{2\pi}K_0\Bigl(\sqrt{y^2+(cs)^2}\Bigr)=\frac{1}{2}\frac{1}{1+c},\qquad c>0.$$
(50)
Hence using (22), the normal velocity is given by
$$c=\frac{1-2h}{2h}.$$
(51)
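The algebra connecting (50) and (51) is a one-line self-consistency check in code (a minimal sketch):

```python
def front_speed(h):
    # Eq. (51): c = (1 - 2h)/(2h), valid for 0 < h < 1/2
    return (1 - 2 * h) / (2 * h)

def residual(h, c):
    # the normal-velocity balance behind (51): c = u_t/|grad u|,
    # with u_t = 1/2 - h and |grad u| = 1/(2(1+c)) from (50)
    return (0.5 - h) * 2 * (1 + c) - c
```

For any admissible threshold the speed returned by `front_speed` makes the residual vanish, and the front is stationary precisely at the halfway threshold h = 1/4 ... 1/2 scaling aside, e.g. h = 1/4 gives c = 1.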

To determine stability we consider a front along $y=0$ and write the perturbed front as $\hat{y}=\hat{y}(x,t)$.

For simplicity we shall focus on a stationary front with $c=0$. In this case we may construct $\delta u(t)$ as
$$\delta u(t)=\int_0^\infty\mathrm{d}s\,e^{-s}\int_{-\infty}^\infty\mathrm{d}x'\,\bigl[\hat{y}(x,t)-\hat{y}(x-x',t-s)\bigr]w(x').$$
(52)
The equation $\delta u(t)=0$ (for all x) has solutions of the form $\hat{y}=\epsilon\cos(kx)e^{\lambda t}$, where
$$\lambda=-1+\frac{\hat{w}(k)}{\hat{w}(0)},\qquad \hat{w}(k)=\int_{-\infty}^\infty\mathrm{d}x\,w(x)\cos(kx).$$
(53)
For a modified Bessel function one has
$$\int_{-\infty}^\infty\mathrm{d}x\,K_0(\alpha x)\cos(kx)=\frac{\pi}{\sqrt{\alpha^2+k^2}}.$$
(54)
Hence for the simple example above, the stationary planar front is stable, since $\hat{w}(k)\leq\hat{w}(0)$. However, for a Mexican-hat function it is possible that $\hat{w}(k)>\hat{w}(0)$ for some band of wave numbers, and we would expect instabilities in this case. Figure 7 shows $\lambda=\lambda(k)$ for a stationary front with the Mexican-hat function (20), from which the critical band of wave numbers can easily be read off.
Fig. 7

Spectra for the Mexican-hat function (20) with $\beta=1/2$ and $\gamma=4$: for a stationary front (blue line, $2h=1-1/(\gamma\beta^2)$), and varicose (green line) and sinuous (red line) stripes of width $D=7$, respectively.
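The front spectrum is cheap to reproduce from (53) and (54). The sketch below assumes, for illustration only, the two-Bessel Mexican-hat form $w(r)=K_0(r)-K_0(\beta r)/\gamma$ with the Figure 7 parameters (the overall normalization of w cancels in the ratio $\hat{w}(k)/\hat{w}(0)$):

```python
import math

beta, gamma = 0.5, 4.0  # Fig. 7 parameters; kernel form is an assumption here

def w_hat(k):
    # (54) applied term by term: the cosine transform of K0(a x) is pi/sqrt(a^2 + k^2)
    return math.pi * (1 / math.sqrt(1 + k * k)
                      - (1 / gamma) / math.sqrt(beta * beta + k * k))

def spectrum(k):
    # (53): lambda(k) = -1 + w_hat(k)/w_hat(0)
    return -1 + w_hat(k) / w_hat(0)
```

For these parameters λ(0) = 0 (the translation mode) and λ(k) > 0 on a finite band of small wave numbers, so the stationary front is unstable, as in Figure 7.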

5.3 Stripes

A stripe may be considered as the active area in between two interacting stationary fronts. For a stripe whose two interfaces lie along $y=y_1$ and $y=y_2$, we have
$$u(x,y)=\int_{-\infty}^\infty\mathrm{d}x'\int_{y_1-y}^{y_2-y}\mathrm{d}y'\,w\Bigl(\sqrt{(x')^2+(y')^2}\Bigr).$$
(55)
For a stripe of constant width D, such that $y_2-y_1=D$ for all x, the existence condition $u(x,y_1)=h=u(x,y_1+D)$ takes the simple form
$$h=\int_{-\infty}^\infty\mathrm{d}x'\int_0^D\mathrm{d}y'\,w\Bigl(\sqrt{x'^2+y'^2}\Bigr)=\sum_iA_i\frac{\pi}{\alpha_i^2}\bigl[1-e^{-\alpha_iD}\bigr].$$
(56)
An example of $D=D(h)$ is shown in Figure 8 for a Mexican-hat function.
Fig. 8

Stripe widths D for different values of γ with a Mexican hat interaction given by (20) and β = 0.5 .
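Since (56) is a scalar condition that is monotone on the wide-stripe branch, D(h) can be found by bisection. A sketch, again assuming the two-Bessel kernel $w(r)=(K_0(r)-K_0(\beta r)/\gamma)/(2\pi)$ for illustration (Figure 8's exact kernel is defined by (20) earlier in the article):

```python
import math

beta, gamma = 0.5, 4.0  # assumed parameters; note gamma*beta**2 = 1 here

def h_of_D(D):
    # existence condition (56) for the assumed kernel
    return (0.5 * (1 - math.exp(-D))
            - (1 / (2 * gamma * beta**2)) * (1 - math.exp(-beta * D)))

def solve_D(h, lo=None, hi=50.0, iters=200):
    # bisection on the wide-stripe branch, where h_of_D is decreasing
    if lo is None:
        lo = 2 * math.log(2)  # turning point of h_of_D for these parameters
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if h_of_D(mid) > h:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For thresholds below the maximum of h_of_D there are two stripe widths, mirroring the two branches of spot and ring solutions.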

To determine stability we consider perturbations on each of the two stripe boundaries and construct δ u i on each as
$$\delta u_i(t)=\int_0^\infty\mathrm{d}s\,e^{-s}\int_{-\infty}^\infty\mathrm{d}x'\left\{\int_{\hat{y}_1(x-x',t-s)-\hat{y}_i(x,t)}^{\hat{y}_2(x-x',t-s)-\hat{y}_i(x,t)}-\int_0^D\right\}\mathrm{d}y'\,w\Bigl(\sqrt{(x')^2+(y')^2}\Bigr),$$
(57)
for $i=1,2$. When considering small perturbations there is some ambiguity in expanding (57) depending on the relative size of the perturbations on each boundary. However, this at most amounts to a sign difference, which means that we may expand (57) to obtain
$$\delta u_i(t)=\int_0^\infty\mathrm{d}s\,e^{-s}\int_{-\infty}^\infty\mathrm{d}x'\,w\Bigl(\sqrt{(x')^2+D^2}\Bigr)\bigl[\hat{y}_2(x-x',t-s)-\hat{y}_i(x,t)\bigr]\pm\int_0^\infty\mathrm{d}s\,e^{-s}\int_{-\infty}^\infty\mathrm{d}x'\,w(x')\bigl[\hat{y}_1(x-x',t-s)-\hat{y}_i(x,t)\bigr].$$
(58)
The equations $\delta u_i=0$ admit solutions of the form $\hat{y}_i=y_i+\epsilon_i\cos(kx)e^{\lambda t}$. For equal amplitude perturbations, $|\epsilon_1|=|\epsilon_2|=\epsilon$, there are two branches of eigenvalues given by $\lambda=\lambda_\pm$, where
$$\lambda_\pm=-1+\frac{F_\pm(k,D)}{F_+(0,D)}.$$
(59)
Here $F_\pm(k,D)=\hat{w}(k,0)\mp\hat{w}(k,D)$ and
$$\hat{w}(k,D)=\int_{-\infty}^\infty\mathrm{d}x\,w\Bigl(\sqrt{x^2+D^2}\Bigr)\cos(kx)=\pi\sum_iA_i\frac{e^{-D\sqrt{\alpha_i^2+k^2}}}{\sqrt{\alpha_i^2+k^2}}.$$
(60)
The branch with $\lambda=\lambda_+$ corresponds to sinuous perturbations with $(\epsilon_1,\epsilon_2)=\epsilon(1,1)$ and the branch with $\lambda=\lambda_-$ corresponds to varicose perturbations with $(\epsilon_1,\epsilon_2)=\epsilon(1,-1)$. Since $\lambda_+>\lambda_-$, sinuous instabilities dominate over varicose ones. Note that as $D\to\infty$ we recover the existence and stability results for a stationary front, as expected. Examples of sinuous and varicose instabilities (as predicted from our analysis) are shown in Figure 9.
Fig. 9

Direct numerical simulation of u for varicose (rows A) and sinuous (rows B) instabilities $\pm\cos(ky)$ with $k=0.44272$ of a stripe of width $D=6.08$, for parameters β = 0.5, γ = 4.0 and Heaviside threshold h = 0.03. See also the video in Additional file 5.
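The two branches in (59) are easy to scan numerically. The sketch below again assumes $w(r)=(K_0(r)-K_0(\beta r)/\gamma)/(2\pi)$ for illustration, with a stripe width D = 6 chosen arbitrarily; $\hat{w}(k,D)$ is evaluated from (60):

```python
import math

beta, gamma, D = 0.5, 4.0, 6.0  # assumed kernel parameters and stripe width

def w_hat(k, d):
    # (60) for the assumed kernel; pi*A_1 = 1/2 and pi*A_2 = -1/(2*gamma)
    return (0.5 * math.exp(-d * math.sqrt(1 + k * k)) / math.sqrt(1 + k * k)
            - (0.5 / gamma) * math.exp(-d * math.sqrt(beta**2 + k * k))
              / math.sqrt(beta**2 + k * k))

def lam(k, sign):
    # (59): lambda_pm = -1 + F_pm(k, D)/F_+(0, D), F_pm = w_hat(k,0) -/+ w_hat(k,D);
    # sign = +1 gives the sinuous branch, sign = -1 the varicose branch
    F = w_hat(k, 0.0) - sign * w_hat(k, D)
    F0 = w_hat(0.0, 0.0) - w_hat(0.0, D)
    return -1 + F / F0
```

For these values the k = 0 sinuous mode sits exactly at zero (rigid translation), the k = 0 varicose mode (a uniform width change) is stable, and a band of finite-k sinuous modes is unstable and dominates the varicose branch.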

6 Neural field models with linear adaptation

In real cortical tissue there is an abundance of metabolic processes whose combined effect is to modulate neuronal response. It is convenient to think of these processes in terms of local feedback mechanisms that modulate synaptic currents. For example, it is known that a model of synaptic depression can destabilize a spot in favor of a traveling pulse [25]. Here we consider a simple linear model of adaptation that is known to lead to instabilities of localized structures [26]. In this case the original neural field model is modified according to
$$\frac{1}{\alpha}\frac{\partial u}{\partial t}=-u+\psi-ga,\qquad \frac{\partial a}{\partial t}=u-a,$$
(61)
with $g>0$. Here ψ is the second term on the right hand side of (3) and (1) in one and two dimensions, respectively. The linearity of the equations of motion means that we may obtain the trajectory for $(u,a)$ in closed form as
$$u(\cdot,t)=\int_{-\infty}^t\mathrm{d}s\,\eta(t-s)\psi(\cdot,s),\qquad a(\cdot,t)=\int_{-\infty}^t\mathrm{d}s\,e^{-(t-s)}u(\cdot,s),$$
(62)
where
$$\eta(t)=\frac{\alpha}{\lambda_--\lambda_+}\bigl\{(1-\lambda_+)e^{-\lambda_+t}-(1-\lambda_-)e^{-\lambda_-t}\bigr\}.$$
(63)
Here
$$\lambda_\pm=\frac{1+\alpha\pm\sqrt{(1+\alpha)^2-4\alpha(1+g)}}{2}.$$
(64)
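The rates (64) and the kernel (63) are straightforward to code; a useful sanity check is that the Laplace transform of (63) reduces to the compact form (76), whose value at λ = 0 is the steady-state gain 1/(1+g). A minimal sketch:

```python
import cmath

def rates(alpha, g):
    # lambda_pm of (64); complex when 4*alpha*(1+g) > (1+alpha)**2
    r = cmath.sqrt((1 + alpha)**2 - 4 * alpha * (1 + g))
    return (1 + alpha + r) / 2, (1 + alpha - r) / 2

def eta_tilde(lam, alpha, g):
    # Laplace transform (76) of the kernel eta in (63)
    lp, lm = rates(alpha, g)
    return alpha * (1 + lam) / ((lam + lp) * (lam + lm))
```

By construction the rates satisfy $\lambda_++\lambda_-=1+\alpha$ and $\lambda_+\lambda_-=\alpha(1+g)$, and they form a complex pair (an oscillatory kernel) when the discriminant is negative.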
As an example let us compute the speed ($c>0$) and stability of a front in the one-dimensional model discussed in Section 2 with the inclusion of a linear adaptation current. In this case we have that
$$\left.\frac{\partial u}{\partial t}\right|_{x=x_0(t)}=ch/\sigma,$$
(65)
$$\left.\frac{\partial u}{\partial x}\right|_{x=x_0(t)}=-\frac{1}{c}\int_0^\infty\mathrm{d}s\,\eta(s/c)w(s)=-\frac{\alpha}{c(\lambda_--\lambda_+)}\bigl[(1-\lambda_+)\tilde{w}(\lambda_+/c)-(1-\lambda_-)\tilde{w}(\lambda_-/c)\bigr].$$
(66)
Note that to calculate $u_t$ we have used the result that $h=u(ct,t)=\int_0^\infty\mathrm{d}s\,\eta(s)\int_{cs}^\infty\mathrm{d}y\,w(y)$. Hence, from (5), the speed is determined implicitly by
$$h=\frac{\alpha}{2}\frac{c/\sigma+1}{(c/\sigma+\lambda_+)(c/\sigma+\lambda_-)},$$
(67)
which may be rearranged to give
$$\frac{c}{\sigma}=\frac{1}{2}\left[\frac{\alpha}{2h}-1-\alpha\pm\sqrt{\left(1+\alpha-\frac{\alpha}{2h}\right)^2-4\alpha\left(1+g-\frac{1}{2h}\right)}\right].$$
(68)
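The closed form (68) can be cross-checked against the implicit condition (67); σ is scaled out by working with x = c/σ (a minimal sketch):

```python
import math

def speeds(alpha, g, h):
    # the two roots of (68), i.e. x = c/sigma
    A = 1 + alpha - alpha / (2 * h)
    disc = A * A - 4 * alpha * (1 + g - 1 / (2 * h))
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return [(-A + r) / 2, (-A - r) / 2]

def residual(x, alpha, g, h):
    # implicit speed condition (67), using
    # (x + lambda_+)(x + lambda_-) = x**2 + (1 + alpha)*x + alpha*(1 + g)
    return h - 0.5 * alpha * (x + 1) / (x * x + (1 + alpha) * x + alpha * (1 + g))
```

Each root of (68) annihilates the residual of (67) to machine precision; only roots with c > 0 correspond to the rightward-moving fronts considered here.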
The eigenvalue equation for stability can also be calculated, generalizing the analysis of Section 2, as $E(\lambda)=1-H(\lambda)/H(0)$, where
$$H(\lambda)=\frac{\alpha}{\lambda_--\lambda_+}\bigl\{(1-\lambda_+)\tilde{w}\bigl((\lambda+\lambda_+)/c\bigr)-(1-\lambda_-)\tilde{w}\bigl((\lambda+\lambda_-)/c\bigr)\bigr\}.$$
(69)
On the branch with $c=0$, where $2h(1+g)=1$, defining a stationary front, we find that
$$E(\lambda)=\frac{\lambda(\lambda+\lambda_++\lambda_--\lambda_+\lambda_-)}{(\lambda+\lambda_+)(\lambda+\lambda_-)},$$
(70)

which has zeros when $\lambda=0$ and $\lambda=\lambda_+\lambda_--(\lambda_++\lambda_-)=\alpha g-1$. Hence, the stationary front changes from stable to unstable as α is increased through $\alpha_c=1/g$.

In two dimensions it is straightforward to construct a stationary spot of radius R. This radius is determined by (38) under the replacement $h\to h(1+g)$, so that
$$h(1+g)=2\pi R\sum_{i=1}^NA_iK_0(\alpha_iR)I_1(\alpha_iR)/\alpha_i\equiv F(R).$$
(71)
Here ψ may be constructed explicitly off the boundary, and is given by Equation (41), so that $u(r)=\psi(r)/(1+g)$. A saddle-node bifurcation of stationary spots occurs at $R=R_c$, where $F'(R_c)=0$. Hence, in the $(h,g)$ plane stationary solutions only exist for $h<F(R_c)/(1+g)$. Under variation in α we expect the emergence of a drifting spot. Beyond a drift instability, we expect to be able to find traveling spots that move in some direction $\mathbf{c}$ with constant speed $c=|\mathbf{c}|$. These can be constructed as stationary solutions in a co-moving frame $\xi=\mathbf{x}+\mathbf{c}t$, and satisfy
$$\frac{1}{\alpha}\mathbf{c}\cdot\nabla_\xi u=-u+\psi-ga,\qquad \mathbf{c}\cdot\nabla_\xi a=u-a.$$
(72)
We may write the velocity in terms of local co-ordinates on the moving interface as $\mathbf{c}=c_n\mathbf{n}+c_t\mathbf{t}$, where $c_n(s)=\mathbf{c}\cdot\mathbf{n}(s)$ is the normal velocity and $c_t(s)=\mathbf{c}\cdot\mathbf{t}(s)$ the tangential velocity at a point on the interface. Taking the cross product of $\mathbf{c}$ and $\mathbf{t}$ (and using $\mathbf{n}\times\mathbf{t}=1$) shows that $c_n=\mathbf{c}\times\mathbf{t}$. Hence, the condition for stationary propagation, with $\mathbf{c}=\dot{\mathbf{r}}$, is
$$\mathbf{n}(s)\cdot\dot{\mathbf{r}}(s)=\mathbf{c}\times\frac{\mathrm{d}\mathbf{r}(s)}{\mathrm{d}s}\Big/\left|\frac{\mathrm{d}\mathbf{r}(s)}{\mathrm{d}s}\right|,\qquad u(\xi)\big|_{\xi=\mathbf{r}}=h.$$
(73)
In general this is a hard equation to solve in closed form. However, to obtain an estimate of the speed and shape of a spot beyond a point of instability it is enough to consider a weak distortion of a traveling circular wave [27]. Choosing $\mathbf{c}=c(1,0)$ and writing $\xi=(\xi_1,\xi_2)$, and assuming that ψ is rotationally symmetric, means that we may construct a solution in the form
$$u(\xi_1,\xi_2)=\frac{1}{c}\int_{-\infty}^{\xi_1}\mathrm{d}y\,\eta\bigl((\xi_1-y)/c\bigr)\psi\Bigl(\sqrt{y^2+\xi_2^2}\Bigr).$$
(74)
We note that the threshold condition $u=h$ for a circular spot ($\xi_1^2+\xi_2^2=R^2$) can only strictly be met for the case $c=0$, since the right hand side of (74) depends on the plane polar angle through $\xi_1=R\cos\theta$. For this case we may construct the equation $\delta u(t)=0$ to determine the eigenvalues $\lambda_m$ that occur in perturbations of the form $\delta R(\theta,t)=\epsilon e^{\lambda_mt}\cos(m\theta)$ as solutions to $E_m(\lambda)=0$, where
$$E_m(\lambda)=1-\tilde{\eta}(\lambda)(1+g)W_m,$$
(75)
where $W_m$ is given by (44) and the Laplace transform of η is easily calculated as
$$\tilde{\eta}(\lambda)=\frac{\alpha(1+\lambda)}{(\lambda+\lambda_+)(\lambda+\lambda_-)}.$$
(76)
The eigenvalues for $m=1$ are determined by $\tilde{\eta}(\lambda)=1/(1+g)$, which has two solutions: $\lambda=0$ and $\lambda=\alpha g-1$. Hence this mode becomes unstable as g increases through $1/\alpha$. It is also possible that a breathing instability may arise for the mode with $m=0$. Note that another way to generate breathing solutions is to include localized inputs [3, 4], breaking the homogeneous structure of the network. Substitution of $\lambda=i\omega$ into (75) gives the condition for this instability as:
$$W_0=\frac{1+\alpha}{\alpha(1+g)},\qquad g\geq\frac{1}{\alpha},$$
(77)
with emergent frequency $\omega=\sqrt{\alpha g-1}$. For $m\geq2$ splitting instabilities can be determined by setting $\lambda=0$ in (75), giving the conditions $W_m=1$. An example of a breather arising as an instability of a spot is shown in Figure 10 (and numerical simulations confirm the predicted value of the emergent frequency around the bifurcation point).
Fig. 10

Breathing instability from direct numerical simulation with parameters α = 5 , β = 0.5 , γ = 4 , coupling strength g = 0.5 and Heaviside threshold h = 0.12 / ( 1 + g ) . Initial data was constructed from a stationary spot solution by modifying the adaptation variable to a top-hat shape. Top and bottom rows: snapshots of activation u and adaptation a, respectively, at times indicated by orange lines in the middle row. Middle row: above threshold radii as function of time. See also the video in Additional file 6.
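The algebra leading to (77) can be verified directly: at $\lambda=i\omega$ with $\omega=\sqrt{\alpha g-1}$ and $W_0=(1+\alpha)/(\alpha(1+g))$, the characteristic equation (75) is satisfied exactly. A sketch with the Figure 10 values α = 5, g = 0.5:

```python
import cmath
import math

alpha, g = 5.0, 0.5  # Fig. 10 parameters; note alpha*g - 1 = 1.5 > 0
lp = (1 + alpha + cmath.sqrt((1 + alpha)**2 - 4 * alpha * (1 + g))) / 2
lm = (1 + alpha) - lp                  # lambda_+ + lambda_- = 1 + alpha
W0 = (1 + alpha) / (alpha * (1 + g))   # condition (77)
omega = math.sqrt(alpha * g - 1)       # emergent frequency
eta = alpha * (1 + 1j * omega) / ((1j * omega + lp) * (1j * omega + lm))  # (76)
Em = 1 - eta * (1 + g) * W0            # characteristic function (75) at lambda = i*omega
```

Here Em vanishes identically, and the imaginary part of $\tilde{\eta}(i\omega)$ is zero precisely because ω takes the predicted value.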

Anticipating a small-c discussion, we Laplace transform (74) in the $\xi_1$ variable to obtain
$$\tilde{u}(\lambda,\xi_2)=\frac{1}{1+g}\left(1+\frac{1}{\alpha(1+g)}\bigl[(\alpha g-1)c\lambda-\alpha(c\lambda)^2+(c\lambda)^3+\cdots\bigr]\right)\tilde{\psi}(\lambda,\xi_2),$$
(78)
which we then inverse transform to obtain
$$u(\xi_1,\xi_2)=\frac{1}{1+g}\left\{1+\frac{1}{\alpha(1+g)}\left[c(\alpha g-1)\frac{\partial}{\partial\xi_1}-\alpha c^2\frac{\partial^2}{\partial\xi_1^2}+c^3\frac{\partial^3}{\partial\xi_1^3}+\cdots\right]\right\}\psi(|\xi|).$$
(79)
At the point where $g=1/\alpha$, the shape of the spot deviates from circular with an amplitude that depends on quadratic and higher powers of c. Thus not only is there a breathing bifurcation at $g=1/\alpha$, but also a drift instability to a traveling spot whose shape, determined from (79) by $u(\mathbf{r})=h$, can be written in the form $\mathbf{r}(\theta)=R(\theta)(\cos\theta,\sin\theta)$ with
$$R(\theta)=R+\sum_{m\geq2}c^ma_m\cos m\theta.$$
(80)

Here R is determined by (71). A further weakly nonlinear analysis to understand the competition between drifting and breathing at $g=1/\alpha$ is beyond the scope of this article.

For $g>1/\alpha$ and dropping terms of $O(c^2)$ in (79), we see that there are solutions to $u(\mathbf{r})=h$ of the form $R(\theta)=R+a_1c\cos\theta$, where $a_1=(1-\alpha g)/(\alpha(1+g)^2)<0$. The amplitudes of higher order modes may be constructed in a similar fashion, i.e., by balancing terms at each order in c in $u(\mathbf{r})=h$ using (79) and (80). However, it is not our intention to pursue these lengthy calculations here. Rather, to give a feel for the shape of a traveling spot, we plot in Figure 11D the level set where $u(\xi_1,\xi_2)=h$ using (79), including terms up to $c^3$. This nicely illustrates that spots contract in the direction of propagation and widen in the orthogonal direction, and provides a theoretical explanation for the shape of traveling spots recently reported in [28]. With the aid of direct numerical simulations we have also explored the scattering properties of traveling spots. In common with previous numerical studies of planar neural fields with some form of adaptation, we find that such structures can behave as quasi-particles in the sense that they can scatter like dissipative solitons [29]. An example of such scattering is shown in Figure 11, where a repulsive interaction pushes the spots away from each other if they approach too closely.
Fig. 11

Collision of two pulses from direct numerical simulation with parameters as in Figure 10. Columns A and C show u and a, respectively. Column B shows the $u=h$ interfaces, velocities (arrows to scale but enlarged by a factor of four), and dotted centerlines as visual guides. See also the video in Additional file 7. Column D predicts to $O(c^3)$ the shape of a traveling pulse for different c.

7 Discussion

In this article we have formulated an interface dynamics for planar neural fields with a Heaviside firing rate. This has allowed us to (i) develop an economical computational framework for the evolution of spatiotemporal patterns, and (ii) perform linear stability analyses of localized structures. For simplicity we have focused on single population models. However, the extension to population models that treat the dynamics of both excitatory and inhibitory populations is straightforward. Perhaps a more interesting extension is to consider neural field models that incorporate feature selectivity such as that observed in visual cortex for orientation [30], spatial frequency [31] and texture [32]. Denoting this feature label by χ, all of these models are expressed in terms of some non-local integro-differential equation for $u(\mathbf{r},\chi,t)$. We note that the notion of an interface is still well defined and that the level set condition $u(\mathbf{r},\chi,t)=h$ gives a constraint between local geometrical data and features. As an alternative to simulating the neural field models, an interface approach (incorporating feature space) may be more useful for understanding how local data can be integrated into global geometrical structures, as advocated in the neurogeometry framework of Petitot [33] (say for understanding contour completion in models of primary visual cortex, where the feature space is orientation). The extension of this work to treat sigmoidal firing rates remains an open challenge. However, recent techniques for dealing with a certain class of firing rate functions in one spatial dimension, which includes smooth firing rate functions connecting zero to one, are likely to be useful in this regard [34].
We have included an adaptive current in the standard Amari model here, but it would be informative to develop interface treatments for other forms of modulation, e.g., arising from threshold accommodation [35] or synaptic depression [5], as well as the inclusion of axonal delays [36]. These models can readily support spiral wave activity, and it would be interesting to see if an interface description, possibly adapting techniques by Hagberg and Meron [37], could shed light on their properties. Another possible extension of the work in this article, motivated by our numerical results for scattering spots, is to develop an interface theory of quasi-particle interactions along the lines for reaction-diffusion models described in [38, 39], using ideas developed by Bressloff [40] and Venkov [41] for weakly interacting systems in one spatial dimension. All of the above are topics of ongoing research and will be reported upon elsewhere.

Appendix: Numerical schemes

A.1 Fourier technique for neural field evolution

Because of its non-local character, the model described by (1), or its extension (61), is challenging to solve with conventional numerical methods. However, exploiting the convolution structure of (1) allows one to write the Fourier transform of R 2 d x w ( | x x | ) f ( x , t ) as a product. Here f ( x , t ) = H ( u ( x , t ) h ) and can be taken either as a Heaviside or a more general sigmoidal form. Introducing a spectral wave-vector k then this product is simply w ( | k | ) f ( k ) , where functions with arguments k denote two-dimensional spatial Fourier transforms. We may evaluate w ( | k | ) f ( k ) directly, at every time step, using fast Fourier transforms (FFTs). Note that w ( | k | ) can be pre-computed, by FFT or here even analytically, so that the procedure iterated over time amounts to computing f ( k ) by FFT, followed by a (complex) multiplication with w ( | k | ) , and finally an inverse FFT to obtain the result of the integral. We wish to employ a parallel compute cluster for rapid computation over large grids, and hence use the free software package FFTW 3.3 [42], which includes a parallel MPI-C version. Note that the use of Fourier methods implies that the discretization grid has periodic boundaries, or in other words, the solution is effectively computed on a torus. We use a grid spacing of about 0.03 or better in our computations here.
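The procedure described above can be condensed to a few lines. The sketch below uses plain NumPy rather than the parallel FFTW/MPI setup, with a coarse grid, forward Euler in place of the adaptive Runge-Kutta stepping described below, and an assumed kernel $w(r)=(K_0(r)-K_0(\beta r)/\gamma)/(2\pi)$ whose transform is known in closed form, since the 2D Fourier transform of $K_0(ar)$ is $2\pi/(a^2+|\mathbf{k}|^2)$:

```python
import numpy as np

# periodic grid: the solution lives on a torus, as noted in the text
L, M = 40.0, 128
dx = L / M
k = 2 * np.pi * np.fft.fftfreq(M, d=dx)
KX, KY = np.meshgrid(k, k)
K2 = KX**2 + KY**2

beta, gamma = 0.5, 3.0  # Fig. 6 parameters; kernel form is an assumption here
w_hat = 1.0 / (1.0 + K2) - 1.0 / (gamma * (beta**2 + K2))  # analytic, pre-computed

def convolve(f):
    # w * f evaluated by FFT: the core of the iteration described above
    return np.real(np.fft.ifft2(w_hat * np.fft.fft2(f)))

def step(u, h, dt):
    # forward-Euler version of (1): u_t = -u + w * H(u - h)
    return u + dt * (-u + convolve(np.where(u > h, 1.0, 0.0)))
```

A quick consistency check: convolving the all-ones field returns the constant $\int w=1-1/(\gamma\beta^2)$, since only the zero wave number contributes.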

In order to compute the time evolution, we use DOPRI5 [43], a well-known implementation of an explicit Dormand-Prince (Runge-Kutta) method of order 5(4) with step size control and dense output of the order 4. A version in C due to J. Colinge is available on the web thanks to E. Hairer. However, in our case we perform parallel computations, so we have adapted this code accordingly using MPI-C. In particular, we now consider the maximum error across all compute nodes and all variables, rather than the mean error over local variables, and communicate the resulting time step adaptation over the cluster to achieve a unified evolution of the entire distributed grid. Numerical tolerances are set to 10 7 ( | y i | + 1 ) where y i represents all variables, i.e., u and potentially a at all grid points.

This numerical method is robust against effects of the underlying grid. This is due to the employed Fourier method, which performs the spatial convolution as a multiplication in Fourier space. The discrete Fourier transform used to transfer this calculation to Fourier space calculates a trigonometric interpolation polynomial, and the influence of the grid is effectively smoothed by implicit interpolation.

Computing an evolution as shown in Additional File 1 takes several hours on the 32 to 64 Infiniband-connected compute nodes we have typically employed, and yields many gigabytes of data. We note that computation with a sigmoidal firing rate instead of the Heaviside one is over an order of magnitude faster, reflecting the numerical difficulty of dealing with sharp edges.

A.2 Interface dynamics

Equations (22) and (28) can be used to develop a numerical scheme. The contour B is discretized into a set of points, and the normal vectors and the displacement vectors are found by computing the orientation and distance between points. Hence the computation of the contour integrals in (28) is straight-forward and yields the normal velocity, cf. (22), which is used to displace the points of the contour in the normal direction at every time step. We employed a simple Euler method to calculate the dynamics of the contour. As the contour grows/shrinks, additional points have to be created/eliminated along the contour.

This method does not provide any means to deal with the splitting or emergence of contours. It is faster than the Fourier technique (see Section A.1 in the Appendix) for small contours, yet the time to compute the normal velocity is proportional to N² (N being the number of points discretizing the contour), as opposed to M log M for the Fourier technique (where M is the number of grid points). Hence it becomes slower for larger contours, due to the absence of suitable spectral methods to compute the line integrals. The main advantage of this method is that no underlying grid has to be deployed across the specified domain.
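The contour machinery described above (normals from neighboring points, Euler displacement, and point creation/elimination by arclength resampling) is sketched below. The normal velocity is left as a user-supplied array, since evaluating the boundary integrals of (28) requires the kernel-specific reductions of the main text:

```python
import numpy as np

def normals(r):
    # unit outward normals of a closed, counterclockwise contour r of shape (N, 2),
    # from central-difference tangents rotated by -90 degrees
    t = np.roll(r, -1, axis=0) - np.roll(r, 1, axis=0)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    return np.stack([t[:, 1], -t[:, 0]], axis=1)

def euler_step(r, c_n, dt):
    # displace each contour point along its normal with the local normal velocity
    return r + dt * c_n[:, None] * normals(r)

def resample(r, N):
    # redistribute N points uniformly in arclength (point creation/elimination)
    closed = np.vstack([r, r[:1]])
    d = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(d)])
    s_new = np.linspace(0.0, s[-1], N, endpoint=False)
    return np.stack([np.interp(s_new, s, closed[:, 0]),
                     np.interp(s_new, s, closed[:, 1])], axis=1)
```

For a circle the computed normals are exactly radial, and under a uniform normal velocity the contour stays circular while its radius grows at the prescribed rate, which makes this a convenient test case for any implementation of (22) and (28).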

Electronic Supplementary Material

Author’s contributions

SC, HS and IB contributed equally. All authors read and approved the final manuscript.

Declarations

Authors’ Affiliations

(1)
School of Mathematical Sciences, University of Nottingham
(2)
School of Psychology (CN-CR), University of Birmingham, Edgbaston
(3)
Centre for Neuroscience, Donders Institute for Brain, Cognition and Behaviour

References

  1. Coombes S: Large-scale neural dynamics: simple and complex. NeuroImage 2010, 52: 731–739. doi:10.1016/j.neuroimage.2010.01.045
  2. Liley DTJ, Foster BL, Bojak I: Co-operative populations of neurons: mean field models of mesoscopic brain activity. In Computational Systems Neurobiology. Edited by: Le Novère N. Dordrecht, Berlin; 2012:317–364.
  3. Folias SE, Bressloff PC: Breathing pulses in an excitatory neural network. SIAM J Appl Dyn Syst 2004, 3: 378–407. doi:10.1137/030602629
  4. Folias SE, Bressloff PC: Breathers in two-dimensional neural media. Phys Rev Lett 2005, 95: Article ID 208107.
  5. Kilpatrick ZP, Bressloff PC: Spatially structured oscillations in a two-dimensional excitatory neuronal network with synaptic depression. J Comput Neurosci 2010, 28: 193–209. doi:10.1007/s10827-009-0199-6
  6. Owen MR, Laing CR, Coombes S: Bumps and rings in a two-dimensional neural field: splitting and rotational instabilities. New J Phys 2007, 9: Article ID 378.
  7. French DA: Identification of a free energy functional in an integro-differential equation model for neuronal network activity. Appl Math Lett 2004, 17: 1047–1051. doi:10.1016/j.aml.2004.07.007
  8. Kenet T, Bibitchkov D, Tsodyks M, Grinvald A, Arieli A: Spontaneously emerging cortical representations of visual attributes. Nature 2003, 425: 954–956. doi:10.1038/nature02078
  9. Sadaghiani S, Hesselmann G, Friston KJ, Kleinschmidt A: The relation of ongoing brain activity, evoked neural responses, and cognition. Front Syst Neurosci 2010, 4: Article ID 20.
  10. Amari S: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 1977, 27: 77–87. doi:10.1007/BF00337259
  11. Pismen LM: Patterns and Interfaces in Dissipative Dynamics. Springer Series in Synergetics. Springer, Berlin; 2006.
  12. Goldstein RE, Muraki DJ, Petrich DM: Interface proliferation and the growth of labyrinths in a reaction-diffusion system. Phys Rev E 1996, 53: 4.
  13. Goldstein RE: Nonlinear dynamics of pattern formation in physics and biology. In Pattern Formation in the Physical and Biological Sciences. Addison-Wesley, Reading; 1997:65–91.
  14. Muratov CB: Theory of domain patterns in systems with long-range interactions of Coulomb type. Phys Rev E 2002, 66: Article ID 066108.
  15. Desai RC, Kapral R: Dynamics of Self-Organized and Self-Assembled Structures. Cambridge University Press, Cambridge; 2009.
  16. Ermentrout GB, McLeod JB: Existence and uniqueness of travelling waves for a neural network. Proc R Soc Edinb, Sect A, Math 1993, 123: 461–478. doi:10.1017/S030821050002583X
  17. Coombes S: Waves, bumps and patterns in neural field theories. Biol Cybern 2005, 93: 91–108. doi:10.1007/s00422-005-0574-y
  18. Taylor JG: Neural ‘bubble’ dynamics in two dimensions: foundations. Biol Cybern 1999, 80: 393–409. doi:10.1007/s004220050534
  19. Werner H, Richter T: Circular stationary solutions in two-dimensional neural fields. Biol Cybern 2001, 85: 211–217. doi:10.1007/s004220000237
  20. Coombes S, Owen MR: Evans functions for integral neural field equations with Heaviside firing rate function. SIAM J Appl Dyn Syst 2004, 34: 574–600.
  21. Laing CR, Troy WC: PDE methods for nonlocal models. SIAM J Appl Dyn Syst 2003, 2: 487–516. doi:10.1137/030600040
  22. Pullin DI: Contour dynamics methods. Annu Rev Fluid Mech 1992, 24: 89–115. doi:10.1146/annurev.fl.24.010192.000513
  23. Zabusky NJ, Hughes MH, Roberts KV: Contour dynamics for the Euler equations in two dimensions. J Comput Phys 1997, 135: 220–226. doi:10.1006/jcph.1997.5703
  24. Watson GN: A Treatise on the Theory of Bessel Functions. Cambridge University Press, Cambridge; 2006.
  25. Bressloff PC, Kilpatrick ZP: Two-dimensional bumps in piecewise smooth neural fields with synaptic depression. SIAM J Appl Math 2011, 71: 379–408. doi:10.1137/100799423
  26. Pinto DJ, Ermentrout GB: Spatially structured activity in synaptically coupled neuronal networks: I. Travelling fronts and pulses. SIAM J Appl Math 2001, 62: 206–225. doi:10.1137/S0036139900346453
  27. Pismen LM: Nonlocal boundary dynamics of traveling spots in a reaction-diffusion system. Phys Rev Lett 2001, 86: 548–551. doi:10.1103/PhysRevLett.86.548
  28. Lu Y, Amari S: Traveling bumps and their collisions in a two-dimensional neural field. Neural Comput 2011, 23: 1248–1260. doi:10.1162/NECO_a_00111
  29. Coombes S, Owen MR: Exotic dynamics in a firing rate model of neural tissue with threshold accommodation. In Fluids and Waves: Recent Trends in Applied Analysis. Contemp Math 440. Am Math Soc, Providence; 2007:123–144.
  30. Ben-Yishai R, Bar-Or L, Sompolinsky H: Theory of orientation tuning in visual cortex. Proc Natl Acad Sci USA 1995, 92: 3844–3848. doi:10.1073/pnas.92.9.3844
  31. Bressloff PC, Cowan JD: Spherical model of orientation and spatial frequency tuning in a cortical hypercolumn. Philos Trans R Soc Lond B 2003, 358: 1643–1667. doi:10.1098/rstb.2002.1109
  32. Faye G, Chossat P, Faugeras O: Analysis of a hyperbolic geometric model for visual texture perception. J Math Neurosci 2011, 1: Article ID 4.
  33. Petitot J: The neurogeometry of pinwheels as a sub-Riemannian contact structure. J Physiol 2003, 97: 265–309.
  34. Coombes S, Schmidt H: Neural fields with sigmoidal firing rates: approximate solutions. Discrete Contin Dyn Syst, Ser A 2010, 28: 1369–1379.
  35. Coombes S, Owen MR: Bumps, breathers, and waves in a neural network with spike frequency adaptation. Phys Rev Lett 2005, 94: Article ID 148102.
  36. Bojak I, Liley DTJ: Axonal velocity distributions in neural field equations. PLoS Comput Biol 2010, 6: Article ID e1000653.
  37. Hagberg A, Meron E: Order parameter equations for front transitions: nonuniformly curved fronts. Physica D 1998, 123: 460–473. doi:10.1016/S0167-2789(98)00143-2
  38. Kawaguchi S, Mimura M: Collision of travelling waves in a reaction-diffusion system with global coupling effect. SIAM J Appl Math 1999, 59: 920–941.
  39. Ohta T: Pulse dynamics in a reaction-diffusion system. Physica D 2001, 151: 61–72. doi:10.1016/S0167-2789(00)00227-X
  40. Bressloff PC: Weakly-interacting pulses in synaptically coupled neural media. SIAM J Appl Math 2005, 66: 57–81. doi:10.1137/040616371
  41. Venkov NA: Dynamics of neural field models. PhD thesis. School of Mathematical Sciences, University of Nottingham; 2009. [http://www.umnaglava.org/pdfs.html]
  42. Frigo M, Johnson SG: The design and implementation of FFTW3. Proc IEEE 2005, 93(2): 216–231.
  43. Hairer E, Norsett SP, Wanner G: Solving Ordinary Differential Equations I: Nonstiff Problems. 2nd edition. Springer, Berlin; 1993.

Copyright

© Coombes et al.; licensee Springer 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.