# Interface dynamics in planar neural field models

- Stephen Coombes^{1} (Email author)
- Helmut Schmidt^{1}
- Ingo Bojak^{2, 3}

**2**:9

**DOI:** 10.1186/2190-8567-2-9

© Coombes et al.; licensee Springer 2012

**Received:** 21 November 2011

**Accepted:** 13 February 2012

**Published:** 2 May 2012

## Abstract

Neural field models describe the coarse-grained activity of populations of interacting neurons. Because of the laminar structure of real cortical tissue they are often studied in two spatial dimensions, where they are well known to generate rich patterns of spatiotemporal activity. Such patterns have been interpreted in a variety of contexts ranging from the understanding of visual hallucinations to the generation of electroencephalographic signals. Typical patterns include localized solutions in the form of traveling spots, as well as intricate labyrinthine structures. These patterns are naturally defined by the interface between low and high states of neural activity. Here we derive the equations of motion for such interfaces and show, for a Heaviside firing rate, that the normal velocity of an interface is given in terms of a non-local Biot-Savart type interaction over the boundaries of the high activity regions. This exact, but dimensionally reduced, system of equations is solved numerically and shown to be in excellent agreement with the full nonlinear integral equation defining the neural field. We develop a linear stability analysis for the interface dynamics that allows us to understand the mechanisms of pattern formation that arise from instabilities of spots, rings, stripes and fronts. We further show how to analyze neural field models with linear adaptation currents, and determine the conditions for the dynamic instability of spots that can give rise to breathers and traveling waves.

## 1 Introduction

The roughly $10^5$ mm$^2$ of human cortex contain $10^5$ to $10^6$ macrocolumns, comprising about $10^5$ neurons each. Neural field models describe the mean activity of such columns by approximating the cortical sheet as a continuous excitable medium. They can generate rich patterns of emergent spatiotemporal activity and have been used to understand visual hallucinations, mechanisms for short term working memory, motion perception, the generation of electroencephalographic signals and many other neural phenomena. We refer the reader to [1, 2] for recent discussions of neural field models and their uses, and in particular to the work of Bressloff and colleagues [3–5] and Owen et al. [6] for results on planar systems. A minimal two-dimensional neural field model can be written as an integro-differential equation of the form

$$\frac{1}{\alpha}\frac{\partial u(\mathbf{x},t)}{\partial t}=-u(\mathbf{x},t)+\int_{{\mathbb{R}}^{2}}\mathrm{d}{\mathbf{x}}^{\prime}\,w(|\mathbf{x}-{\mathbf{x}}^{\prime}|)H(u({\mathbf{x}}^{\prime},t)-h),$$

where *u* represents synaptic activity and the kernel *w* represents anatomical connectivity. The nonlinear function *H* represents the firing rate of the tissue and will be taken to be a Heaviside, so that the parameter

*h* is interpreted as a firing threshold. For the case of a symmetric synaptic kernel $w(\mathbf{x})=w(|\mathbf{x}|)$, the model also has a Liapunov function [6, 7] given by

$$E[u]=-\frac{1}{2}\int_{{\mathbb{R}}^{2}}\mathrm{d}\mathbf{x}\int_{{\mathbb{R}}^{2}}\mathrm{d}{\mathbf{x}}^{\prime}\,w(|\mathbf{x}-{\mathbf{x}}^{\prime}|)H(u(\mathbf{x})-h)H(u({\mathbf{x}}^{\prime})-h)+h\int_{{\mathbb{R}}^{2}}\mathrm{d}\mathbf{x}\,H(u(\mathbf{x})-h),$$

which can be useful in determining the stability of equilibrium solutions.

For further details see the discussion around Equation (20) and Section A.1 in the Appendix (for the numerical scheme). Here Equation (1) describes a single population model with short-range excitation and long-range inhibition. This minimal example nicely illustrates the ability of neural field models to generate intricate spreading labyrinthine patterns. We do not expect to find labyrinthine patterns as such in real brain activity. However, they provide a convenient (and visually striking) proxy for the generation of complex patterns of activity that emerge spontaneously and/or can be evoked, for example in visual cortex [8]. Labyrinthine patterns are also seen when the Heaviside firing rate function is replaced by a steep sigmoid, as will be discussed later. Visual inspection suggests that much of the behavior of such patterns can be described simply by tracking the boundary between high and low states of activity. Indeed this appears to resonate with neuroscientific practice, where changes of brain activity are often of greater interest than the current brain state *per se* [9]. Hence it is of interest whether the dynamics of (1) can be replaced by a lower-dimensional description that evolves the boundary between high and low states of activity. This programme has already been developed by Amari in his seminal article on one-dimensional models [10], where the interface reduces naturally to a point (or a set of points). In two spatial dimensions, however, the interface is more naturally a closed curve (or a set of closed curves).

The main topic of this article is the development of an equivalent interface description for neural field models of the type exemplified by (1). We show that activity patterns can be described by dynamical equations of reduced dimension, and that these depend only on the shape of the interface (requiring no knowledge of activity away from the interface). Not only is this description amenable to fast numerical simulation strategies, it allows for the construction of localized states and an analysis of their linear stability. Given the computational overheads in simulating the full neural field model this enhances our ability to study pattern formation and suggests more generally that modeling the interfaces of patterns, rather than the patterns themselves, may lead to novel, efficient descriptions of brain activity. Indeed the use of interface dynamics to analyze patterns that arise in partial differential equation models of chemical and physical systems has a strong history [11], and it is natural to translate some of the ideas and technologies from these studies to non-local neural field models. The work by Goldstein [12, 13] and Muratov [14] on pattern formation in two-dimensional excitable reaction-diffusion systems is especially relevant in this context, as both authors have developed effective descriptions of interface dynamics in terms of non-local interactions. See also the book by Desai and Kapral [15] for a recent overview.

It is worth pointing out that whether computing interface dynamics can compete with other numerical schemes will depend on the problem at hand. In general, boundaries that remain relatively short and do not pinch guarantee a speed advantage. In practice, we expect this approach to be especially relevant for (semi-) analytical work aiming at qualitative understanding, as illustrated by some of the examples presented in this article.

In Section 2 we present some of the key ideas behind an interface dynamics in the setting of a one-dimensional neural field model. This is particularly useful for introducing the definition of normal velocity from a level-set condition, as well as establishing what it means for an interface to be linearly stable. The extension of these ideas to two-dimensional systems is presented in Section 3. By writing the synaptic connectivity in terms of a linear combination of Bessel functions, we show that dynamics for the interface can be constructed in terms of line-integrals along the interface, and that the normal velocity of the interface is driven by Biot-Savart-style interactions. Thus we obtain a reduced description for the evolution of a pattern boundary solely in terms of quantities on the boundary itself. Numerical simulations of the interface dynamics are shown to be in direct correspondence with those of the full neural field model. The notion of linear stability of stationary solutions in the interface framework is fleshed out in a series of examples (for spots, rings, stripes and fronts) in Sections 4 and 5, and allows us to understand some of the mechanisms for pattern formation. In Section 6 we add linear adaptation to (1) and extend our analysis to cover this important neural phenomenon. This can introduce dynamic instabilities of stationary structures, and we calculate where breathing and drift instabilities for localized spots occur. Moreover, we use a perturbation argument to determine the shape of traveling spots that emerge beyond a drift instability and show that spots contract in the direction of propagation and widen in the orthogonal direction. Finally, in Section 7 we discuss extensions of the work in this article.

## 2 A one-dimensional primer

*u* is above or below the firing threshold, we shall take the constant on the right hand side of (4) to be

*h* (though other choices are also possible). Differentiation of (4) gives an exact expression for the velocity of the interface in the form

$$\frac{\mathrm{d}{x}_{0}}{\mathrm{d}t}=-{\left.\frac{{u}_{t}}{{u}_{x}}\right|}_{x={x}_{0}(t)}.$$

Evaluating the field dynamics at the interface position *x* gives

*t* (and dropping transients) yields

*u*. Introducing the notation $\hat{\cdot }$ to denote perturbed quantities, to a first approximation we will set ${\hat{u}}_{x}{|}_{x={\hat{x}}_{0}(t)}={u}_{x}{|}_{x=ct}$, and write ${\hat{x}}_{0}(t)=ct+\delta {x}_{0}(t)$. The perturbation in *u* can be related to the perturbation in the interface by noting that both the perturbed and unperturbed boundaries are defined by the level set condition, so that $u({x}_{0},t)=h=\hat{u}({\hat{x}}_{0},t)$. Introducing $\delta u(t)=u{|}_{x=ct}-\hat{u}{|}_{x={\hat{x}}_{0}(t)}$, we thus have the condition that $\delta u(t)=0$ for all *t*. Integrating (6) and dropping transients gives

*δu* is given (to first order in $\delta {x}_{0}$) by

*λ* is defined by $\mathcal{E}(\lambda )=0$, with

A front is stable if $Re\lambda <0$.

The equation $\mathcal{E}(\lambda )=0$ has only the solution $\lambda =0$. We also have that ${\mathcal{E}}^{\prime }(\lambda )>0$, showing that $\lambda =0$ is a simple eigenvalue. Hence, the traveling wave front for this example is neutrally stable.
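To make the one-dimensional interface formalism concrete, the wave speed can be computed from the level set condition alone. The sketch below assumes the classic exponential kernel $w(x)={\mathrm{e}}^{-|x|}/2$ (an illustrative choice, not necessarily the kernel used above): solving the comoving-frame equation ahead of the front gives $u(\xi )={\mathrm{e}}^{-\xi }/(2(1+c))$, and imposing the level set condition $u(0)=h$ determines the speed implicitly, with closed form $c=(1-2h)/(2h)$.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative 1D check, assuming w(x) = exp(-|x|)/2 (not necessarily
# the kernel of this section). For a front with u > h behind the
# interface, the comoving-frame solution ahead of the front is
# u(xi) = exp(-xi) / (2 (1 + c)), so the level set condition u(0) = h
# determines the speed c implicitly.

def level_set_condition(c, h):
    # u(0) - h in the comoving frame
    return 1.0 / (2.0 * (1.0 + c)) - h

h = 0.25
c = brentq(level_set_condition, 0.0, 50.0, args=(h,))
assert np.isclose(c, (1.0 - 2.0 * h) / (2.0 * h))  # closed form
assert np.isclose(c, 1.0)                          # h = 1/4 gives c = 1
```

The root-finding route generalizes directly to kernels without a closed-form speed, which is how the interface conditions are used in practice.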

Given this preliminary exposition of interface dynamics we are now ready to describe the extension to two dimensions and to address the additional challenges that working in the plane gives rise to.

## 3 Interface dynamics in two dimensions

which is shown for $\beta =0.3$ and $\gamma =3$ in Figure 2B.

where **r** is a point on the domain boundary $\partial \mathcal{B}$ and ${u}_{t}$ and ${\mathrm{\nabla}}_{\mathbf{x}}u$ are evaluated on the boundary. Introducing the normal vector along the contour $\partial \mathcal{B}$ as $\mathbf{n}=-{\mathrm{\nabla}}_{\mathbf{x}}u/|{\mathrm{\nabla}}_{\mathbf{x}}u|$ allows us to obtain the normal velocity along the contour:

$${v}_{n}=\dot{\mathbf{r}}\cdot \mathbf{n}={\left.\frac{{u}_{t}}{|{\mathrm{\nabla}}_{\mathbf{x}}u|}\right|}_{\partial \mathcal{B}}.$$

*u* and *z* satisfy

From the form of (22), (23), and (24), we see that the evolution of the interface does not require any knowledge of the neural field away from the contour, and rather just depends on the shape of the sets where the field is above threshold. We now exploit the choice of ${K}_{0}$ as basis function for constructing the synaptic kernel to show how the double integrals in (23) and (24) can be reduced to line integrals. This yields an elegant description of the interface dynamics that emphasizes how the geometry of $\partial \mathcal{B}$ drives the evolution of spatiotemporal patterns. The key step in this reformulation is the use of Green’s identity. For a two-dimensional vector field **F** this identity is the two-dimensional version of the divergence theorem, which we write symbolically as ${\int}_{\mathcal{B}}\mathrm{\nabla}\cdot \mathbf{F}={\oint}_{\partial \mathcal{B}}\mathbf{F}\cdot \mathbf{n}$. Using this first identity we may generate a second for a scalar field Ψ as ${\int}_{\mathcal{B}}\mathrm{\nabla}\mathrm{\Psi}={\oint}_{\partial \mathcal{B}}\mathbf{n}\mathrm{\Psi}$.

where $C=1$ if **x** is within $\mathcal{B}$, $C=0$ if **x** is outside $\mathcal{B}$, and $C=1/2$ if **x** is on the boundary of $\mathcal{B}$. Hence, for points on the boundary parametrized by *s* one finds

Note that the choice of ${K}_{0}$ as a basis for *w* is merely a convenience to allow explicit calculations. As long as we can write the connectivity function *w* as the divergence of a vector field then we can exploit Green’s first identity to turn the right hand side of (23) into a line integral.
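The reduction of the double integral to a boundary line integral can be verified numerically. The sketch below is our own illustrative check, using the single basis kernel ${K}_{0}/(2\pi )$ and the unit disc: the line-integral expression (with $C=1/2$ on the boundary) is compared against direct area quadrature, and against the closed form $R{I}_{1}(R){K}_{0}(R)$ obtained by matching interior and exterior modified-Bessel solutions across the boundary.

```python
import numpy as np
from scipy.special import i1, k0, k1

# Check the line-integral reduction for the basis kernel K0/(2 pi) on
# the unit disc B, at the boundary point x = (1, 0) where C = 1/2:
#   Psi(x) = C + (1/2 pi) oint K1(d) (x - x') . n' / d ds',  d = |x - x'|.
R = 1.0
x = np.array([R, 0.0])

# Line integral over the boundary (midpoint rule avoids the d -> 0 point)
n_s = 4000
theta = (np.arange(n_s) + 0.5) * 2.0 * np.pi / n_s
xp = R * np.stack([np.cos(theta), np.sin(theta)], axis=1)
normal = xp / R                                   # outward unit normal
diff = x - xp
d = np.linalg.norm(diff, axis=1)
psi_line = 0.5 + np.sum(k1(d) * np.sum(diff * normal, axis=1) / d) \
    * (R * 2.0 * np.pi / n_s) / (2.0 * np.pi)

# Direct area quadrature of (1/2 pi) int_B K0(|x - x'|) dx'
n_r = n_t = 1200
r = (np.arange(n_r) + 0.5) * R / n_r
t = (np.arange(n_t) + 0.5) * 2.0 * np.pi / n_t
rr, tt = np.meshgrid(r, t)
dist = np.hypot(rr * np.cos(tt) - x[0], rr * np.sin(tt) - x[1])
psi_area = np.sum(k0(dist) * rr) * (R / n_r) * (2.0 * np.pi / n_t) / (2.0 * np.pi)

# Closed form from matching I0/K0 solutions across the boundary
exact = R * i1(R) * k0(R)
assert abs(psi_line - exact) < 1e-4
assert abs(psi_area - exact) < 5e-3
```

By linearity, the same check extends to kernels built from several ${K}_{0}({b}_{i}r)$ terms.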

From the Biot-Savart form of (29) we see that for every part *i* of the synaptic kernel there is an effective repulsion between two arc length positions with anti-parallel tangent vectors, although the combined effect when including all *N* terms will depend on the choice of the amplitudes ${A}_{i}$. Now with (22), (27), and (28) the normal velocity on the interface can be written solely in terms of certain line-integrals around the interface. From a computational perspective this leads to a substantial advantage in that one no longer needs to solve the full non-local neural field model (17) across the entire plane, and can instead simply evolve the interface in time by discretizing the boundary and translating the points with the normal velocity from (22) in the direction of **n**. One possible practical disadvantage of this is the need to monitor for possible self-intersections of the evolving boundary (*splitting*, where a connected region pinches off into two or more disconnected regions), or indeed for the creation of new boundaries where none existed before. However, numerical schemes for coping with similar situations in fluid models are well developed in the literature, and it is natural to turn to these for more refined numerical schemes, including ones that can automate the process of contour surgery [22, 23]. In Figure 1B we illustrate the simple numerical implementation of the interface dynamics described in Section A.2 in the Appendix, showing the effectiveness of the dimensionally reduced system at capturing the spatiotemporal pattern formation of the full model shown in Figure 1A.

where *σ* reflects the expected width of the distribution of firing thresholds around a mean *h* in the neural population, with the Heaviside case corresponding to $\sigma =0$. Figure 3 demonstrates that for these steep sigmoids very similar labyrinthine shapes arise, and closer inspection reveals that the main differences occur at the rapidly developing rim of the structure, whereas the settled interior is nearly identical. Thus a simple adjustment of the time constant *α* will in this case provide a near perfect match of the emerging structures. In Figure 3 we demonstrate this with the dashed and dotted red lines, which represent the Heaviside Liapunov function computed over longer time scales (up to $\alpha t=569.9$ and $626.8$, respectively) and then scaled down to $\alpha t\le 550$ by adjusting *α*. A very close match to the sigmoidal Liapunov curves (green and blue lines) is then obtained. However, for broader sigmoids we find labyrinths still resembling the Heaviside one, but with more obvious spatial changes. The video in Additional file 3 shows the $\sigma =0.03$ case as an example. It would seem that mild deviations in the shape of the firing rate from Heaviside (to a steep sigmoidal form) are reflected more in temporal speed than in spatial shape changes.
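A steep sigmoid consistent with the threshold-distribution interpretation above can be realized as a cumulative Gaussian. The form below is our illustrative reading (the exact sigmoid used in the simulations may differ); it makes explicit how the Heaviside is recovered as $\sigma \to 0$.

```python
import numpy as np
from math import erf

# Firing rate as the expected Heaviside output under a Gaussian spread
# of thresholds with mean h and width sigma -- one natural reading of
# the text; the simulations' exact sigmoid may differ.
def firing_rate(u, h, sigma):
    if sigma == 0.0:
        return float(u > h)                       # Heaviside limit
    return 0.5 * (1.0 + erf((u - h) / (np.sqrt(2.0) * sigma)))

h = 0.1
for u in (0.0, 0.0999, 0.1001, 0.3):
    # pointwise convergence to H(u - h) as sigma -> 0
    assert abs(firing_rate(u, h, 1e-6) - firing_rate(u, h, 0.0)) < 1e-9
```

For broader $\sigma $ the smooth transition region is what blurs the interface and produces the milder spatial deviations noted above.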

**n** by an anti-clockwise rotation of $\pi /2$ so that

and the observation that $\mathbf{n}(s)\cdot \mathbf{n}({s}^{\prime})=\mathbf{t}(s)\cdot \mathbf{t}({s}^{\prime})$.

where **r** is on the boundary parametrized by *s*. We use the notation ${\mathcal{B}}_{0}$ to denote a stationary active region. Given the stationary interface, we can also calculate the stationary field *u* everywhere (away from the interface) using (17) as

which can also be evaluated as a line integral. In order to analyze the stability of stationary solutions in the original neural field formalism defined by (1) one would perturb the field variable *u* and linearize to derive an eigenvalue equation or Evans function [20]. Here we determine stability using the interface dynamics, generalizing the approach described in Section 2.

The dynamics for $\hat{u}$ is given by (23) with $\mathcal{B}$ replaced by $\hat{\mathcal{B}}$. The perturbation affects the normal vector $\mathbf{n}(s)$ as well as the displacement vector $\mathbf{r}(s)-\mathbf{r}({s}^{\prime})$ that occurs in (27). Thus to evaluate (35) it is necessary to linearize ${K}_{1}$ about the unperturbed contour. In the case of interfaces without curvature the linear contribution to ${K}_{1}$ is zero. In contrast, for curved interfaces an addition theorem for Bessel functions shows that there is a non-zero contribution. To clarify this statement and show how the above machinery is used in practice, we now give some explicit examples of localized solutions and their stability.

## 4 Localized states: spots

*azimuthal* instabilities, as already found in [6]. In order to obtain circular solutions we use the standard parametrization of a circle for the contour and write the threshold condition for a spot of radius *R* in the form $h=F(R)$, with *F* expressed in terms of modified Bessel functions of order *ν*. A plot of the spot radius *R* as a function of threshold *h* is shown in Figure 4.
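The threshold condition $h=F(R)$ can be explored numerically. The sketch below assumes a hypothetical Mexican-hat combination of ${K}_{0}$ kernels, $w(r)=[{K}_{0}(r)-(\beta /{\gamma }^{2}){K}_{0}(r/\gamma )]/(2\pi )$ with $\beta =0.3$ and $\gamma =3$ (values borrowed from Figure 2B; the kernel (20) itself may differ in detail), together with the per-term closed form $(R/b){I}_{1}(bR){K}_{0}(bR)$ for the boundary value of a ${K}_{0}(br)/(2\pi )$ term integrated over a disc of radius *R*.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import i1, k0

# Hypothetical Mexican-hat kernel built from the K0 basis:
#   w(r) = (1/2 pi) [K0(r) - (beta/gamma^2) K0(r/gamma)],
# with beta, gamma borrowed from the figure caption. For a term
# K0(b r)/(2 pi), its integral over a disc of radius R, evaluated at
# a boundary point, is (R/b) I1(b R) K0(b R) (interior/exterior
# modified-Bessel matching).
beta, gamma = 0.3, 3.0

def F(R):
    """Boundary value of the convolution of w with the disc indicator."""
    return R * i1(R) * k0(R) - (beta / gamma) * R * i1(R / gamma) * k0(R / gamma)

# A stationary spot of radius R satisfies the threshold condition F(R) = h.
h = 0.3
R_spot = brentq(lambda R: F(R) - h, 0.5, 2.5)
assert abs(F(R_spot) - h) < 1e-10
```

Scanning $F$ over *R* also exposes the turning points ${F}^{\prime }(R)=0$ at which solution branches are born or destroyed, as in Figure 4.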

the stationary field *u* (dropping transients) can be written in terms of a rotationally symmetric function *ψ*, evaluated at the spot radius *R*. On the boundary *ψ* is conveniently written as a line integral, and *ψ* may also be constructed explicitly (off the boundary), using similar line integral calculations to those for existence (above), and is given by

An azimuthal mode-*m* instability will occur if ${\lambda }_{m}>0$, which recovers the result in [6] obtained using an Evans function approach. The possibility of such azimuthal instabilities is indicated on the solution branches shown in Figure 4 (and we would expect the emergence of solution branches with ${D}_{m}$ symmetry from the points marked by *m*). Interestingly we can see from (44) and (45) that the mode with $m=1$ is neutrally stable. For a perturbation to a circular boundary of the form $\delta R(\theta ,t)={\epsilon }_{m}(t)\cos (m\theta )$, with ${\epsilon }_{m}=\epsilon {\mathrm{e}}^{{\lambda }_{m}t}$ and $\epsilon \ll 1$, the perturbation of the normal velocity ${v}_{n}$ is

The zeros of the first derivative of ${E}_{\mathrm{Liap.}}$ with respect to *R* give the stationary circular solutions, including the trivial case $R=0$, as expected.

## 5 Rings, fronts and stripes

In this section we show how to treat other simple interface shapes, namely rings, fronts and stripes, and determine their stability. We recover previous results in [6] for rings (obtained with an Evans function method), whilst calculations for the other structures are shown to be straightforward using the interface dynamics approach.

### 5.1 Rings

where the outer contour is perturbed with amplitude *a*. For the inner contour we similarly write ${R}_{1}(\theta )={R}_{1}+b{\mathrm{e}}^{\lambda t}\cos m\theta $. We now generate $\delta u(t)$ on each of the two boundaries and equate these to zero to generate two equations for the pair of unknown amplitudes $(a,b)$. Demanding that this pair of equations has a non-trivial solution generates an equation for *λ* in the form ${\mathcal{E}}_{m}(\lambda )=0$, where ${\mathcal{E}}_{m}(\lambda )=|(1+\lambda ){I}_{2}-{\mathcal{A}}_{m}(\lambda )|$ and the entries of the matrix ${\mathcal{A}}_{m}$ are indexed by $\mu ,\nu =1,2$.

so that a mode-*m* instability of a ring is expected to generate *m* spots. In Figure 5 we plot solution branches for ring solutions as a function of *h* for the Mexican-hat model defined by (20), and flag the types of instability that can occur. Of the two solution branches the lower one is unstable with respect to radial perturbations, whereas the upper branch is subject to azimuthal destabilizations. In Figure 6 we show a two-dimensional plot of an unstable ring solution, and the emergent structure of five bumps seen beyond instability, consistent with the predictions of our linear stability analysis.

### 5.2 Fronts

A stationary front exists when the threshold *h* lies exactly halfway between the two possible steady states of *u*. To investigate the properties of a planar traveling front of speed *c* it is informative to treat the simple case $w(x)={K}_{0}(x)/(2\pi )$. We then have that ${u}_{t}=-h+1/2$ and

To determine stability we consider a front along $y=0$ and write the perturbed front as $\hat{y}=\hat{y}(x,t)$.

The linearized dynamics (for perturbations periodic in *x*) has solutions of the form $\hat{y}=\epsilon \cos (kx){\mathrm{e}}^{\lambda t}$, where

### 5.3 Stripes

For a stripe of constant width *D*, such that ${y}_{2}-{y}_{1}=D$ for all *x*, the existence condition $u(x,{y}_{1})=h=u(x,{y}_{1}+D)$ takes the simple form

## 6 Neural field models with linear adaptation

where *ψ* is the second term on the right hand side of (3) and (1) in one and two dimensions, respectively. The linearity of the equations of motion means that we may obtain the trajectory for $(u,a)$ in closed form as

which has zeros when $\lambda =0$ and $\lambda ={k}_{+}{k}_{-}-({k}_{+}+{k}_{-})=\alpha g-1$. Hence, the stationary front changes from stable to unstable as *α* is increased through ${\alpha}_{c}=1/g$.

of radius *R*. This radius is determined by (38) under the replacement $h\to h(1+g)$. The field *ψ* may be constructed explicitly off the boundary, and is given by Equation (41), so that $u(r)=\psi (r)/(1+g)$. A saddle-node bifurcation of stationary spots occurs at $R={R}_{c}$ where ${F}^{\prime }({R}_{c})=0$. Hence, in the $(h,g)$ plane stationary solutions only exist for $h<F({R}_{c})/(1+g)$. Under variation in *α* we expect the emergence of a drifting spot. Beyond a drift instability, we expect to be able to find traveling spots that move in some direction **c** with constant speed $c=|\mathbf{c}|$. These can be constructed as stationary solutions in a co-moving frame $\xi =\mathbf{x}+\mathbf{c}t$, and satisfy

Projecting onto **c** and **t** (and using $\mathbf{n}\times \mathbf{t}=1$) shows that ${c}_{n}=\mathbf{c}\times \mathbf{t}$. Hence, the condition for stationary propagation, with $\mathbf{c}=\dot{\mathbf{r}}$, is

The fact that *ψ* is rotationally symmetric means that we may construct a solution in the form

where *η* is easily calculated as

as *g* increases through $1/\alpha $. It is also possible that a breathing instability may arise for the mode with $m=0$. Note that another way to generate breathing solutions is to include localized inputs [3, 4], breaking the homogeneous structure of the network. Substitution of $\lambda =i\omega $ into (75) gives the condition for this instability as:

To pursue the small-*c* discussion we Laplace transform (74) in the ${\xi }_{1}$ variable to obtain

*c*. Thus not only is there a breathing bifurcation at $g=1/\alpha $, but also a drifting instability to a traveling spot whose shape, determined from (79) by $u(\mathbf{r})=h$, can be written in the form $\mathbf{r}(\theta )=R(\theta )(\cos \theta ,\sin \theta )$ with

Here *R* is determined by (71). A further weakly nonlinear analysis to understand the competition between drifting and breathing at $g=1/\alpha $ is beyond the scope of this article.

*c* in $u(\mathbf{r})=h$ using (79) and (80). However, it is not our intention to pursue these lengthy calculations here. Rather, to give a feel for the shape of a traveling spot, we plot the level set where $u({\xi }_{1},{\xi }_{2})=h$ using (79) in Figure 11D, including terms up to ${c}^{3}$. This nicely illustrates that spots contract in the direction of propagation and widen in the orthogonal direction, and provides a theoretical explanation for the shape of traveling spots recently reported in [28]. With the aid of direct numerical simulations we have also explored the scattering properties of traveling spots. In common with previous numerical studies of planar neural fields with some form of adaptation, we find that such structures can behave as quasi-particles in the sense that they can scatter like dissipative solitons [29]. An example of such scattering is shown in Figure 11. Here we see a repulsive interaction which pushes the spots away from each other if they approach too closely.

## 7 Discussion

In this article we have formulated an interface dynamics for planar neural fields with a Heaviside firing rate. This has allowed us to (i) develop an economical computational framework for the evolution of spatiotemporal patterns, and (ii) perform linear stability analyses of localized structures. For simplicity we have focused on single population models. However, the extension to population models that treat the dynamics of both excitatory and inhibitory populations is straightforward. Perhaps a more interesting extension is to consider neural field models that incorporate feature selectivity such as that observed in visual cortex for orientation [30], spatial frequency [31] and texture [32]. Denoting this feature label by *χ*, all of these models are expressed in terms of some non-local integro-differential equation for $u(\mathbf{r},\chi ,t)$. We note that the notion of an interface is still well defined and that the level set condition $u(\mathbf{r},\chi ,t)=h$ gives a constraint between local geometrical data and features. As an alternative to simulating the neural field models, an interface approach (incorporating feature space) may be more useful for understanding how local data can be integrated into global geometrical structures, as advocated in the neurogeometry framework of Petitot [33] (say for understanding contour completion in models of primary visual cortex where the feature space is orientation). The extension of this work to treat sigmoidal firing rates remains an open challenge. However, recent techniques for dealing with a certain class of firing rate functions in one spatial dimension, which includes smooth firing rate functions connecting zero to one, are likely to be useful in this regard [34].
We have included an adaptive current in the standard Amari model here, but it would be informative to develop interface treatments for other forms of modulation, e.g., arising from threshold accommodation [35] or synaptic depression [5], as well as the inclusion of axonal delays [36]. These models can readily support spiral wave activity, and it would be interesting to see if an interface description, possibly adapting techniques by Hagberg and Meron [37], could shed light on their properties. Another possible extension of the work in this article, motivated by our numerical results for scattering spots, is to develop an interface theory of quasi-particle interactions along the lines for reaction-diffusion models described in [38, 39], using ideas developed by Bressloff [40] and Venkov [41] for weakly interacting systems in one spatial dimension. All of the above are topics of ongoing research and will be reported upon elsewhere.

## Appendix: Numerical schemes

### A.1 Fourier technique for neural field evolution

Because of its non-local character, the model described by (1), or its extension (61), is challenging to solve with conventional numerical methods. However, exploiting the convolution structure of (1) allows one to write the Fourier transform of ${\int}_{{\mathbb{R}}^{2}}\mathrm{d}{\mathbf{x}}^{\prime}w(|\mathbf{x}-{\mathbf{x}}^{\prime}|)f({\mathbf{x}}^{\prime},t)$ as a product. Here $f(\mathbf{x},t)=H(u(\mathbf{x},t)-h)$ and can be taken either as a Heaviside or a more general sigmoidal form. Introducing a spectral wave-vector **k** then this product is simply $w(|\mathbf{k}|)f(\mathbf{k})$, where functions with arguments **k** denote two-dimensional spatial Fourier transforms. We may evaluate $w(|\mathbf{k}|)f(\mathbf{k})$ directly, at every time step, using fast Fourier transforms (FFTs). Note that $w(|\mathbf{k}|)$ can be pre-computed, by FFT or here even analytically, so that the procedure iterated over time amounts to computing $f(\mathbf{k})$ by FFT, followed by a (complex) multiplication with $w(|\mathbf{k}|)$, and finally an inverse FFT to obtain the result of the integral. We wish to employ a parallel compute cluster for rapid computation over large grids, and hence use the free software package FFTW 3.3 [42], which includes a parallel MPI-C version. Note that the use of Fourier methods implies that the discretization grid has periodic boundaries, or in other words, the solution is effectively computed on a torus. We use a grid spacing of about 0.03 or better in our computations here.
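The procedure described above can be condensed into a few lines. The sketch below is a serial, forward-Euler toy version on a small periodic grid; the kernel (a difference of exponentials) and all parameters are illustrative, not the paper's, and the full computations use FFTW with MPI and DOPRI5 time stepping.

```python
import numpy as np

# Illustrative serial version of the Fourier evaluation of the
# non-local term: the convolution w * H(u - h) becomes the product
# w(k) f(k) in Fourier space.
N, L = 128, 40.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)

w = np.exp(-r) - 0.17 * np.exp(-r / 2.0)            # toy Mexican-hat kernel
w_hat = np.fft.fft2(np.fft.ifftshift(w)) * dx * dx  # pre-computed once

h, dt = 0.1, 0.05
u = np.exp(-r**2)                                   # localized initial data

def rhs(u):
    f = (u > h).astype(float)                       # Heaviside firing rate
    conv = np.real(np.fft.ifft2(w_hat * np.fft.fft2(f)))
    return -u + conv

for _ in range(200):                                # forward Euler in time;
    u = u + dt * rhs(u)                             # the paper uses DOPRI5

assert np.isfinite(u).all() and u.max() > h
```

Note the `ifftshift` aligning the kernel's center with the origin of the periodic grid; as in the text, the use of Fourier methods means the solution lives on a torus.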

In order to compute the time evolution, we use DOPRI5 [43], a well-known implementation of an explicit Dormand-Prince (Runge-Kutta) method of order 5(4) with step size control and dense output of order 4. A version in C due to J. Colinge is available on the web thanks to E. Hairer. However, in our case we perform parallel computations, so we have adapted this code accordingly using MPI-C. In particular, we now consider the maximum error across all compute nodes and all variables, rather than the mean error over local variables, and communicate the resulting time step adaptation over the cluster to achieve a unified evolution of the entire distributed grid. Numerical tolerances are set to ${10}^{-7}(|{y}_{i}|+1)$ where ${y}_{i}$ represents all variables, i.e., *u* and potentially *a* at all grid points.

This numerical method is robust against effects of the underlying grid. This is due to the employed Fourier method, which performs the spatial convolution as a multiplication in Fourier space. The discrete Fourier transform used to transfer this calculation to Fourier space calculates a trigonometric interpolation polynomial, and the influence of the grid is effectively smoothed by implicit interpolation.

Computing an evolution as shown in Additional File 1 takes several hours on the 32 to 64 Infiniband-connected compute nodes we have typically employed, and yields many gigabytes of data. We note that computation with a sigmoidal firing rate instead of the Heaviside one is over an order of magnitude faster, reflecting the numerical difficulty of dealing with sharp edges.

### A.2 Interface dynamics

Equations (22) and (28) can be used to develop a numerical scheme. The contour $\partial \mathcal{B}$ is discretized into a set of points, and the normal vectors and the displacement vectors are found by computing the orientation and distance between points. Hence the computation of the contour integrals in (28) is straightforward and yields the normal velocity, cf. (22), which is used to displace the points of the contour in the normal direction at every time step. We employed a simple Euler method to calculate the dynamics of the contour. As the contour grows/shrinks, additional points have to be created/eliminated along the contour.

This method does not provide any means to deal with the splitting or emergence of contours. It is faster than the Fourier technique (see Section A.1 in the Appendix) for small contours, yet the time to compute the normal velocity is proportional to ${N}^{2}$ (*N* being the number of points discretizing the contour), as opposed to $M\sqrt{M}$ for the Fourier technique (where *M* is the number of grid points). Hence it becomes slower for larger contours due to the absence of suitable spectral methods to compute the line integrals. The main advantage of this method is the fact that no underlying grid has to be deployed across the specified domain.
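A minimal serial version of this contour scheme is sketched below. The constant normal velocity is a stand-in for the line-integral expression of (22)/(28), and the uniform arc-length resampling plays the role of creating/eliminating points; a shrinking circle is used because its exact evolution is known.

```python
import numpy as np

def outward_normals(P):
    """Unit outward normals of a closed anti-clockwise polyline."""
    t = np.roll(P, -1, axis=0) - np.roll(P, 1, axis=0)   # central difference
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    return np.stack([t[:, 1], -t[:, 0]], axis=1)         # rotate by -pi/2

def redistribute(P, n_pts):
    """Resample the closed curve uniformly in arc length."""
    closed = np.vstack([P, P[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s_new = np.linspace(0.0, s[-1], n_pts, endpoint=False)
    return np.stack([np.interp(s_new, s, closed[:, 0]),
                     np.interp(s_new, s, closed[:, 1])], axis=1)

theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
P = np.stack([np.cos(theta), np.sin(theta)], axis=1)      # unit circle

dt, v_n = 0.01, -0.5                                      # inward motion
for _ in range(100):
    P = P + dt * v_n * outward_normals(P)                 # Euler step
    P = redistribute(P, 100)

# After time 1 the radius should be 1 + v_n * 1 = 0.5
assert np.allclose(np.linalg.norm(P, axis=1), 0.5, atol=1e-3)
```

In the full scheme the placeholder velocity is replaced by the Bessel line integrals around the current contour, which is where the quoted ${N}^{2}$ cost arises.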

## Electronic Supplementary Material

## Authors’ contributions

SC, HS and IB contributed equally. All authors read and approved the final manuscript.

## Declarations

## Authors’ Affiliations

## References

1. Coombes S: **Large-scale neural dynamics: simple and complex.** *NeuroImage* 2010, **52:**731–739. doi:10.1016/j.neuroimage.2010.01.045
2. Liley DTJ, Foster BL, Bojak I: **Co-operative populations of neurons: mean field models of mesoscopic brain activity.** In *Computational Systems Neurobiology*. Edited by: Le Novère N. Dordrecht, Berlin; 2012:317–364.
3. Folias SE, Bressloff PC: **Breathing pulses in an excitatory neural network.** *SIAM J Appl Dyn Syst* 2004, **3:**378–407. doi:10.1137/030602629
4. Folias SE, Bressloff PC: **Breathers in two-dimensional neural media.** *Phys Rev Lett* 2005, **95:** Article ID 208107.
5. Kilpatrick ZP, Bressloff PC: **Spatially structured oscillations in a two-dimensional excitatory neuronal network with synaptic depression.** *J Comput Neurosci* 2010, **28:**193–209. doi:10.1007/s10827-009-0199-6
6. Owen MR, Laing CR, Coombes S: **Bumps and rings in a two-dimensional neural field: splitting and rotational instabilities.** *New J Phys* 2007, **9:** Article ID 378.
7. French DA: **Identification of a free energy functional in an integro-differential equation model for neuronal network activity.** *Appl Math Lett* 2004, **17:**1047–1051. doi:10.1016/j.aml.2004.07.007
8. Kenet T, Bibitchkov D, Tsodyks M, Grinvald A, Arieli A: **Spontaneously emerging cortical representations of visual attributes.** *Nature* 2003, **425:**954–956. doi:10.1038/nature02078
9. Sadaghiani S, Hesselmann G, Friston KJ, Kleinschmidt A: **The relation of ongoing brain activity, evoked neural responses, and cognition.** *Front Syst Neurosci* 2010, **4:** Article ID 20.
10. Amari S: **Dynamics of pattern formation in lateral-inhibition type neural fields.** *Biol Cybern* 1977, **27:**77–87. doi:10.1007/BF00337259
11. Pismen LM: *Patterns and Interfaces in Dissipative Dynamics*. Springer Series in Synergetics. Springer, Berlin; 2006.
12. Goldstein RE, Muraki DJ, Petrich DM: **Interface proliferation and the growth of labyrinths in a reaction-diffusion system.** *Phys Rev E* 1996, **53:**4.
13. Goldstein RE: **Nonlinear dynamics of pattern formation in physics and biology.** In *Pattern Formation in the Physical and Biological Sciences*. Addison-Wesley, Reading; 1997:65–91.
14. Muratov CB: **Theory of domain patterns in systems with long-range interactions of Coulomb type.** *Phys Rev E* 2002, **66:** Article ID 066108.
15. Desai RC, Kapral R: *Dynamics of Self-Organized and Self-Assembled Structures*. Cambridge University Press, Cambridge; 2009.
16. Ermentrout GB, McLeod JB: **Existence and uniqueness of travelling waves for a neural network.** *Proc R Soc Edinb, Sect A, Math* 1993, **123:**461–478. doi:10.1017/S030821050002583X
17. Coombes S: **Waves, bumps and patterns in neural field theories.** *Biol Cybern* 2005, **93:**91–108. doi:10.1007/s00422-005-0574-y
18. Taylor JG: **Neural ‘bubble’ dynamics in two dimensions: foundations.** *Biol Cybern* 1999, **80:**393–409. doi:10.1007/s004220050534
19. Werner H, Richter T: **Circular stationary solutions in two-dimensional neural fields.** *Biol Cybern* 2001, **85:**211–217. doi:10.1007/s004220000237
20. Coombes S, Owen MR: **Evans functions for integral neural field equations with Heaviside firing rate function.** *SIAM J Appl Dyn Syst* 2004, **34:**574–600.
21. Laing CR, Troy WC: **PDE methods for nonlocal models.** *SIAM J Appl Dyn Syst* 2003, **2:**487–516. doi:10.1137/030600040
22. Pullin DI: **Contour dynamics methods.** *Annu Rev Fluid Mech* 1992, **24:**89–115. doi:10.1146/annurev.fl.24.010192.000513
23. Zabusky NJ, Hughes MH, Roberts KV: **Contour dynamics for the Euler equations in two dimensions.** *J Comput Phys* 1997, **135:**220–226. doi:10.1006/jcph.1997.5703
24. Watson GN: *A Treatise on the Theory of Bessel Functions*. Cambridge University Press, Cambridge; 2006.
25. Bressloff PC, Kilpatrick ZP: **Two-dimensional bumps in piecewise smooth neural fields with synaptic depression.** *SIAM J Appl Math* 2011, **71:**379–408. doi:10.1137/100799423
26. Pinto DJ, Ermentrout GB: **Spatially structured activity in synaptically coupled neuronal networks: I. Travelling fronts and pulses.** *SIAM J Appl Math* 2001, **62:**206–225. doi:10.1137/S0036139900346453
27. Pismen LM: **Nonlocal boundary dynamics of traveling spots in a reaction-diffusion system.** *Phys Rev Lett* 2001, **86:**548–551. doi:10.1103/PhysRevLett.86.548
28. Lu Y, Amari S: **Traveling bumps and their collisions in a two-dimensional neural field.** *Neural Comput* 2011, **23:**1248–1260. doi:10.1162/NECO_a_00111
29. Coombes S, Owen MR: **Exotic dynamics in a firing rate model of neural tissue with threshold accommodation.** In *Fluids and Waves: Recent Trends in Applied Analysis*. Contemp Math 440. Am Math Soc, Providence; 2007:123–144.
30. Ben-Yishai R, Bar-Or L, Sompolinsky H: **Theory of orientation tuning in visual cortex.** *Proc Natl Acad Sci USA* 1995, **92:**3844–3848. doi:10.1073/pnas.92.9.3844
31. Bressloff PC, Cowan JD: **Spherical model of orientation and spatial frequency tuning in a cortical hypercolumn.** *Philos Trans R Soc Lond B* 2003, **358:**1643–1667. doi:10.1098/rstb.2002.1109
32. Faye G, Chossat P, Faugeras O: **Analysis of a hyperbolic geometric model for visual texture perception.** *J Math Neurosci* 2011, **1:** Article ID 4.
33. Petitot J: **The neurogeometry of pinwheels as a sub-Riemannian contact structure.** *J Physiol* 2003, **97:**265–309.
34. Coombes S, Schmidt H: **Neural fields with sigmoidal firing rates: approximate solutions.** *Discrete Contin Dyn Syst, Ser A* 2010, **28:**1369–1379.
35. Coombes S, Owen MR: **Bumps, breathers, and waves in a neural network with spike frequency adaptation.** *Phys Rev Lett* 2005, **94:** Article ID 148102.
36. Bojak I, Liley DTJ: **Axonal velocity distributions in neural field equations.** *PLoS Comput Biol* 2010, **6:** Article ID e1000653.
37. Hagberg A, Meron E: **Order parameter equations for front transitions: nonuniformly curved fronts.** *Physica D* 1998, **123:**460–473. doi:10.1016/S0167-2789(98)00143-2
38. Kawaguchi S, Mimura M: **Collision of travelling waves in a reaction-diffusion system with global coupling effect.** *SIAM J Appl Math* 1999, **59:**920–941.
39. Ohta T: **Pulse dynamics in a reaction-diffusion system.** *Physica D* 2001, **151:**61–72. doi:10.1016/S0167-2789(00)00227-X
40. Bressloff PC: **Weakly-interacting pulses in synaptically coupled neural media.** *SIAM J Appl Math* 2005, **66:**57–81. doi:10.1137/040616371
41. Venkov NA: **Dynamics of neural field models.** PhD thesis. School of Mathematical Sciences, University of Nottingham; 2009. [http://www.umnaglava.org/pdfs.html]
42. Frigo M, Johnson SG: **The design and implementation of FFTW3.** *Proc IEEE* 2005, **93**(2):216–231.
43. Hairer E, Norsett SP, Wanner G: *Solving Ordinary Differential Equations I: Nonstiff Problems*. 2nd edition. Springer, Berlin; 1993.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.