- Research
- Open Access

# The Dynamics of Neural Fields on Bounded Domains: An Interface Approach for Dirichlet Boundary Conditions

*The Journal of Mathematical Neuroscience*
**volume 7**, Article number: 12 (2017)

## Abstract

Continuum neural field equations model the large-scale spatio-temporal dynamics of interacting neurons on a cortical surface. They have been extensively studied, both analytically and numerically, on bounded as well as unbounded domains. Neural field models do not require the specification of boundary conditions. Relatively little attention has been paid to the imposition of neural activity on the boundary, or to its role in inducing patterned states. Here we redress this imbalance by studying neural field models of Amari type (posed on one- and two-dimensional bounded domains) with Dirichlet boundary conditions. The Amari model has a Heaviside nonlinearity that allows for a description of localised solutions of the neural field with an interface dynamics. We show how to generalise this reduced but exact description by deriving a normal velocity rule for an interface that encapsulates boundary effects. The linear stability analysis of localised states in the interface dynamics is used to understand how spatially extended patterns may develop in the absence and presence of boundary conditions. Theoretical results for pattern formation are shown to be in excellent agreement with simulations of the full neural field model. Furthermore, a numerical scheme for the interface dynamics is introduced and used to probe the way in which a Dirichlet boundary condition can limit the growth of labyrinthine structures.

## Introduction

Neural field models are now widely recognised as a natural starting point for modelling the dynamics of cortical tissue. Since their initial inception in the 1970s by Wilson and Cowan [1, 2], Amari [3, 4], and Nunez [5], they have been extensively studied in idealised one-dimensional or planar settings, which are typically either infinite or isomorphic to a torus. This has facilitated both the mathematical and the numerical analyses of spatio-temporal patterns, and much has been learnt about localised states, global periodic patterns, and travelling waves. Indeed there are now a number of reviews summarising work to date, such as [6–9], and how neural field modelling has shed light on large-scale brain rhythms, geometric visual hallucinations, mechanisms for short term memory, motion perception, binocular rivalry, and anaesthesia, to list just a few of the more common application areas. For the most recent perspective on the development and use of neural field modelling we recommend the book by Coombes *et al*. [10], which also includes a tutorial review on the relevant mathematical methodologies (primarily drawn from functional analysis, Turing instability theory, applied nonlinear dynamics, perturbation theory, and scientific computation). This substantial body of knowledge is still expanding with further refinements of the original neural field models to include other important aspects of cortical neurobiology, including axonal delay [11], synaptic plasticity [12], and cortical folding [13], as well as rigorous mathematical results for existence and uniqueness of stationary solutions on bounded subsets of \(\mathbb{R}^{n}\) without regard to imposition of boundary conditions [14], and new numerical algorithms for their evolution and numerical bifurcation analysis [15, 16]. Neural field models are typically expressed in the form of integro-differential equations, whose associated Cauchy problems do not require the specification of boundary conditions. 
The value attained by the activity variable at the boundary is determined by the initial condition and by the non-local synaptic input. However, very little work has been done on the enforcement of boundary conditions in neural fields, or on their role in inducing patterned states. An exception to this statement is the work of Laing and Troy [17], who proposed an equivalent partial differential equation (PDE) formulation of the neural field equation. While boundary conditions must be specified in the PDE setting, they are often chosen to ensure the smooth decay of localised solutions rather than model any biophysical constraint. It is already appreciated that continuum neural fields can be extended to include different properties that can strongly influence the spatio-temporal dynamics of waves and patterns. For example, heterogeneities may give rise to wave scattering [18] or even extinction [19]. In this paper we address the role that a boundary can have in spatio-temporal patterning. Given the historical success of analysing neural fields with a Heaviside firing rate, our first step in this direction will be taken within the so-called “Heaviside world” of Amari [20]. Amari’s seminal work developed an approach for analysing localised solutions of neural field models posed on the real line, and has recently been extended to the planar case by Coombes *et al*. [21], albeit assuming that the synaptic connectivity can be expressed in terms of a linear combination of zeroth order modified Bessel functions of the second kind. This approach is not only able to describe localised stationary solutions, often called bumps in one dimension and spots in two dimensions, but also to dynamically evolve states such as travelling pulses and their transients, as well as spreading labyrinthine patterns.
Since the Amari approach, in either one or two spatial dimensions, effectively tracks the boundary between a high and low state of neural activity, where the firing rate switches, we shall refer to it as an *interface dynamics*. Importantly it gives a reduced description of solutions to a neural field model without any approximation. On the down side the approach cannot be generalised to treat smooth firing rate functions, though simulations by many authors have shown that the behaviour of the Amari model is consistent with neural field models utilising a steep sigmoidal function. Here we show how the interface dynamics approach can be generalised to include the effects of Dirichlet boundary conditions for arbitrary choices of synaptic connectivity.

In Sect. 2 we introduce a simple scalar neural field model in the form of an integro-differential equation defined on a finite domain, and discuss natural boundary conditions for neural tissue. Focussing on Dirichlet boundary conditions we develop the key mathematical idea in this paper: namely, that the re-formulation of the original scalar model in terms of the evolution of its gradient allows for an interface description that respects Dirichlet boundary conditions. To illustrate the effectiveness of this approach we first treat the example of localised states in a one-dimensional model in Sect. 3. This is a useful primer for the construction of an interface dynamics in a two-dimensional model, presented in Sect. 4. The first part of Sect. 4 also shows how to generalise the original treatment in [21], for infinite domains, to handle arbitrary choices of the synaptic connectivity function (removing the restriction to combinations of Bessel functions). Localised bump and spot solutions of the interface dynamics are explicitly constructed and their stability determined. In Sect. 5 we extend this approach to treat Dirichlet boundary conditions, and in Sect. 6 we show explicitly how this approach can be used to handle spots and their azimuthal instabilities. We work with standard Mexican hat synaptic connectivities, as well as piece-wise constant caricatures for which calculations simplify. All our theoretical results are found to be in excellent agreement with direct simulations of the original neural field model. We also develop a numerical scheme to evolve the interface dynamics and use this to highlight how a Dirichlet boundary condition can limit the growth of a spreading pattern arising from the azimuthal instability of a spot. Finally in Sect. 7 we discuss natural extensions of the work in this paper.

## A Neural Field Model with a Boundary Condition

Although single neuron models are able to predict dynamical activity of real neurons that have a wide variety of spiking behaviour [22, 23], they are not well suited to describe the behaviour of tissue on the meso- or macro-cortical scale. To a first approximation the cortex is often viewed as being built from a dense reciprocally interconnected network of cortico-cortical axonal pathways [24]. These fibres make connections within the roughly 3 mm outer layer of the cerebrum. Given the large surface area of the (folded) cortex (∼800–\(1500~\mbox{cm}^{2}\)) and its small depth, it is sensible to view it as a two-dimensional surface. Neural field modelling, on a line or a surface, is a very well-known framework for capturing the dynamics of cortex at this coarse level of description [10]. As well as being relevant to large-scale electroencephalogram (EEG) and magnetoencephalogram (MEG) neuroimaging studies [8], and to the understanding of epileptic seizures [25], visual hallucinations [26, 27], and neural spiral waves [28, 29], neural fields have also been used to investigate localised states linked to short term working memory in the prefrontal cortex [30, 31]. In the latter regard the idealised neural field model of Amari has proven especially advantageous [32]. This was originally posed on an infinite domain, without regard to the role of boundary conditions in shaping or creating patterns. However, the neural circuits of the neocortex are adapted to many different tasks, giving rise to functionally distinct areas such as the prefrontal cortex (for problem solving), motor association cortex (for coordination of complex movement), the primary sensory cortices (for vision, hearing, somatic sensation), Wernicke’s area (language comprehension), Broca’s area (speech production and articulation), etc. Thus it would seem reasonable to parcellate their functional activity by the use of appropriate boundaries and boundary conditions. Previous work by Daunizeau *et al*.
[33] on dynamic causal modelling for evoked responses using neural field equations has used Dirichlet boundary conditions. Here we extend the standard Amari model with the inclusion of a finite domain with an imposed Dirichlet boundary condition that *clamps* neural activity at the boundary to a specific value. Of course other choices are possible, though this one is a natural way to enforce a functional separation between cortical areas.

The scalar neural field model that we consider is given by

$$\frac{\partial u(\boldsymbol{x},t)}{\partial t} = -u(\boldsymbol{x},t) + \int_{\varOmega} w\bigl( \vert \boldsymbol{x}-\boldsymbol{x}' \vert \bigr) H\bigl(u\bigl(\boldsymbol{x}',t\bigr)-\kappa\bigr) \,\mathrm{d} \boldsymbol{x}', \quad (1)$$

where *Ω* is a planar domain \(\varOmega\subseteq \mathbb{R}^{2}\), with \(\boldsymbol {x} \in\varOmega\) and \(t \in \mathbb{R}^{+}\). The variable *u* represents synaptic activity and the kernel *w* represents anatomical connectivity. For simplicity we shall only consider the case that this depends on Euclidean distance. The nonlinear function *H* represents the firing rate of the tissue and will be taken to be a Heaviside so that the parameter *κ* is interpreted as a firing threshold. We assume that a suitable initial condition is specified for (1), and we aim to impose on the corresponding solution \(u(\boldsymbol {x},t)\) the Dirichlet boundary condition

$$u(\boldsymbol{x},t) = u_{\text{BC}}, \quad \boldsymbol{x} \in \partial\varOmega, \quad (2)$$

where \(u_{\text{BC}}\) is the prescribed boundary activity. For simplicity, we treat the case of homogeneous boundary conditions.

It was the essential insight of Amari that the Heaviside choice allows the explicit construction of localised states (stationary bumps and travelling pulses) on infinite domains, as well as the construction of these on finite domains without a boundary condition. Our key observation that allows the extension of the Amari approach to handle the boundary condition (2) is to *expose* this constraint by writing the state of the system in terms of a line integral:

$$u(\boldsymbol{x},t) = u_{\text{BC}} + \int_{\varGamma(\boldsymbol{x})} \boldsymbol{z}\bigl(\boldsymbol{x}',t\bigr) \cdot \mathrm{d} \boldsymbol{x}'. \quad (3)$$

Here \(\varGamma(\boldsymbol {x})\) denotes an arbitrary path that connects a point on the boundary to the point \(\boldsymbol {x}\) within its interior, and \(\boldsymbol {z} = \nabla_{\boldsymbol {x}} u \in \mathbb{R}^{2}\). An evolution equation for \(\boldsymbol {z}\) is easily constructed by differentiation of (1) to give

$$\frac{\partial \boldsymbol{z}(\boldsymbol{x},t)}{\partial t} = -\boldsymbol{z}(\boldsymbol{x},t) + \nabla_{\boldsymbol{x}} \int_{\varOmega} w\bigl( \vert \boldsymbol{x}-\boldsymbol{x}' \vert \bigr) H\bigl(u\bigl(\boldsymbol{x}',t\bigr)-\kappa\bigr) \,\mathrm{d} \boldsymbol{x}', \quad (4)$$

with *u* given by (3).

We shall now consider Eqs. (3) and (4) as the neural field model of choice, and in the next sections develop the extension of the Amari interface dynamics. To set the scene we first consider a one-dimensional spatial model with a focus on stationary bump solutions.

## One Spatial Dimension: A Primer

Prior to describing the analysis for a two-dimensional Amari neural field model with a Dirichlet boundary condition, we first consider the more tractable one-dimensional case. This illustrates the main components of our mathematical approach, as well as delivering new results about stable boundary-induced bumps.

The one-dimensional version of (3) and (4) on the finite domain \([-L,L]\) with an imposed boundary condition takes the explicit form

$$\frac{\partial z(x,t)}{\partial t} = -z(x,t) + \frac{\partial}{\partial x} \int_{-L}^{L} w\bigl( \vert x-y \vert \bigr) H\bigl(u(y,t)-\kappa\bigr) \,\mathrm{d} y, \quad (5)$$

with

$$u(x,t) = u_{\text{BC}} + \int_{-L}^{x} z(y,t) \,\mathrm{d} y. \quad (6)$$

Here \(x \in[-L,L]\), \(t \in \mathbb{R}^{+}\), and \(u_{\text{BC}}\) denotes a constant boundary value imposed on the left end of the interval, namely \(u(-L)=u_{\text{BC}}\). In passing, we note that \(u(L)\) is determined once \(u(-L)\) is fixed, and some choices of \(u_{\text{BC}}\) will result in an even bump \(u(x)\), for which \(u(L) = u(-L) = u_{\text{BC}}\). We now focus on a bump solution for which \(R(u) = \{ u(x) >\kappa\}\) is a finite, connected open interval. The *edges* of the bump \(x_{i}(t)\), \(i=1,2\), are defined by a level-set condition that takes the form

$$u\bigl(x_{i}(t),t\bigr) = \kappa, \quad i=1,2. \quad (7)$$
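The level-set condition can also be exploited numerically: given a sampled field, the bump edges are simply the threshold crossings. The following sketch (our own illustration, not part of the paper's numerical scheme) locates them by linear interpolation:

```python
import numpy as np

def find_interfaces(x, u, kappa):
    """Locate the threshold crossings u(x) = kappa of a sampled field.

    x, u : 1D arrays sampling the activity on [-L, L]; kappa : firing threshold.
    Returns the crossing positions in ascending order, e.g. bump edges x_1 < x_2.
    """
    s = np.sign(u - kappa)
    # cells [x[i], x[i+1]] in which u - kappa changes sign
    idx = np.where(s[:-1] * s[1:] < 0)[0]
    crossings = []
    for i in idx:
        # linear interpolation of the crossing inside the cell
        t = (kappa - u[i]) / (u[i + 1] - u[i])
        crossings.append(x[i] + t * (x[i + 1] - x[i]))
    return np.array(crossings)
```

For a Gaussian profile \(u(x)=\mathrm{e}^{-x^{2}}\) and \(\kappa=1/2\), for instance, this recovers the two edges \(x_{1,2}=\mp\sqrt{\ln2}\).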

We shall refer to the two bump edges as the *interface*, as they naturally separate regions of high and low firing activity. The differentiation of the level-set condition (7) generates a rule for the evolution of the interface according to

$$\frac{\mathrm{d} x_{i}}{\mathrm{d} t} = -\left.\frac{\partial u/\partial t}{\partial u /\partial x}\right|_{x=x_{i}(t)}. \quad (8)$$

Using the second fundamental theorem of calculus we obtain an expression for the interfacial velocities

$$\frac{\mathrm{d} x_{i}}{\mathrm{d} t} = \frac{\kappa - u_{\text{BC}} - \psi(x_{i},t) + \psi(-L,t)}{z(x_{i},t)}, \quad i=1,2, \quad (9)$$

where

$$\psi(x,t) = \int_{x_{1}(t)}^{x_{2}(t)} w\bigl( \vert x-y \vert \bigr) \,\mathrm{d} y. \quad (10)$$

A closed form expression for \(z(x,t)\) may also be found by integrating (5) using the method of variation of parameters to give

$$z(x,t) = \mathrm{e}^{-t} z(x,0) + \int_{0}^{t} \eta(t-s) \frac{\partial \psi(x,s)}{\partial x} \,\mathrm{d} s, \quad (11)$$

where \(\eta(t) = \mathrm { e}^{-t}H(t)\), and *H* is the Heaviside step function. Equations (9)–(11) determine the interface dynamics for time-dependent spatially localised bump solutions that respect the Dirichlet boundary condition.

Since it is well known that the Amari model supports a stationary bump solution when the synaptic connectivity has a Mexican hat shape we now revisit this scenario and choose

where \(b_{1},b_{2},c >0\). Moreover, we will focus on the case that the stationary bump is symmetric about the origin. In this case demanding that the interface velocity is equal to zero requires that the numerator in (9) vanish. The formula for *ψ* given by (10) will also become time independent, and if we denote this by \(\mathcal{P}(x)\) then we have

where we have set \(x_{1} = -\Delta/2\) and \(x_{2}=\Delta/2\) so that the *bump width* is given by \(\Delta= x_{2}-x_{1}\). The formula for \(\mathcal{P}\) is easily calculated as \(\mathcal{P}(x) = p(x;a_{2},b_{2})-p(x;a_{1},b_{1})\), where

Hence, the bump width is determined implicitly by the single Eq. (13), and the bump shape, \(q(x)\), is calculated from (6) as

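For orientation, the classical Amari construction (on an infinite domain, without a boundary condition) determines the bump width from \(W(\Delta)=\kappa\) with \(W(\Delta)=\int_{0}^{\Delta}w(y)\,\mathrm{d}y\); the Dirichlet terms of the present section modify this condition. The sketch below solves the classical condition numerically for an illustrative difference-of-Gaussians kernel (the parameter values are assumptions for illustration, not the paper's):

```python
import math

# Illustrative kernel parameters (assumed, not taken from the paper)
a1, b1, a2, b2 = 1.0, 1.0, 0.5, 0.25

def W(delta):
    """W(Delta) = int_0^Delta w(y) dy for w(y) = a1 e^{-b1 y^2} - a2 e^{-b2 y^2}.

    Uses the closed form int_0^D a e^{-b y^2} dy = (a/2) sqrt(pi/b) erf(sqrt(b) D).
    """
    g = lambda a, b: 0.5 * a * math.sqrt(math.pi / b) * math.erf(math.sqrt(b) * delta)
    return g(a1, b1) - g(a2, b2)

def bump_widths(kappa, delta_max=20.0, n=400):
    """All solutions of the classical Amari condition W(Delta) = kappa,
    found by bracketing sign changes on a grid and then bisecting."""
    f = lambda d: W(d) - kappa
    grid = [delta_max * (i + 1) / n for i in range(n)]
    roots = []
    for lo, hi in zip(grid[:-1], grid[1:]):
        if f(lo) * f(hi) < 0:
            for _ in range(60):  # bisection to machine precision
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots
```

For a balanced Mexican hat such as this one, \(W\) rises to a maximum and decays back to zero, so two bumps coexist for sufficiently small *κ*, the classical Amari scenario recovered in Fig. 1(A).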
To determine the stability of the bump solution we can follow the original approach of Amari and linearise the interface dynamics around the stationary values for \(x_{i}\). Alternatively we can follow the Evans function approach, reviewed in [34], which considers perturbations at all values of *x* (rather than just at the bump edges). Here we pursue the latter approach, though it is straightforward to check that the former approach gives the same answer.

To determine the linear stability of a bump we write \(u(x,t) = q(x) + \mathrm { e}^{\lambda t} \tilde{u}(x) \) where \(\tilde{u} \ll 1\). In this case the corresponding change to *z* is given by \(z(x,t) = \,\mathrm{d} q(x)/ \,\mathrm{d} x + \mathrm { e}^{\lambda t} \tilde{z}(x)\), where \(\tilde{z}(x) = \partial\tilde{u}(x) /\partial x\). Expanding (5) to first order gives

For the Dirac-delta function occurring under the integral, we can use the formal identity

$$\delta\bigl(q(y)-\kappa\bigr) = \sum_{i=1,2} \frac{\delta(y-x_{i})}{ \vert q'(x_{i}) \vert },$$

and integrate (16) from −*L* to *x* and use \(\tilde {u}(-L)=0\) to obtain

Here \(q'(x)=\mathcal{P}'(x) = w( \vert x-x_{1} \vert )-w( \vert x-x_{2} \vert )\).

From (18) we may generate two equations for the amplitudes \((\tilde{u}(x_{1}),\tilde{u}(x_{2}))\) by setting \(x=x_{1}\) and \(x=x_{2}\). This gives a linear system of equations that we can write in the form \([\mathcal{A} -(\lambda+1) I](\tilde{u}(x_{1}),\tilde {u}(x_{2}))=(0,0)\), where

Requiring non-trivial solutions gives a formula for the spectrum as \(\det[\mathcal{A} -(\lambda+1) I]=0\), which yields

Hence a bump solution will be stable provided \(\text{Re} \lambda_{\pm}<0\).
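Computationally, this stability test amounts to shifting the eigenvalues of \(\mathcal{A}\) by −1; a minimal sketch (the entries of \(\mathcal{A}\) would be assembled from kernel values as in the text):

```python
import numpy as np

def bump_spectrum(A):
    """Solve det[A - (lambda + 1) I] = 0 for a given 2x2 interface matrix A.

    The solutions are lambda = mu - 1, with mu an eigenvalue of A; the bump
    is linearly stable if max Re lambda < 0.
    """
    lam = np.linalg.eigvals(np.asarray(A, dtype=float)) - 1.0
    return lam, bool(np.max(lam.real) < 0.0)
```

For a symmetric matrix with diagonal entries *α* and off-diagonal entries *β* this gives \(\lambda_{\pm} = \alpha\pm\beta - 1\).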

In Fig. 1 we plot the bump width as a function of the threshold *κ* for a neural field posed on a finite domain (Fig. 1(A)) and for the reformulated neural field with an imposed Dirichlet boundary condition \(u_{\mathrm{BC}}=0\) (Fig. 1(B)), using solid (dashed) lines for stable (unstable) solutions. For the former case we recover the expected Amari result, namely that there is coexistence between two bumps, the widest of which is stable. However, when we impose a Dirichlet boundary condition, four coexisting bumps are found for sufficiently large *κ*, and two of these bumps are stable. In other words, the Dirichlet boundary condition induces a new stable bump, whose active region occupies a large portion of the domain.

## Two Spatial Dimensions: Infinite Domain

Before discussing the extension of Sect. 3 to a finite two-dimensional domain with an imposed boundary condition it is first instructive to consider the problem posed on \(\mathbb{R}^{2}\). An interface description for this case was originally developed in [21], albeit for a special choice of synaptic connectivity kernel. By exploiting certain properties of the modified Bessel function of the second kind it was possible to reformulate integrals over two-dimensional domains in terms of one-dimensional line integrals. This allowed the interface dynamics to be expressed solely in terms of the shape of the active region in the tissue, namely a one-dimensional closed curve. Here we extend this approach to a far more general class of synaptic connectivity kernels, which include combinations of radially symmetric Gaussian functions (12).

We consider the integro-differential equation given by (1) with \(\varOmega= \mathbb{R}^{2}\). We decompose the domain *Ω* by writing \(\varOmega= \varOmega_{+} \cup\partial\varOmega_{+} \cup \varOmega_{-}\) where \(\partial\varOmega_{+}\) represents the level-set which separates the \(\varOmega_{+}\) (excited) and \(\varOmega_{-}\) (quiescent) regions. These domains are given explicitly by \(\varOmega_{+} = \{ \boldsymbol {x} | u(\boldsymbol {x}) >\kappa\}\), \(\varOmega_{-} = \{ \boldsymbol {x} | u(\boldsymbol {x}) <\kappa\}\), and \(\partial\varOmega_{+} = \{ \boldsymbol {x} | u(\boldsymbol {x}) =\kappa\}\). We shall assume that \(\partial\varOmega_{+}\) is a closed contour (or a finite set of disconnected closed contours). In Fig. 2 we show a direct numerical simulation of the full space–time model to illustrate that a synaptic connectivity function that is a radially symmetric difference of Gaussians can support a spreading labyrinthine pattern. Similar patterns have previously been reported and discussed in [21, 35] for both Heaviside and steep sigmoidal firing rate functions. A description of the numerical scheme used to evolve the full space–time model is given in Appendix 1. Differentiation of the level-set condition \(u(\partial\varOmega_{+}(t),t)=\kappa\) gives the normal velocity rule:

$$c_{n} = \frac{\partial u/\partial t}{ \vert \nabla_{\boldsymbol{x}} u \vert } = \frac{-\kappa + \psi}{ \vert \nabla_{\boldsymbol{x}} u \vert } \quad \text{on } \partial\varOmega_{+}(t), \quad (21)$$
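A direct simulation of this kind can be sketched with a forward-Euler step and FFT-based convolution on a periodic grid. The minimal version below is our own (with assumed parameter values, and periodic rather than bounded geometry, so it mimics the infinite-domain dynamics of Fig. 2 rather than the paper's Appendix 1 scheme):

```python
import numpy as np

# Grid and illustrative parameters (assumed for this sketch)
N, L, dt, kappa = 128, 10.0, 0.1, 0.1
x = np.linspace(-L, L, N, endpoint=False)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2
dx = 2 * L / N

# Radially symmetric difference-of-Gaussians kernel sampled on the grid;
# ifftshift moves the kernel centre to the origin for circular convolution
w = np.exp(-r2) - 0.17 * np.exp(-r2 / 4.0)
w_hat = np.fft.fft2(np.fft.ifftshift(w)) * dx**2  # area weight approximates the integral

def step(u):
    """One forward-Euler step of u_t = -u + w * H(u - kappa), periodic grid."""
    fire = (u > kappa).astype(float)  # Heaviside firing rate
    conv = np.real(np.fft.ifft2(w_hat * np.fft.fft2(fire)))
    return u + dt * (-u + conv)

# Spot-like initial condition, evolved to t = 10
u = np.exp(-r2)
for _ in range(100):
    u = step(u)
```

Depending on the kernel balance and threshold, initial spots in such simulations either collapse, stabilise, or destabilise azimuthally and spread into labyrinthine structures.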

where we have introduced the normal vector \(\boldsymbol {n} = - \nabla_{\boldsymbol {x}} u / \vert \nabla_{\boldsymbol {x}} u \vert \) along \(\partial\varOmega_{+}(t)\). We will now show that \(c_{n}\) can be expressed solely in terms of integrals along \(\partial\varOmega_{+}(t)\). Let us first consider the denominator in (21). The temporal integration of (4), using variation of parameters, gives

$$\boldsymbol{z}(\boldsymbol{x},t) = \mathrm{e}^{-t} \boldsymbol{z}_{0}(\boldsymbol{x}) + \int_{0}^{t} \eta(t-s) \nabla_{\boldsymbol{x}} \psi(\boldsymbol{x},s) \,\mathrm{d} s, \quad (22)$$

where \(\eta(t) = \mathrm { e}^{-t} H(t)\), \(z_{0}(\boldsymbol {x})=\nabla_{\boldsymbol {x}} u(\boldsymbol {x},0)\) denotes gradient information at \(t=0\), and

$$\psi(\boldsymbol{x},t) = \int_{\varOmega_{+}(t)} w\bigl( \vert \boldsymbol{x}-\boldsymbol{x}' \vert \bigr) \,\mathrm{d} \boldsymbol{x}'. \quad (23)$$
The term \(\nabla_{\boldsymbol {x}} \psi\) in (22) can be constructed as a line integral using the integral vector identity:

$$\nabla_{\boldsymbol{x}} \psi(\boldsymbol{x},t) = -\oint_{\partial\varOmega_{+}(t)} w\bigl( \vert \boldsymbol{x}-\boldsymbol{\gamma}(s) \vert \bigr) \boldsymbol{n}(s) \,\mathrm{d} s. \quad (24)$$

Thus the denominator in (21) can be expressed solely in terms of a line integral around the contour \(\partial\varOmega_{+}(t)\). The representation of the numerator in (21) in terms of a line integral rather than a double integral is more challenging. In previous work we have shown that this can be achieved for the special case that the weight kernel is constructed from a linear combination of zeroth order modified Bessel functions of the second kind [21]. In Appendix 2 we show that a line-integral representation can be constructed for a far more general class of anatomical connectivity patterns, making use of the divergence theorem. Using this result the numerator of (21) can be written

where

and *s* is a parametrisation for points on the contour \(\boldsymbol {\gamma} \in\partial\varOmega_{+}\). Here,

Hence the normal velocity rule (21) can be expressed solely in terms of one-dimensional line integrals involving the shape of the active region \(\varOmega_{+}\) (which is prescribed by \(\partial\varOmega_{+}\)). This is a substantial reduction in description as compared to the full space–time model, yet is exact.

As an example of the approach above let us consider a difference of Gaussians with \(w(r)\) given by (12). A simple calculation for this choice shows that \(\mathcal{K}= \sqrt{\pi/c} [a_{1} \sqrt{b_{1}} - a_{2} \sqrt{b_{2}}]\) and

In Fig. 3 we show a numerical simulation prescribed by the interface method, with initial data equivalent to that from the full space–time simulation shown in Fig. 2. The excellent agreement between the two figures is easily observed. The full details of our numerical scheme for implementing the interface dynamics are given in Appendix 3.

## Two Spatial Dimensions: Dirichlet Boundary Condition

Using the notation of Sect. 4 we now show how to extend the one-dimensional approach of Sect. 3 to develop an interface dynamics for planar Amari models on a bounded domain with Dirichlet boundary conditions. For a single active region the dynamics for \(\boldsymbol {z}(\boldsymbol {x},t)\) is given by (4), which can be written succinctly as

$$\frac{\partial \boldsymbol{z}(\boldsymbol{x},t)}{\partial t} = -\boldsymbol{z}(\boldsymbol{x},t) + \nabla_{\boldsymbol{x}} \psi(\boldsymbol{x},t), \quad (29)$$

with *ψ* given by (23), or in terms of \(\partial \varOmega_{+}(t)\), by (26). Using (3) the level-set condition is

$$u_{\text{BC}} + \int_{\varGamma(\boldsymbol{x})} \boldsymbol{z}\bigl(\boldsymbol{x}',t\bigr) \cdot \mathrm{d} \boldsymbol{x}' = \kappa, \quad \boldsymbol{x} \in \partial\varOmega_{+}(t). \quad (30)$$

Using the identity

and differentiating (30) with respect to *t*, we obtain the normal velocity rule

$$c_{n} = \frac{\partial u/\partial t}{ \vert \boldsymbol{z} \vert } \bigg\vert _{\partial\varOmega_{+}(t)}. \quad (32)$$

Here the normal vector is given by \(\boldsymbol {n} = -\boldsymbol {z}/ \vert \boldsymbol {z} \vert \) along the contour \(\partial\varOmega_{+}\). Using (29) we may write the numerator in the normal velocity rule (32) as

$$\frac{\partial u}{\partial t} = u_{\text{BC}} - \kappa + \psi(\boldsymbol{x},t) - \psi\bigl(\boldsymbol{\zeta}(\boldsymbol{x}),t\bigr), \quad \boldsymbol{x} \in \partial\varOmega_{+}(t), \quad (33)$$

where \(\boldsymbol {\zeta}: \partial\varOmega_{+}(t) \rightarrow\partial\varOmega \) is a mapping from points on the contour \(\partial\varOmega_{+}(t)\) to points on the boundary *∂Ω*.

Hence, using the formulae for **z** and *ψ* from Sect. 4, namely Eqs. (22), (24), and (26), all of the terms in the normal velocity rule (32) may be expressed as one-dimensional line integrals. This yields the interface dynamics for Dirichlet boundary conditions, and once again we see that it is a reduced yet exact alternative formulation of the full space–time model. In contrast to the interface dynamics on an infinite domain, one need only develop further numerical algorithms for computing the line integral in (3). The numerical method for implementing the interface dynamics can be based upon that for an infinite domain, with a specific choice for the paths *Γ* defining this integral. Each of the paths *Γ* connects a point **x** in the interior of the domain to a point on the boundary, and we set \(\boldsymbol {\zeta}(\partial\varOmega_{+}(t))\) to be the endpoint of \(\varGamma (\partial\varOmega_{+}(t))\) (see Appendix 3 for details on the numerical scheme). Note that we do not have to numerically integrate along this path (to determine the normal velocity), and that we need only determine the values of \(\psi(\boldsymbol {x},t)\) at the two endpoints.

Figure 4 shows a direct numerical simulation computed using the evolution of the gradient \(\boldsymbol {z}= \nabla_{\boldsymbol {x}} u\) as well as the corresponding interface dynamics. We see excellent agreement between the two approaches. The obvious advantage of the interface dynamics is that one need only evolve the shape of the active region to fully reconstruct the full space–time dynamics using (3) and (24). We see from Fig. 4 that the main effect of the Dirichlet boundary condition is to limit the spread of a labyrinthine structure and ultimately induce a highly structured stationary pattern, as expected.

## Spots in a Circular Domain: Dirichlet Boundary Condition

Given the large amount of historical interest in spot solutions of neural field models on infinite domains, and those on finite domains without incorporating the role of boundary conditions [35–39], it is worthwhile to revisit this specific class of solutions on a finite disc with an imposed Dirichlet boundary condition. We shall consider radially symmetric synaptic connectivity kernels and a disc of radius *D* with a spot (circularly symmetric) solution of radius *R*. In this case \(u(\boldsymbol {r},t)=q(r)\) with \(r= \vert \boldsymbol {r} \vert \) for all *t*, and \(q(D)=u_{\text{BC}}\), with \(q(R)=\kappa\) and \(q(r) > \kappa \) for \(r< R\) and \(q(r) < \kappa\) for \(R< r< D\). We shall denote the corresponding stationary field for *ψ* by \(\psi (r)\), and this is conveniently constructed from (26).
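For cross-checking interface formulae it is useful to be able to evaluate *ψ* for a spot by direct quadrature over the active disc. The sketch below (our own, with a hypothetical Gaussian kernel) uses the law of cosines for the distance between a field point at radius *r* and an integration point at radius \(r'\) and angle *θ*:

```python
import math

def psi_spot(r, R, w, n_r=200, n_th=200):
    """psi(r): integral of the radially symmetric kernel w(|x - x'|) over a
    disc of radius R centred at the origin (the active region), evaluated at
    a point a distance r from the centre. Midpoint rule in polar coordinates;
    the distance uses the law of cosines."""
    total = 0.0
    dr = R / n_r
    dth = 2.0 * math.pi / n_th
    for i in range(n_r):
        rp = (i + 0.5) * dr
        for j in range(n_th):
            th = (j + 0.5) * dth
            d = math.sqrt(r * r + rp * rp - 2.0 * r * rp * math.cos(th))
            total += w(d) * rp  # polar area element rp * dr * dth
    return total * dr * dth
```

At the centre of the spot this can be checked against the closed form \(\int_{0}^{R} \mathrm{e}^{-r'^{2}} 2\pi r' \,\mathrm{d}r' = \pi(1-\mathrm{e}^{-R^{2}})\) for a unit Gaussian kernel.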

### Construction

An implicit equation for the radius of the bump is obtained after setting the normal velocity to zero. Using (32) and (33) this yields

$$\kappa = u_{\text{BC}} + \psi(R) - \psi(D). \quad (34)$$

For the specific choice of a difference of Gaussians given by (12) we may write this in the form

with

and \(\mathcal{Q}(\theta) = \sqrt{R^{2} + r^{2} -2 R r \cos\theta}\). Although (36) is in closed form it is a challenge to perform the integral analytically. Thus it is also of interest to consider synaptic connectivity kernels for which more explicit progress can be made. A case in point is that of piece-wise constant functions.

Let us first consider a top-hat connectivity defined by

$$w(r) = H(\sigma- r), \quad \sigma>0. \quad (37)$$

In this case it is easier to construct \(\psi(r)\) directly from (23) as

For the top-hat shape (37) we may split the above integral as

Introducing the area \(A_{+}(r,\sigma)\) as

where \(A_{+}(R, \sigma) = \kappa\), the self-consistent equation for a spot (34) takes the form

Following the work of Herrmann *et al*. [40], we now show how to evaluate the integral (40) using simple geometric ideas. For example, the area \(A_{+}(R, \sigma)\) can be calculated in terms of the area of overlap of two circles, one of centre **0** and radius *R*, and the other of centre **r** and radius *σ*, subject to the constraint \(r=R\).

Using the results from Appendix 4 we have \(A_{+}(r, \sigma)=A(R,\phi_{0}(r,\sigma))+A(\sigma,\phi_{1}(r,\sigma ))\), where \(A(r,\phi) = r^{2}(\phi-\sin\phi)/2\) and

with \(R>D-\sigma\).
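The overlap area itself is standard computational geometry; the following sketch evaluates it via the equivalent "lens" formula, which is built from the same circular-segment areas \(A(r,\phi)=r^{2}(\phi-\sin\phi)/2\) used above:

```python
import math

def circle_overlap(R, sigma, d):
    """Area of intersection of a circle of radius R centred at the origin and
    a circle of radius sigma whose centre lies a distance d away. The two
    acos terms give the half-lens angles phi_0/2 and phi_1/2, and each
    contribution is a circular segment A(r, phi) = r^2 (phi - sin phi) / 2."""
    if d >= R + sigma:           # disjoint circles
        return 0.0
    if d <= abs(R - sigma):      # one circle contained in the other
        return math.pi * min(R, sigma) ** 2
    phi0 = 2.0 * math.acos((d * d + R * R - sigma * sigma) / (2.0 * d * R))
    phi1 = 2.0 * math.acos((d * d + sigma * sigma - R * R) / (2.0 * d * sigma))
    return (0.5 * R * R * (phi0 - math.sin(phi0))
            + 0.5 * sigma * sigma * (phi1 - math.sin(phi1)))
```

For two unit circles with centres a unit distance apart this returns the familiar lens area \(2\pi/3 - \sqrt{3}/2\).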

Another natural piece-wise constant choice is the piece-wise constant Mexican hat shape given by

Using a similar argument to that for the top-hat connectivity we find that

with \(R>D-\sigma_{1}\).

### Stability

The stability of spots without boundary conditions has been treated by several authors; see [38] for a recent overview. Here we extend this approach to treat a finite domain with an imposed Dirichlet boundary condition, following very similar arguments to those presented in Sect. 3.

To determine the linear stability of a spot we write \(u(\boldsymbol {r},t) = q(r) + \mathrm { e}^{\lambda t} \cos(m \theta) \tilde{u}(r) \) where \(\tilde{u} \ll1\) and \(m\in \mathbb{N}\). In this case the corresponding change to ** z** is given by \(\boldsymbol {z}(\boldsymbol {r},t) = \nabla_{\boldsymbol {r}} q(r)+ \mathrm { e}^{\lambda t} \cos(m \theta) \tilde{ \boldsymbol {z}}(\boldsymbol {r})\), where \(\tilde{\boldsymbol {z}}(\boldsymbol {r})= \nabla_{\boldsymbol {r}} \tilde{u}(r)\). Expanding (4) to first order gives

where \(\vert \boldsymbol {r}-\boldsymbol {r}' \vert =\sqrt{r^{2}+r^{\prime 2} -2 r r' \cos\theta}\). Using properties of the Dirac-delta distribution we find

Since the term in square brackets in (46) is radially symmetric we may integrate in the radial direction using \(\tilde{u}(D)=0\) to obtain

Setting \(r=R\) in (47) and demanding non-trivial solutions gives an equation for the eigenvalues *λ* in the form \(\mathcal{E}_{m}(\lambda)=0\), \(m \in \mathbb{N}\), where

Thus a spot solution will be stable provided \(\lambda_{m} <0\) for all \(m \in \mathbb{N}\), where \(\lambda_{m}\) is a zero of \(\mathcal{E}_{m}(\lambda)\). Once again the choice of a piece-wise constant connectivity function considerably simplifies further calculations. For example, for the top-hat function given by (37) it is simple to show that

and

where \(\theta^{*}\) is the smaller of the two roots of the equation \(R \sqrt{2(1-\cos\theta)}=\sigma\) for \(\theta\in[0,2 \pi)\). Equation (50) allows for the explicit evaluation of (48) for a piece-wise constant synaptic connectivity.
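The root \(\theta^{*}\) can be written down explicitly: since \(\sqrt{2(1-\cos\theta)} = 2\sin(\theta/2)\) on \([0,2\pi)\), the chord condition \(R\sqrt{2(1-\cos\theta)}=\sigma\) gives \(\theta^{*} = 2\arcsin (\sigma/(2R) )\) for \(\sigma\leq2R\). A small check:

```python
import math

def theta_star(R, sigma):
    """Smaller root in [0, 2*pi) of R * sqrt(2 * (1 - cos(theta))) = sigma.

    Since sqrt(2 * (1 - cos(theta))) = 2 * sin(theta / 2) on [0, 2*pi), the
    smaller root is 2 * asin(sigma / (2 * R)), valid for sigma <= 2 * R."""
    if sigma > 2.0 * R:
        raise ValueError("no root: chord length sigma exceeds the diameter 2R")
    return 2.0 * math.asin(sigma / (2.0 * R))
```

The larger root is then \(2\pi-\theta^{*}\), by the symmetry of the chord length about \(\theta=\pi\).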

Using the above analysis we find, for the smooth Mexican hat function given by (12), that for large domains a wide and a narrow spot can coexist for a sufficiently low value of the threshold *κ*. Moreover, the narrow spots are always unstable (to modes with \(m=0\), reflecting uniform changes of size), whilst the wider spots can develop instabilities to modes with \(m \geq2\). We note that the mode with \(m=1\) is always expected to exist due to rotational invariance (and would give rise to a zero eigenvalue for all parameter values). This is entirely consistent with previous results for Mexican hat connectivities on domains where no boundary condition is used, as reviewed in [38]. However, on a finite size disc and with an imposed Dirichlet boundary condition further spots can be induced, with sizes commensurate with the radius of the disc. Both of these scenarios are summarised in Fig. 5. Qualitatively similar behaviour is found for the piece-wise constant Mexican hat function given by (43) (not shown). Interestingly, for the simple top-hat connectivity given by (37), we find similar results for existence, though without azimuthal instabilities to modes \(m \geq2\).

## Discussion

In this paper we have revisited the seminal work of Amari on neural fields and shown how to incorporate Dirichlet boundary conditions. We have built on the previous work of Coombes *et al*. [21] to develop an interface dynamics approach for the evolution of closed curves defining pattern boundaries. Compared to the full space–time model with imposed Dirichlet boundary conditions the interface dynamics is reduced, yet requires no approximations. The interface framework has been illustrated in a number of settings in both one and two dimensions, with a focus on localised states and their instabilities. In all cases we have highlighted the excellent correspondence between results obtained from numerical simulations of the full space–time model and the interface approach. Moreover, we have also emphasised that for piece-wise constant synaptic connectivities the interface approach becomes quasi-analytical, in that many of the terms required for the computation of the normal velocity of the interface can be calculated by hand rather than have to be found numerically. For spreading patterns that may arise from the azimuthal instability of a localised spot, the main effect of a Dirichlet boundary condition has been to limit the growth of the pattern. This was entirely expected, although the precise shape of the resulting stationary pattern is of course hard to predict without simulation. However, the induction of other branches of localised states in a neural field model on a disc was more surprising, even though all of those near the boundary proved to be stable. It should be noted that the imposition of different boundary conditions may affect the spatio-temporal evolution of a pattern and the conditions for its dynamical instability. For the sake of computational simplicity, the value attained by the activity variable at the boundary was chosen to be a constant (\(u_{\text{BC}}=0\)) throughout this paper.
However, a full analysis which also treats space- and time-dependent boundary conditions can be readily developed for the direct numerical simulations, as well as for the equivalent interface description. There are a number of natural extensions of the approach that we have presented here to treat other, more biophysically rich, neural field models which we outline below.

Although we have focussed exclusively on the Amari model with a Heaviside firing rate, direct numerical simulations (not shown) readily confirm that boundary-induced patterns can also be seen in models with a smooth sigmoidal firing rate. Thus it would also be of interest to extend the elegant functional analytic treatment of localised states in bounded domains by Faugeras *et al*. [14] to incorporate imposed boundary conditions. Some form of spike frequency adaptation (SFA) is often included in neural field models to mimic a negative feedback process that diminishes sustained firing. This can cause a travelling front to transition to a travelling pulse [41], or subserve the generation of planar spiral waves [28]. If this current is linear, as is often the case [42], or is itself described by dynamics involving a Heaviside switch, as in [43], then the interface approach presented here can be generalised. Given previous work on equivalent PDE models on bounded domains with SFA that analyses spiral wave behaviour, the treatment of spiral waves from an interface perspective would be an advance, as it is not limited to synaptic connectivities with a rational Fourier structure [44]. Another natural extension of the work in this paper is to neural fields on *feature spaces*. For example, in the primary visual cortex (V1), cells respond preferentially to lines and edges of a particular orientation. A standard neural field model that links points at ** r** and \(\boldsymbol {r}'\) (in the plane) with a weight \(w(\boldsymbol {r}|\boldsymbol {r}')\) should be replaced by a more general form such as \(w(\boldsymbol {r}|\boldsymbol {r}')=w(\boldsymbol {r},\theta|\boldsymbol {r}',\theta')\), where *θ* (\(\theta'\)) represents an orientation preference at ** r** (\(\boldsymbol {r}'\)). This model has recently been studied using a neural field dynamics with a Heaviside firing rate [45], and is thus ripe for a further analysis using an interface approach. Finally, it is worth pointing out the rather pertinent difference between the flat models we have discussed here and the well-known folded characteristic of real cortex, with its sulci and gyri. Fortunately there is no substantial difficulty in formulating neural field models on curved surfaces, though to date there has been surprisingly little analysis of spatio-temporal pattern formation in this context. The exceptions to this rule are the simulation studies of Bojak *et al*. [46], and the recent work of Sato *et al*. for growing brains [13].

One obvious caveat to all of the above is that the interface approach is restricted to Amari style models with a Heaviside firing rate. Nonetheless the qualitative similarities between Amari models and those with a steep sigmoidal firing rate are well known. In summary the treatment of neural fields with boundary conditions is a relatively unexplored area of mathematical neuroscience whose further study should pay dividends for the understanding of neuroimaging data, and in particular waves of activity in functionally identified and folded cortices.

## References

- 1.
Wilson HR, Cowan JD: Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972;12:1–24.

- 2.
Wilson HR, Cowan JD: A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik. 1973;13:55–80.

- 3.
Amari S: Homogeneous nets of neuron-like elements. Biol Cybern. 1975;17:211–20.

- 4.
Amari S: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern. 1977;27:77–87.

- 5.
Nunez PL: The brain wave equation: a model for the EEG. Math Biosci. 1974;21:279–97.

- 6.
Ermentrout GB: Neural nets as spatio-temporal pattern forming systems. Rep Prog Phys. 1998;61:353–430.

- 7.
Coombes S: Waves, bumps, and patterns in neural field theories. Biol Cybern. 2005;93:91–108.

- 8.
Coombes S: Large-scale neural dynamics: Simple and complex. NeuroImage. 2010;52:731–9.

- 9.
Bressloff PC: Spatiotemporal dynamics of continuum neural fields. J Phys A. 2012;45:033001.

- 10.
Coombes S, beim Graben P, Potthast R, Wright JJ (eds.): Neural fields: theory and applications. Berlin: Springer; 2014.

- 11.
Visser S, Nicks R, Faugeras O, Coombes S: Standing and travelling waves in a spherical brain model: the nunez model revisited. Physica D. 2017;349:27–45.

- 12.
Wilson MT, Fung PK, Robinson PA, Shemmell J, Reynolds JNJ: Calcium dependent plasticity applied to repetitive transcranial magnetic stimulation with a neural field model. J Comput Neurosci. 2016;41(1):107–25.

- 13.
Sato Y, Shimaoka D, Fujimoto K, Taga G: Neural field dynamics for growing brains. Nonlinear Theory Appl, IEICE. 2016;7:226–33.

- 14.
Faugeras O, Veltz R, Grimbert F: Persistent neural states: stationary localized activity patterns in nonlinear continuous n-population, q-dimensional neural networks. Neural Comput. 2009;21:147–87.

- 15.
Lima PM, Buckwar E: Numerical solution of the neural field equation in the two-dimensional case. SIAM J Sci Comput. 2015;37:962–79.

- 16.
Rankin J, Avitabile D, Baladron J, Faye G, Lloyd DJB: Continuation of localized coherent structures in nonlocal neural field equations. SIAM J Sci Comput. 2014;36:70–93.

- 17.
Laing CR, Troy WC: PDE methods for nonlocal models. SIAM J Appl Dyn Syst. 2003;2:487–516.

- 18.
Goulet J, Ermentrout GB: The mechanisms for compression and reflection of cortical waves. Biol Cybern. 2011;105:253–68.

- 19.
Bressloff PC: From invasion to extinction in heterogeneous neural fields. J Math Neurosci. 2012;2(1):6.

- 20.
Amari S: Heaviside world: excitation and self-organization of neural fields. In: Coombes S, beim Graben P, Potthast R, Wright JJ, editors. Neural fields: theory and applications. Berlin: Springer; 2014

- 21.
Coombes S, Schmidt H, Bojak I: Interface dynamics in planar neural field models. J Math Neurosci. 2012;2(1):9.

- 22.
Gerstner W, Kistler W: Spiking neuron models. single neurons, populations, plasticity. Cambridge: Cambridge University Press; 2002.

- 23.
Izhikevich EM: Simple model of spiking neurons. IEEE Trans Neural Netw. 2003;14:1569–72.

- 24.
Mountcastle VB: The columnar organization of the neocortex. Brain. 1997;120:701–22.

- 25.
Zhao X, Robinson PA: Generalized seizures in a neural field model with bursting dynamics. J Comput Neurosci. 2015;39:197–216.

- 26.
Ermentrout GB, Cowan JD: A mathematical theory of visual hallucination patterns. Biol Cybern. 1979;34:137–50.

- 27.
Bressloff PC, Cowan JD, Golubitsky M, Thomas PJ, Wiener MC: Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Philos Trans R Soc Lond B, Biol Sci. 2001;356:299–330.

- 28.
Laing CR: Spiral waves in nonlocal equations. SIAM J Appl Dyn Syst. 2005;4:588–606.

- 29.
Huang X, Xu W, Liang J, Takagaki K, Gao X, Wu J-Y: Spiral wave dynamics in neocortex. Neuron. 2010;68:978–90.

- 30.
Goldman-Rakic P: Cellular basis of working memory. Neuron. 1995;14:477–85.

- 31.
Wang X-J: Synaptic reverberation underlying mnemonic persistent activity. Trends Neurosci. 2001;24:455–63.

- 32.
Amari S: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern. 1977;27:77–87.

- 33.
Daunizeau J, Kiebel SJ, Friston KJ: Dynamic causal modelling of distributed electromagnetic responses. NeuroImage. 2009;47:590–601.

- 34.
Coombes S, Owen MR: Evans functions for integral neural field equations with Heaviside firing rate function. SIAM J Appl Dyn Syst. 2004;34:574–600.

- 35.
Owen MR, Laing CR, Coombes S: Bumps and rings in a two-dimensional neural field: splitting and rotational instabilities. New J Phys. 2007;9(10):378.

- 36.
Taylor JG: Neural ‘bubble’ dynamics in two dimensions: Foundations. Biol Cybern. 1999;80:393–409.

- 37.
Folias SE, Bressloff PC: Breathers in two-dimensional neural media. Phys Rev Lett. 2005;95:208107.

- 38.
Bressloff PC, Coombes S: Neural ‘bubble’ dynamics revisited. Cogn Comput. 2013;5:281–94.

- 39.
Coombes S, Schmidt H, Avitabile D: Spots: breathing, drifting and scattering in a neural field model. In: Coombes S, beim Graben P, Potthast R, Wright JJ, editors. Neural field theory. Berlin: Springer; 2014.

- 40.
Herrmann JM, Schrobsdorff H, Geisel T: Localized activations in a simple neural field model. Neurocomputing. 2005;65-66:679–84.

- 41.
Pinto DJ, Ermentrout GB: Spatially structured activity in synaptically coupled neuronal networks: I. Travelling fronts and pulses. SIAM J Appl Math. 2001;62:206–25.

- 42.
Ermentrout GB, Folias SE, Kilpatrick ZP: Spatiotemporal pattern formation in neural fields with linear adaptation. In: Coombes S, beim Graben P, Potthast R, Wright JJ, editors. Neural fields: theory and applications. Berlin: Springer; 2014.

- 43.
Coombes S, Owen MR: Exotic dynamics in a firing rate model of neural tissue with threshold accommodation. Contemp Math. 2007;440:123–44.

- 44.
Laing CR: PDE methods for two-dimensional neural fields. In: Coombes S, beim Graben P, Potthast R, Wright JJ, editors. Neural fields: theory and applications. Berlin: Springer; 2014.

- 45.
Carroll S, Bressloff PC: Phase equation for patterns of orientation selectivity in a neural field model of visual cortex. SIAM J Appl Dyn Syst. 2016;15:60–83.

- 46.
Bojak I, Oostendorp TF, Reid AT, Kötter R: Connecting mean field models of neural activity to EEG and fMRI data. Brain Topogr. 2010;23:139–49.

- 47.
Atkinson KE: A survey of numerical methods for solving nonlinear integral equations. J Integral Equ Appl. 1992;4(1):15–46.

- 48.
Goldstein RE, Muraki DJ, Petrich DM: Interface proliferation and the growth of labyrinths in a reaction-diffusion system. Phys Rev E. 1996;53:3933.

- 49.
D’Errico J: Interparc. MATLAB Central File Exchange; 2012 [retrieved Feb 01, 2012]. http://www.mathworks.com/matlabcentral/fileexchange/34874

### Acknowledgements

AG acknowledges the support from The University of Nottingham and Ministry of National Education in Turkey. We thank the anonymous referees for their helpful comments on our manuscript.

### Availability of data and materials

Please contact author for data requests.

### Funding

SC was supported by the European Commission through the FP7 Marie Curie Initial Training Network 289146, NETT: Neural Engineering Transformative Technologies.

## Author information

### Affiliations

### Contributions

AG, DA and SC contributed equally. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Aytül Gökçe.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.

## Additional information

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Appendices

### Appendix 1: Numerical Scheme for the Full Space-Time Model

The numerical simulation of the full space–time model (1) without boundary conditions was performed by discretising the domain on an *N*-by-*N* tensor grid, and using a Nyström scheme for the spatial discretisation [47]. The major cost of this scheme lies in the evaluation of the Hammerstein operator on the right-hand side of (1), which on the grid outlined above requires \(N^{4}\) operations. Owing to the convolutional structure of the operator, the computational cost of each function evaluation can be decreased considerably by performing a pseudo-spectral evaluation of the convolution, using a Fast Fourier Transform (FFT) followed by an inverse Fast Fourier Transform (IFFT). This reduces the number of operations to \(O(N^{2} \log N^{2})\) and allows one to simulate neural fields and compute equilibria efficiently. We refer the reader to [16, 21] for further details. In these calculations we set \(N=2^{9}\) and used MATLAB’s in-built ode45 routine, with standard tolerance settings.
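As a minimal sketch of this pseudo-spectral strategy (illustrative kernel, domain size, and threshold; not the exact settings of the paper), the kernel transform is precomputed once and each evaluation of the Hammerstein operator then costs two FFTs:

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's values)
N = 2**7                                   # grid points per dimension
L = 20.0                                   # domain side length
h = L / N                                  # grid spacing
x = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
w = np.exp(-r**2) - 0.25 * np.exp(-r**2 / 4)   # a smooth Mexican hat kernel

# Precompute the kernel transform; ifftshift moves the kernel peak to the origin
w_hat = np.fft.fft2(np.fft.ifftshift(w)) * h**2

def rhs(u, kappa=0.1):
    """Right-hand side -u + (w * H(u - kappa)), with the periodic convolution
    evaluated pseudo-spectrally in O(N^2 log N^2) operations."""
    f = (u > kappa).astype(float)              # Heaviside firing rate
    conv = np.real(np.fft.ifft2(w_hat * np.fft.fft2(f)))
    return -u + conv
```

A function of this form can then be passed to any standard explicit ODE integrator (the paper uses MATLAB's `ode45` with default tolerances).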

In simulations where the state variable is not periodic, such as those where we enforced boundary conditions (Sect. 5), we used a standard matrix–vector multiplication to evaluate the integral operator. A full matrix was precomputed and stored during the initialisation phase of the time stepping, and the grid size was limited to \(N=2^{7}\) points owing to memory constraints. In this setting, the interface dynamics approach becomes a viable alternative to the full spatial simulation.
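A sketch of the precomputation step (hypothetical function name): for a 2D grid flattened to \(N^{2}\) nodes the stored matrix has \(N^{4}\) entries, which for \(N=2^{7}\) is roughly 2 GB in double precision, which explains the memory limitation.

```python
import numpy as np

def assemble_weight_matrix(pts, w, h):
    """Dense Nystrom matrix W[i, j] = h^2 * w(|x_i - x_j|) on a non-periodic
    grid; the integral operator is then a plain matrix-vector product W @ f.
    pts is an (n, 2) array of node coordinates, w a radial kernel, and h the
    grid spacing (so h^2 is the quadrature weight of each node)."""
    diff = pts[:, None, :] - pts[None, :, :]
    return h**2 * w(np.linalg.norm(diff, axis=-1))
```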

### Appendix 2: Expressing *ψ* in Terms of Contour Integrals

In this appendix, we derive the identities (26) and (27). This allows us to represent the double integral for the non-local input \(\psi(\boldsymbol {x},t)\) given by (23) as an equivalent line integral. We recall the divergence theorem for a generic vector field ** F** on a domain \(\mathcal{B}\) with boundary \(\partial\mathcal{B}\), namely \(\int_{\mathcal{B}} \mathrm{d}\boldsymbol {x}\, \nabla\cdot \boldsymbol {F} = \oint_{\partial\mathcal{B}} \mathrm{d}s\, \boldsymbol {F} \cdot \boldsymbol {n}\),

where ** n** is the unit normal vector on \(\partial\mathcal{B}\). We consider a rotationally symmetric two-dimensional synaptic weight kernel \(w(\boldsymbol {x})=w(r)\) which satisfies \(\int_{\mathbb{R}^{2}} \,\mathrm{d}\boldsymbol {x} w(\boldsymbol {x})= \mathcal{K}\), for some finite constant \(\mathcal{K}\), and we introduce a function \(g(\boldsymbol {x}):\mathbb{R}^{2} \rightarrow \mathbb{R}\) such that

We now consider a function \(\varphi(r): \mathbb{R}^{+} \rightarrow \mathbb{R}\) satisfying the condition \(\lim_{r \rightarrow\infty} r \varphi (r)=0\), and write the vector field in polar coordinates as \(\boldsymbol {F} = \varphi(r)(\cos\theta,\sin\theta) = \varphi(r) \boldsymbol {x}/ \vert \boldsymbol {x} \vert \), with \(\boldsymbol {x}=r(\cos\theta,\sin\theta)\). Transforming the expressions for \(\mathcal{K}\) and *g* into polar coordinates, integrating Eq. (52), and using the divergence theorem yields

where the line integral is described over a circle of radius \(R\rightarrow\infty\). Therefore, the weight kernel can be written in the form

Since the line integral vanishes, we may set \(g(\boldsymbol {x})=\mathcal{K} \delta(\boldsymbol {x})\). We can now deduce the equation for \(\varphi(r)\) by writing

The integration of (56) yields
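A sketch of the result (a reconstruction consistent with the analogous calculation in [21]; the constant of integration is fixed by the Dirac-delta source at the origin, and \(\lim_{r\to\infty} r\varphi(r)=0\) then holds automatically because \(\int_{0}^{\infty} s\, w(s)\, \mathrm{d}s = \mathcal{K}/(2\pi)\)):

```latex
% Since \nabla \cdot \boldsymbol{F} = \varphi'(r) + \varphi(r)/r = (r\varphi)'/r,
% integrating (r\varphi)' = r\, w(r) for r > 0, with the constant of
% integration fixed by the flux -\mathcal{K} of the delta source at the
% origin, gives
\varphi(r) = \frac{1}{r} \biggl[ \int_{0}^{r} s\, w(s)\, \mathrm{d}s
             - \frac{\mathcal{K}}{2\pi} \biggr], \qquad r > 0.
```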

Using the above results means that (23) can be evaluated as

Here \(\boldsymbol {\gamma} \in\partial\varOmega_{+}\), and the integration over the Dirac-delta function gives \(C = 1\) if ** x** is within \(\varOmega_{+}\), \(C = 0\) if ** x** is outside \(\varOmega_{+}\), and \(C = 1/2\) if ** x** is on the boundary of \(\varOmega_{+}\).

### Appendix 3: Numerical Scheme for the Interface Dynamics

Time stepping for the interface dynamics requires a novel integration scheme. We present here the implementation used in our numerical experiments, which was found to be in agreement with the full spatio-temporal simulations; a rigorous numerical analysis of its properties is deferred to future work.

The method is formed of four constituent parts: a scheme for approximating a closed curve (the interface \(\partial\varOmega_{+}(t)\)), a scheme to approximate the instantaneous normal velocity of the interface, a scheme to propagate the contour according to that normal velocity, and a strategy to remesh or postprocess the contour, if needed.

Closed contours: We chose a periodic parametrisation of the interface, \(\partial\varOmega_{+}(t) = \{ (\xi_{1}(s,t), \xi_{2}(s,t)) : s \in[0,2\pi) \}\),

where \(\xi_{1}\), \(\xi_{2}\) are smooth and 2*π*-periodic in *s* for all *t*, and we approximated \(\xi_{1}(s,\cdot)\), \(\xi_{2}(s,\cdot)\) spectrally, using evenly spaced points in *s*. Using FFTs, we could quickly and accurately approximate the normal and tangent vectors to \(\partial\varOmega_{+}\) at each point *s* and each time *t*. We also parametrise the domain boundary *∂Ω* as \(\partial\varOmega= \{ (\eta_{1}(s), \eta_{2}(s)) : s \in[0,2\pi) \}\),

where \(\eta_{i}\) are continuous and 2*π*-periodic. We choose the paths *Γ* to be straight lines connecting ** x** to its closest point on the boundary,

and we define ** ζ** using the endpoints of *Γ*.

Normal velocity: To simplify the discussion, let us consider the case where boundary conditions are not imposed. We compute the normal velocity by approximating the numerator and denominator of (21). The denominator is updated at each time step using (22) and (24): this requires the normal to \(\partial\varOmega_{+}\) at time *t*, as well as the full history of \(\partial\varOmega_{+}(t)\) in the interval \([0,t]\), in view of the integral (22). However, since \(\eta(t) = \mathrm {e}^{-t} H(t)\), we found that retaining in memory and using only the last 20–50 time steps was not detrimental to the accuracy of the solution; we use the trapezium rule both for the integration along the contour in (24) (which is therefore spectrally accurate) and for the integral over \(t'\) in (22). The numerator of (21) is computed using (26), for which a further integration along \(\partial\varOmega_{+}\) is performed. The integrand is singular, and can be treated as in [48]. A similar strategy is used for (32) and (33), in the case of a bounded domain with Dirichlet boundary conditions. In all cases this step is by far the most time consuming of the algorithm, due to the large number of integrals that must be evaluated at each step.
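The truncated evaluation of the temporal integral in (22) can be sketched as follows (hypothetical function name; the input `g` stands for the contour integral (24) sampled at successive time steps):

```python
import numpy as np

def history_integral(g, dt, n_keep=50):
    """Approximate I(t) = int_0^t exp(-(t - t')) g(t') dt' with the trapezium
    rule, retaining only the last n_keep samples of g.  Because the kernel
    eta(t) = exp(-t) H(t) decays exponentially, the discarded tail is bounded
    by max|g| * exp(-n_keep * dt)."""
    g = np.asarray(g[-n_keep:], dtype=float)
    lags = dt * np.arange(len(g) - 1, -1, -1)      # t - t' for each sample
    f = np.exp(-lags) * g
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))   # composite trapezium rule
```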

Position update: The contour is propagated in the normal direction, using the velocity computed at each point of the contour, \(c_{n}(s,t)\). To this end we use a simple Euler update \(\partial\varOmega_{+}(t+\Delta t) = \partial \varOmega_{+}(t) +c_{n} \Delta t\) to find the new contour, given that \(c_{n}\) is also computed with \(O(\Delta t)\) accuracy in time. Other choices are obviously possible, but require more function evaluations and more expensive quadrature rules. A stepsize of 0.05 or less has been used in our simulations.
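In array form the update reads as below (a sketch; the `(n, 2)` layout of the contour and its unit normals is an assumption of this illustration):

```python
import numpy as np

def euler_step(contour, normals, c_n, dt=0.05):
    """One explicit Euler step of the interface: each node moves a distance
    c_n * dt along its outward unit normal.  contour and normals are (n, 2)
    arrays, c_n the (n,) array of normal velocities."""
    return contour + dt * c_n[:, None] * normals
```

For example, a circular interface with uniform normal velocity \(c_{n}=-1\) simply shrinks its radius by Δt per step.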

Remeshing and postprocessing: The updated contour \(\partial \varOmega_{+}(t+\Delta t)\) leads to a new parametrisation, that is, to an update of the functions \(\xi_{1}(s,\cdot)\), \(\xi_{2}(s,\cdot)\). Since we need a uniform distribution of the nodes with respect to the variable *s*, we redistribute points using standard interpolation [49]. As the pattern grows or shrinks, points are added or removed so as to keep the arclength between consecutive points approximately constant.
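The redistribution step can be sketched as follows (using piecewise-linear interpolation in place of the spline interpolation of [49]; the function name is illustrative):

```python
import numpy as np

def remesh(points, n_new):
    """Redistribute the nodes of a closed contour so that consecutive nodes
    are (approximately) equally spaced in arclength."""
    closed = np.vstack([points, points[:1]])            # close the curve
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])         # cumulative arclength
    s_new = np.linspace(0.0, s[-1], n_new, endpoint=False)
    x = np.interp(s_new, s, closed[:, 0])
    y = np.interp(s_new, s, closed[:, 1])
    return np.column_stack([x, y])
```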

### Appendix 4: Geometric Formulae for a Piece-Wise Constant Kernel

Consider a portion of a disk whose upper boundary is a (circular) arc and whose lower boundary is a chord making a central angle \(\phi_{0} < \pi\), illustrated as the shaded region in Fig. 6(A).

The area \(A=A(r_{0},\phi_{0})\) of the (shaded) segment is then simply given by the area of the circular sector (the entire wedge-shaped portion) minus the area of an isosceles triangle, namely \(A(r_{0},\phi_{0}) = \frac{1}{2} r_{0}^{2} ( \phi_{0} - \sin\phi_{0} )\).

The area of the overlap of two circles, as illustrated in Fig. 6(B), can be constructed as the total area of \(A(r_{0},\phi_{0})+A(r_{1},\phi_{1})\). To determine the angles \(\phi_{0,1}\) in terms of the centres, \((x_{0},y_{0})\) and \((x_{1},y_{1})\), and radii, \(r_{0}\) and \(r_{1}\), of the two circles we use the cosine formula, which relates the lengths of the three sides of a triangle formed by joining the centres of the circles to a point of intersection. Denoting the distance between the two centres by *d* where \(d^{2} = (x_{0}-x_{1})^{2}+(y_{0}-y_{1})^{2}\),

Hence \(\phi_{0} = 2\cos^{-1} ( (r_{0}^{2}+d^{2}-r_{1}^{2})/(2 r_{0} d) )\), with \(\phi_{1}\) given by exchanging the roles of \(r_{0}\) and \(r_{1}\).
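Collecting the formulae of this appendix, a sketch (hypothetical helper names; valid for genuinely intersecting circles, so that the arccos arguments lie in \([-1,1]\)):

```python
import numpy as np

def segment_area(r, phi):
    """Area of a circular segment: sector area minus the isosceles triangle."""
    return 0.5 * r**2 * (phi - np.sin(phi))

def overlap_area(c0, r0, c1, r1):
    """Area of the lens-shaped overlap of two intersecting circles with
    centres c0, c1 and radii r0, r1, as A(r0, phi0) + A(r1, phi1).  The
    half-angles follow from the cosine rule applied to the triangle formed
    by the two centres and a point of intersection."""
    d = np.hypot(c1[0] - c0[0], c1[1] - c0[1])
    phi0 = 2.0 * np.arccos((r0**2 + d**2 - r1**2) / (2.0 * r0 * d))
    phi1 = 2.0 * np.arccos((r1**2 + d**2 - r0**2) / (2.0 * r1 * d))
    return segment_area(r0, phi0) + segment_area(r1, phi1)
```

For two unit circles whose centres are a unit distance apart this gives \(2\pi/3 - \sqrt{3}/2 \approx 1.228\).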

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

#### Received

#### Accepted

#### Published

#### DOI

### Keywords

- Neural fields
- Bounded domain
- Dirichlet boundary condition
- Interface dynamics
- Piece-wise constant kernel