# From invasion to extinction in heterogeneous neural fields

- Paul C Bressloff

**2**:6

https://doi.org/10.1186/2190-8567-2-6

© Bressloff; licensee Springer. 2012

**Received: **15 December 2011

**Accepted: **26 March 2012

**Published: **26 March 2012

## Abstract

In this paper, we analyze the invasion and extinction of activity in heterogeneous neural fields. We first consider the effects of spatial heterogeneities on the propagation of an invasive activity front. In contrast to previous studies of front propagation in neural media, we assume that the front propagates into an unstable rather than a metastable zero-activity state. For sufficiently localized initial conditions, the asymptotic velocity of the resulting pulled front is given by the linear spreading velocity, which is determined by linearizing about the unstable state within the leading edge of the front. One of the characteristic features of these so-called pulled fronts is their sensitivity to perturbations inside the leading edge. This means that standard perturbation methods for studying the effects of spatial heterogeneities or external noise fluctuations break down. We show how to extend a partial differential equation method for analyzing pulled fronts in slowly modulated environments to the case of neural fields with slowly modulated synaptic weights. The basic idea is to rescale space and time so that the front becomes a sharp interface whose location can be determined by solving a corresponding local Hamilton-Jacobi equation. We use steepest descents to derive the Hamilton-Jacobi equation from the original nonlocal neural field equation. In the case of weak synaptic heterogeneities, we then use perturbation theory to solve the corresponding Hamilton equations and thus determine the time-dependent wave speed. In the second part of the paper, we investigate how time-dependent heterogeneities in the form of extrinsic multiplicative noise can induce rare noise-driven transitions to the zero-activity state, which now acts as an absorbing state signaling the extinction of all activity.
In this case, the most probable path to extinction can be obtained by solving the classical equations of motion that dominate a path integral representation of the stochastic neural field in the weak noise limit. These equations take the form of nonlocal Hamilton equations in an infinite-dimensional phase space.

## 1 Introduction

Reaction-diffusion equations based on the Fisher-Kolmogorov-Petrovskii-Piskunov (F-KPP) model and its generalizations have been used extensively to describe the spatial spread of invading species including plants, insects, diseases, and genes in terms of propagating fronts [1–7]. One fundamental result in the theory of deterministic fronts is the difference between fronts propagating into a linearly unstable (zero) state and those propagating into a metastable state (a state that is linearly stable but nonlinearly unstable). In the latter case, the front has a unique velocity that is obtained by solving the associated partial differential equation (PDE) in traveling wave coordinates. The former, on the other hand, supports a continuum of possible velocities and associated traveling wave solutions; the particular velocity selected depends on the initial conditions. Fronts propagating into unstable states can be further partitioned into two broad categories: the so-called *pulled* and *pushed* fronts [8] emerging from sufficiently localized initial conditions. Pulled fronts propagate into an unstable state such that the asymptotic velocity is given by the linear spreading speed *υ**, which is determined by linearizing about the unstable state within the leading edge of the front. That is, perturbations around the unstable state within the leading edge grow and spread with speed *υ**, thus 'pulling along' the rest of the front. On the other hand, pushed fronts propagate into an unstable state with a speed greater than *υ**, and it is the nonlinear growth within the region behind the leading edge that pushes the front speeds to higher values. One of the characteristic features of pulled fronts is their sensitivity to perturbations in the leading edge of the wave [9]. This means that standard perturbation methods for studying the effects of spatial heterogeneities [10] or external noise fluctuations [11] break down.

Nevertheless, a number of analytical and numerical methods have been developed to study propagating invasive fronts in heterogeneous media. Heterogeneity is often incorporated by assuming that the diffusion coefficient and the growth rate of a population are periodically varying functions of space. One of the simplest examples of a single population model in a periodic environment was proposed by Shigesada et al. [5, 12], in which two different homogeneous patches are arranged alternately in one-dimensional space so that the diffusion coefficient and the growth rate are given by periodic step functions. The authors showed how an invading population starting from a localized perturbation evolves to a traveling periodic wave in the form of a pulsating front. By linearizing around the leading edge of the wave, they also showed how the minimal wave speed of the pulsating front could be estimated by finding solutions of a corresponding Hill equation [12]. The theory of pulsating fronts has also been developed in a more general and rigorous setting [13–15]. An alternative method for analyzing fronts in heterogeneous media, which is applicable to slowly modulated environments, was originally developed by Freidlin [16–18] using large deviation theory and subsequently reformulated in terms of PDEs by Evans and Souganidis [19]. More recently, it has been used to study waves in heterogeneous media (see for example [10, 13]). The basic idea is to rescale space and time so that the front becomes a sharp interface whose location can be determined by solving a corresponding Hamilton-Jacobi equation.

Another important topic in population biology is estimating the time to extinction of a population in the presence of weak intrinsic or extrinsic noise sources, after having successfully invaded a given spatial domain [20]. The zero state (which is unstable in the deterministic limit) now acts as an absorbing state, which can be reached via noise-induced transitions from the nontrivial metastable steady state. The most probable path to extinction is determined in terms of classical solutions of an effective Hamiltonian dynamical system [21–24]. The latter can be obtained in the weak noise limit by considering a Wentzel-Kramers-Brillouin (WKB) approximation of solutions to a master equation (intrinsic noise) or Fokker-Planck equation (extrinsic noise) [25–27]; alternatively, the Hamilton equations can be obtained from a corresponding path integral representation of the stochastic population model [21].

In this paper, we extend the Hamiltonian-based approaches to invasion and extinction in reaction-diffusion models to the case of a scalar neural field model. Neural fields represent the large-scale dynamics of spatially structured networks of neurons in terms of nonlinear integrodifferential equations, whose associated kernels represent the spatial distribution of neuronal synaptic connections. Such models provide an important example of spatially extended dynamical systems with nonlocal interactions. As in the case of reaction diffusion systems, neural fields can exhibit a rich repertoire of wave phenomena, including solitary traveling fronts, pulses, and spiral waves [28–30]. They have been used to model wave propagation in cortical slices [31, 32] and *in vivo* [33]. A common *in vitro* experimental method for studying wave propagation is to remove a slice of brain tissue and bathe it in a pharmacological medium that blocks the effects of inhibition. Synchronized discharges can then be evoked by a weak electrical stimulus to a local site on the slice, and each discharge propagates away from the stimulus at a characteristic speed of about 60 to 90 mm/s [32, 34]. These waves typically take the form of traveling pulses with the decay of activity at the trailing edge resulting from some form of local adaptation or refractory process. On the other hand, a number of phenomena in visual perception involve the propagation of a traveling front, in which a suppressed visual percept replaces a dominant percept within the visual field of an observer. A classical example is the wave-like propagation of perceptual dominance during binocular rivalry [35–38].

In the case of a scalar neural field equation with purely excitatory connections and a sigmoidal or Heaviside firing rate function, it can be proven that there exists a traveling front solution with a unique speed that depends on the firing threshold and the range/strength of synaptic weights [39, 40]. The wave, thus, has characteristics typical of a front propagating into a metastable state. Various generalizations of such front solutions have also been developed in order to take into account the effects of network inhomogeneities [41–43], external stimuli [44, 45], and network competition in a model of binocular rivalry waves [38]. As far as we are aware, however, there has been very little work on neural fields supporting pulled fronts, except for an analysis of pulsating pulled fronts in [42]. One possible motivation for considering such fronts is that they arise naturally in the deterministic limit of stochastic neural fields with a zero absorbing state. Indeed, Buice and Cowan [46] have previously used path integral methods and renormalization group theory to establish that a stochastic neural field with an absorbing state belongs to the universality class of directed percolation models and consequently exhibits power law behavior consistent with several measurements of spontaneous cortical activity *in vitro* and *in vivo* [47, 48].

We begin by considering a neural field model that supports propagating pulled fronts, and determine the asymptotic wave speed by linearizing about the unstable state within the leading edge of the front (Section 2). We then introduce a spatial heterogeneity in the form of a slow modulation in the strength of synaptic connections (Section 3). Using a WKB approximation of the solution of a rescaled version of the neural field equation and carrying out steepest descents, we derive a local Hamilton-Jacobi equation for the dynamics of the sharp interface (in rescaled space-time coordinates). We then use perturbation methods to solve the associated Hamilton equations under the assumption that the amplitude of the spatial modulations is sufficiently small (Section 4). The resulting solution determines the location of the front as a function of time, from which the instantaneous speed of the front can be calculated. In the case of linearly varying modulations, the position of the front is approximately a quadratic function of time. In the second part of the paper, we investigate how time-dependent heterogeneities in the form of extrinsic multiplicative noise can induce rare noise-driven transitions to the zero-activity state, which now acts as an absorbing state signaling the extinction of all activity (on a shorter time scale, noise results in a subdiffusive wandering of the front [49]). We proceed by first constructing a path integral representation of the stochastic neural field (Section 5). The most probable path to extinction can then be obtained by solving the classical equations of motion that dominate the path integral representation in the weak noise limit; these equations take the form of nonlocal Hamilton equations in an infinite-dimensional phase space (Section 6).

## 2 Neural fields with propagating pulled fronts

We consider a scalar neural field equation of the form

$$\tau \frac{\partial a(x,t)}{\partial t} = -a(x,t) + F\left(\int_{-\infty}^{\infty} w(x - x')\,a(x',t)\,dx'\right), \qquad (2.1)$$

for *x* ∈ ℝ. Here, the field *a*(*x, t*) represents the instantaneous firing rate of a local population of neurons at position *x* and time *t*, *w*(*x*) is the distribution of synaptic weights, *τ* is a time constant, and *F* is a nonlinear activation function (for a detailed discussion of different neural field representations and their derivations, see the reviews [28, 30]). We also have the additional constraint that *a*(*x, t*) ≥ 0 for all (*x, t*). Note that the restriction to positive values of *a* is a feature shared with population models in ecology or evolutionary biology, for example, where the corresponding dependent variables represent number densities. Indeed, Equation 2.1 has certain similarities with a nonlocal version of the F-KPP equation, which takes the form [50]

$$\frac{\partial u(x,t)}{\partial t} = D\frac{\partial^{2} u(x,t)}{\partial x^{2}} + \mu u(x,t)\left(1 - \int_{-\infty}^{\infty} K(x - x')\,u(x',t)\,dx'\right). \qquad (2.2)$$

One major difference from a mathematical perspective is that Equation 2.2 supports traveling fronts even when the range of the interaction kernel *K* goes to zero, that is, *K*(*x*) → *δ*(*x*), since we recover the standard local F-KPP equation [1, 2]. In particular, as the nonlocal interactions appear nonlinearly in Equation 2.2, they do not contribute to the linear spreading velocity in the leading edge of the front. On the other hand, nonlocal interactions play a necessary role in the generation of fronts in the neural field equation (Equation 2.1).

The function *F*(*a*) in Equation 2.1 is assumed to be a positive, bounded, monotonically increasing function of *a* with *F*(0) = 0, lim_{a→0+} *F*'(*a*) = 1, and lim_{a→∞} *F*(*a*) = *κ* for some positive constant *κ*. For concreteness, we take the piecewise linear firing rate function

$$F(a) = a\,\Theta(\kappa - a) + \kappa\,\Theta(a - \kappa), \qquad (2.3)$$

where *Θ* denotes the Heaviside step function. A homogeneous fixed point *a** of Equation 2.1 satisfies

$$a^{*} = F\left(W_{0}a^{*}\right), \qquad W_{0} = \int_{-\infty}^{\infty} w(y)\,dy.$$

If *W*_{0} > 1, then there exists an unstable fixed point at *a** = 0 (absorbing state) and a stable fixed point at *a** = *κ* (see Figure 1a). The construction of a front solution linking the stable and unstable fixed points differs considerably from that considered in neural fields with sigmoidal or Heaviside nonlinearities [28, 39], where the front propagates into a metastable state (see Figure 1b). Following the PDE theory of fronts propagating into unstable states [8], we expect that there will be a continuum of front velocities and associated traveling wave solutions. A conceptual framework for studying such solutions is the linear spreading velocity *υ**, which is the asymptotic rate with which an initial localized perturbation spreads into an unstable state, based on the equations obtained by linearizing the full nonlinear equations about that state. Thus, we consider a traveling wave solution $\mathcal{A}\left(x-ct\right)$ of Equation 2.1 with $\mathcal{A}\left(\xi \right)\to \kappa $ as *ξ* → -∞ and $\mathcal{A}\left(\xi \right)\to 0$ as *ξ* → ∞. One can determine the range of velocities *c* for which such a solution exists by assuming that $\mathcal{A}\left(\xi \right)\approx {e}^{-\lambda \xi}$ for sufficiently large *ξ*.

The leading edge of the front is determined by linearizing Equation 2.1 about the unstable state *a* = 0, which, on fixing the units of time by setting *τ* = 1, takes the form

$$\frac{\partial a(x,t)}{\partial t} = -a(x,t) + \int_{-\infty}^{\infty} w(x - x')\,a(x',t)\,dx', \qquad (2.5)$$

where *x*' is restricted to the leading edge of the front. Suppose, for example, that *w*(*x*) is given by the Gaussian distribution

$$w(x) = \frac{W_{0}}{\sqrt{2\pi\sigma^{2}}}\,e^{-x^{2}/2\sigma^{2}}. \qquad (2.6)$$

We introduce an intermediate length scale *X* with *σ* ≪ *X* ≪ *ξ* and approximate Equation 2.5 by restricting the integration to the leading edge. Substituting the asymptotic form *a*(*x, t*) ~ e^{-λ(x-ct)} yields a dispersion relation *c* = *c*(*λ*), with the limit *X* → ∞ taken under the assumption that *w*(*y*) is an even function, to yield

$$c(\lambda) = \frac{1}{\lambda}\left[\hat{W}(\lambda) + \hat{W}(-\lambda) - 1\right],$$

where $\hat{W}\left(\lambda \right)$ denotes the Laplace transform of *w*(*x*):

$$\hat{W}(\lambda) = \int_{0}^{\infty} w(y)\,e^{-\lambda y}\,dy.$$

We assume that *w*(*y*) decays sufficiently fast as *|y|* → ∞ so that the Laplace transform $\hat{W}\left(\lambda \right)$ exists for bounded, negative values of *λ*. This holds in the case of the Gaussian distribution (Equation 2.6), for which

$$\hat{W}(\lambda) + \hat{W}(-\lambda) = W_{0}\,e^{\lambda^{2}\sigma^{2}/2}.$$

If *W*_{0} > 1 (necessary for the zero-activity state to be unstable), then *c*(*λ*) is a positive unimodal function with *c*(*λ*) → ∞ as *λ* → 0 or *λ* → ∞ and a unique minimum at *λ* = *λ*_{0}, with *λ*_{0} the solution to an implicit equation that depends only on *σ* and *W*_{0}. Combining Equations 2.12 and 2.13 shows that the minimal wave speed satisfies

$$c_{0} = \sigma^{2}\lambda_{0}W_{0}\,e^{\sigma^{2}\lambda_{0}^{2}/2}. \qquad (2.15)$$

Assuming that the full nonlinear system supports a pulled front, then a sufficiently localized initial perturbation (one that decays faster than ${\mathsf{\text{e}}}^{-{\lambda}_{0}x}$) will asymptotically approach the traveling front solution with the minimum wave speed *c*_{0} = *c*(*λ*_{0}). Note that *c*_{0} ~ *σ* and *λ*_{0} ~ *σ*^{-1}. In Figure 2b, we show an asymptotic front profile obtained by numerically solving the neural field equation (Equation 2.1) when *W*_{0} = 1.2 (see Section 4.3). The corresponding displacement of the front is a linear function of time with a slope consistent with the minimal wave speed *c*_{0} *≈* 0.7 of the corresponding dispersion curve shown in Figure 2a. This wave speed is independent of *κ*.
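As a concrete check, the minimal wave speed for the Gaussian weight distribution can be computed by minimizing the dispersion curve c(λ) = (W₀e^{λ²σ²/2} − 1)/λ numerically. The following sketch (the search interval and tolerance are our own choices) reproduces c₀ ≈ 0.72 for W₀ = 1.2, σ = 1:

```python
import math

def c_of_lambda(lam, W0=1.2, sigma=1.0):
    """Dispersion curve for Gaussian weights: c = (W0*exp(lam^2 sigma^2/2) - 1)/lam."""
    return (W0 * math.exp(0.5 * (lam * sigma) ** 2) - 1.0) / lam

def minimal_speed(W0=1.2, sigma=1.0, lo=1e-3, hi=10.0, tol=1e-10):
    """Golden-section search for the unique minimum of the unimodal c(lambda)."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c1 = b - phi * (b - a)
        c2 = a + phi * (b - a)
        if c_of_lambda(c1, W0, sigma) < c_of_lambda(c2, W0, sigma):
            b = c2
        else:
            a = c1
    lam0 = 0.5 * (a + b)
    return lam0, c_of_lambda(lam0, W0, sigma)

lam0, c0 = minimal_speed()
# At the minimum, c0 = sigma^2 * lam0 * W0 * exp(sigma^2 lam0^2 / 2) (Equation 2.15)
```

The returned pair (λ₀, c₀) also satisfies the stationarity relation of Equation 2.15, which provides a useful consistency check on the minimization.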

The above analysis holds provided that the size *L* of the spatial domain satisfies *L* ≫ *σ*, where *σ* is the range of synaptic weights. In the case of a finite domain, following passage of an invasive activity front, the network settles into a nonzero stable steady state, whose spatial structure will depend on the boundary conditions. The steady-state equation takes the form

$$a(x) = F\left(\int_{0}^{L} w(x - x')\,a(x')\,dx'\right).$$

Imposing the boundary conditions *a*(0, *t*) = *a*(*L, t*) = 0 with *L* ≫ *σ*, the steady state will be uniform in the bulk of the domain, with *a*(*x*) *≈ a*_{0} except for boundary layers at both ends. Here, *a*_{0} is the nonzero solution to the equation *a*_{0} = *F*(*W*_{0}*a*_{0}). Examples of steady-state solutions are plotted for various choices of *L* in Figure 3. (Note that the sudden drop to zero right on the boundaries reflects the nonlocal nature of the neural field equation.) Next, we consider a periodic domain with *w*(*x*) = *w*(*x* + *L*) for all *x* ∈ [0, *L*]. One way to generate such a weight distribution is to take $w\left(x\right)={\sum}_{n\in \mathbb{Z}}{w}_{0}\left(x+nL\right)$ with *w*_{0}(*x*) given by the Gaussian distribution of Equation 2.6. In this case, there is a unique nontrivial steady-state solution *a*(*x*) = *a*_{0} for all *x*. For both choices of boundary condition, there still exists the unstable zero-activity state *a*(*x*) *≡* 0. Now, suppose that some source of multiplicative noise is added to the neural field equation, which vanishes when *a*(*x, t*) = 0. It is then possible that noise-induced fluctuations drive the system to the zero-activity state, resulting in the extinction of all activity, since the noise also vanishes there. Assuming that the noise is weak, the time to extinction will be exponentially large, so extinction is very unlikely to occur during the passage of the invasive front.

In the remainder of the paper, we consider two distinct problems associated with the presence of an unstable zero-activity state in the neural field model (Equation 2.1). First, how do slowly varying spatial heterogeneities in the synaptic weights affect the speed of propagating pulled fronts (Sections 3 and 4)? Second, what is the estimated time for extinction of a stable steady state in the presence of multiplicative extrinsic noise (Sections 5 and 6)? Both problems are linked from a methodological perspective since the corresponding analysis reduces to solving an effective Hamiltonian dynamical system.

## 3 Hamilton-Jacobi dynamics and slow spatial heterogeneities

So far, we have assumed that the synaptic weight distribution is homogeneous, that is, *w*(*x, y*) = *w*(|*x* - *y*|). This implies translation symmetry of the underlying integrodifferential equations (in an unbounded or periodic domain). However, if one looks more closely at the anatomy of the cortex, it is clear that it is far from homogeneous, having a structure at multiple spatial scales. For example, to a first approximation, the primary visual cortex (V1) has a periodic-like microstructure on the millimeter length scale, reflecting the existence of various stimulus feature maps. This has motivated a number of studies concerned with the effects of a periodically modulated weight distribution on front propagation in neural fields [41–43, 51]. A typical example of a periodically modulated weight distribution is

$$w(x, x') = w(x - x')\left[1 + K\left(\frac{x'}{\epsilon}\right)\right], \qquad (3.1)$$

where 2*πε* is the period of the modulation with *K*(*x*) = *K*(*x* + 2*π*) for all *x*. In the case of a sigmoidal or Heaviside nonlinearity, two alternative methods for analyzing the effects of periodic wave modulation have been used: one is based on homogenization theory for small *ε* [41, 51], and the other is based on analyzing interfacial dynamics [42, 43]. Both approaches make use of the observation that for sufficiently small amplitude modulations, numerical simulations of the inhomogeneous network show a front-like wave separating high and low activity metastable states. However, the wave does not propagate with constant speed but oscillates periodically in an appropriately chosen moving frame. This pulsating front solution satisfies the periodicity condition *a*(*x, t*) = *a*(*x* + *Δ, t* + *T*), so we can define the mean speed of the wave to be *c* = *Δ/T*.

Pulsating *pulled* fronts in a neural field with periodically modulated weights have previously been analyzed in [42], extending earlier work by Shigesada et al. on reaction-diffusion models of the spatial spread of invading species into heterogeneous environments [5, 12]. We briefly sketch the basic steps in the analysis. First, we substitute the periodically modulated weight distribution (Equation 3.1) into Equation 2.1 and linearize about the leading edge of the wave where *a*(*x, t*) ~ 0 (Equation 3.2). We then seek solutions of the form *a*(*x, t*) = *A*(*ξ*)*P*(*x*), *ξ* = *x* - *ct*, with *A*(*ξ*) → 0 as *ξ* → ∞ and *P*(*x* + 2*πε*) = *P*(*x*). Substitution into Equation 3.2 then gives an equation relating *A* and *P*. Setting *A*(*ξ*) ~ e^{-λξ} and substituting into the above equation yield a nonlocal version of the Hill equation (Equation 3.4). The existence of a bounded periodic solution *P*(*x*) of Equation 3.4 yields a corresponding dispersion relation *c* = *c*(*λ*), whose minimum with respect to *λ* can then be determined (assuming it exists). One way to obtain an approximate solution of Equation 3.4 is to use Fourier methods to derive an infinite matrix equation for the Fourier coefficients of the periodic function *P*(*x*), and then to numerically solve a finite truncated version of the matrix equation. This is the approach followed in [42], with the matrix equation expressed in terms of the expansions *K*(*x/ε*) = Σ_{n} *K*_{n} e^{inx/ε} and *P*(*x*) = Σ_{n} *P*_{n} e^{inx/ε}. One finds that the mean velocity of a pulsating front increases with the period 2*πε* of the synaptic modulations [42]. This is illustrated in Figure 4, where we show space-time plots of a pulsating front for *ε* = 0.5 and *ε* = 0.8.

In this paper, we consider instead a weight distribution in which there is a slow (nonperiodic) spatial modulation *J*(*εx*) of the synaptic weights, with *ε* ≪ 1. The synaptic heterogeneity is assumed to occur on a longer spatial scale than the periodic-like microstructures associated with stimulus feature maps. Although we do not have a specific example of long wavelength modulations in mind, we conjecture that these might be associated with inter-area cortical connections. For example, it has been shown that heterogeneities arise as one approaches the V1/V2 border in the visual cortex, which has a number of effects including the generation of reflected waves [52]. It is not yet clear how sharp the transition across the V1/V2 border is.

The first step in the analysis is to rescale space and time in Equation 2.1 according to *t* → *t/ε* and *x* → *x/ε* [10, 18, 19]. Under this hyperbolic rescaling, the front region, where the activity *a*(*x, t*) rapidly increases as *x* decreases from infinity, becomes a step as *ε* → 0 (see Figure 2b). This motivates introducing the WKB approximation

$$a(x,t) \sim e^{-G(x,t)/\epsilon},$$

with *G*(*x, t*) > 0 for all *x* > *x*(*t*) and *G*(*x*(*t*), *t*) = 0. The point *x*(*t*) determines the location of the front, and *c* = *ẋ*. Substituting Equation 3.8 into Equation 3.7 then gives an equation for *G*. Here, we have used the fact that for *x* > *x*(*t*) and *ε* ≪ 1, the solution is in the leading edge of the front, so we can take *F* to be linear.
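To indicate how a Hamilton-Jacobi structure emerges from the WKB ansatz, the following sketch (our own sign conventions, writing $\mathcal{W}(p) = \hat{W}(p) + \hat{W}(-p)$ and placing the modulation *J* outside the integral, which is valid at leading order for slow modulations) carries out the substitution in the linearized, rescaled equation:

```latex
% Leading-edge equation after hyperbolic rescaling (tau = 1, F linearized):
\epsilon\,\partial_t a(x,t) = -a(x,t)
  + J(x)\int_{-\infty}^{\infty} w(y)\,a(x-\epsilon y,t)\,dy .
% Substitute the WKB ansatz a = e^{-G(x,t)/\epsilon} and expand
% G(x-\epsilon y,t) \approx G(x,t) - \epsilon y\,\partial_x G(x,t):
-\partial_t G = -1 + J(x)\int_{-\infty}^{\infty} w(y)\,e^{y\,\partial_x G}\,dy
             = -1 + J(x)\,\mathcal{W}(p), \qquad p \equiv \partial_x G ,
% which is a Hamilton-Jacobi equation with Hamiltonian
\partial_t G + H(x,p) = 0, \qquad H(x,p) = J(x)\,\mathcal{W}(p) - 1 .
```

For *J* ≡ 1, the associated characteristics have constant momentum, and imposing *G*(*x*(*t*), *t*) = 0 recovers *ẋ* = [𝒲(*λ*) − 1]/*λ*, consistent with the dispersion curve of Section 2.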

To evaluate the resulting expression, we introduce the Fourier representation of *w*(*x*). Since *ε* is small, we perform steepest descents with respect to the *x*' variable with (*k, x, t*) fixed. Let *x*' = *z*(*k, t*) denote the stationary point for which *∂S/∂x*' = 0, which is given by the solution to an implicit equation. Taylor expanding *S* about this point (assuming it is unique) gives, to second order, a Gaussian approximation, and performing the resulting Gaussian integral with respect to *x*' yields an expression whose remaining integral with respect to *k* can also be evaluated using steepest descents. Thus, we Taylor expand *Ŝ* to second order about the stationary point *k* = *k*(*x, t*), which is the solution to the corresponding stationarity equation. The resulting Gaussian integrals with respect to *k* give the desired asymptotic approximation to leading order in *ε*. Setting *x*' = *z*(*k, t*) in Equation 3.13 and differentiating with respect to *k* show that *∂*_{xx}*G*(*z*(*k, t*), *t*) *∂*_{k}*z*(*k, t*) = -*i*; hence, the prefactors combine, and we arrive at a Hamilton-Jacobi equation for *G*, in which *p* = *∂*_{x}*G* is interpreted as the conjugate momentum of *x* and the Hamiltonian is expressed in terms of the Laplace transform of *w*(*x*). It follows that the Hamilton-Jacobi equation (Equation 3.23) can be solved in terms of the Hamilton equations (Equations 3.26 and 3.27). Let *X*(*s*; *x, t*) and *P*(*s*; *x, t*) denote the solution with *x*(0) = 0 and *x*(*t*) = *x*. We can then determine *G*(*x, t*) by evaluating the action integral along this characteristic,
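The characteristic system can also be integrated numerically. The sketch below assumes a Hamiltonian of the form H(x, p) = J(x)W(p) − 1 with Gaussian W(p) = W₀e^{p²σ²/2} (a reconstruction consistent with the homogeneous limit; the function names and the forward-Euler scheme are our own choices), and can be checked against the homogeneous case, where the momentum is conserved and the trajectory is a straight line:

```python
import math

def W(p, W0=1.2, sigma=1.0):
    """Gaussian form of W(p) = What(p) + What(-p) = W0*exp(p^2 sigma^2/2)."""
    return W0 * math.exp(0.5 * (p * sigma) ** 2)

def dW(p, W0=1.2, sigma=1.0):
    """Derivative W'(p) for the Gaussian case."""
    return W0 * (sigma ** 2) * p * math.exp(0.5 * (p * sigma) ** 2)

def integrate_characteristics(p0, J, dJ, t=10.0, dt=1e-3, W0=1.2, sigma=1.0):
    """Euler integration of the Hamilton equations
    dx/ds = J(x) W'(p),  dp/ds = -J'(x) W(p), starting from x(0) = 0."""
    x, p = 0.0, p0
    for _ in range(int(t / dt)):
        x, p = (x + dt * J(x) * dW(p, W0, sigma),
                p - dt * dJ(x) * W(p, W0, sigma))
    return x, p
```

For J ≡ 1 (so J' ≡ 0), the integrator returns p(t) = p(0) exactly and x(t) = tW'(p₀), reproducing the straight-line characteristics of the homogeneous case; a slowly varying J(x) bends the characteristics and accelerates or decelerates the front accordingly.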

## 4 Calculation of wave speed

### 4.1 Homogeneous case: *J*(*x*) *≡* 1

We first consider the homogeneous case *J*(*x*) *≡* 1. In this case, *dp/ds* = 0, so *p* = *λ*_{0} independently of *s*. Hence, *x*(*s*) = *xs/t*, and the characteristics are straight lines. The position *x*(*t*) of the front at time *t* is determined by the equation *G*(*x*(*t*), *t*) = 0. Differentiating with respect to *t* shows that $\dot{x}{\partial}_{x}G+{\partial}_{t}G=0$, or *ẋ* = *c*(*λ*_{0}), where *λ*_{0} is given by the minimum of the function

$$c(\lambda) = \frac{\hat{W}(\lambda) + \hat{W}(-\lambda) - 1}{\lambda}, \qquad (4.3)$$

### 4.2 Synaptic heterogeneity: *J*(*x*) = 1 + *βf*(*x*), 0 < *β* ≪ 1

Now, suppose that *J*(*x*) = 1 + *βf*(*x*) with *β* ≪ 1. We can then obtain an approximate solution of Hamilton's equations (Equations 3.26 and 3.27) and the corresponding wave speed using regular perturbation theory, along analogous lines to a previous study of the F-KPP equation [10]. We introduce the perturbation expansions

$$x(s) = x_{0}(s) + \beta x_{1}(s) + \mathcal{O}\left(\beta^{2}\right), \qquad p(s) = p_{0}(s) + \beta p_{1}(s) + \mathcal{O}\left(\beta^{2}\right).$$

Taylor expanding *f*(*x*) about *x*_{0} and $\mathcal{W}\left(p\right)=\hat{W}\left(p\right)+\hat{W}\left(-p\right)$ about *p*_{0} then leads to a hierarchy of equations, the first two of which are Equations 4.5, supplemented by the Cauchy data *x*_{0}(0) = 0, *x*_{0}(*t*) = *x*, and *x*_{n}(0) = *x*_{n}(*t*) = 0 for all integers *n* ≥ 1. Equations 4.5 have zeroth-order solutions that are linear in *s*, with constants *λ* and *B*_{0} independent of *s*. Imposing the Cauchy data then implies that *B*_{0} = 0 and that *λ* satisfies the equation ${\mathcal{W}}^{\prime}\left(\lambda \right)=x/t$. Similarly, the first-order equations have solutions involving constants *A*_{1} and *B*_{1} independent of *s*; imposing the Cauchy data then implies that *B*_{1} = 0 and determines *A*_{1}. The function *E*(*x, t*) appearing in the first-order correction to the action is evaluated by substituting for *x*_{0}(*s*) and *p*_{1}(*s*) and using the condition ${\mathcal{W}}^{\prime}\left(\lambda \right)=x/t$; we find that it is independent of *s*, as expected. Similarly, working to first order in *β*, we determine the time-dependent wave speed *c* by imposing the condition *G*(*x*(*t*), *t*) = 0 and performing the perturbation expansions $x\left(t\right)={x}_{0}\left(t\right)+\beta {x}_{1}\left(t\right)+\mathcal{O}\left({\beta}^{2}\right)$ and $\lambda ={\lambda}_{0}+\beta {\lambda}_{1}+\mathcal{O}\left({\beta}^{2}\right)$. Substituting into Equation 4.15 and collecting terms at $\mathcal{O}\left(1\right)$ and $\mathcal{O}\left(\beta \right)$ lead to the result in Equation 4.16, where *c*_{0} is the wave speed of the homogeneous neural field (*β* = 0), given by *c*_{0} = *c*(*λ*_{0}) with *λ*_{0} obtained by minimizing the function *c*(*λ*) defined by Equation 4.3 (see Equation 2.15). Finally, differentiating both sides with respect to *t* and inverting the hyperbolic scaling yield the instantaneous wave speed in the original coordinates.

### 4.3 Numerical results

Numerical simulations of the full neural field equation (Equation 2.1) were performed on the interval -*L* ≤ *x* ≤ *L* with free boundary conditions and an initial condition given by a steep sigmoid of gain *η* = 5, where *l* determines the approximate initial position of the front. For concreteness, we set *L* = 100 and *l* = 10. Space and time units are fixed by setting the range of synaptic weights *σ* = 1 and the time constant *τ* = 1. In Figure 5a, we show snapshots of a pulled front in the case of a homogeneous network with Gaussian weights (Equation 2.6) and piecewise linear firing rate function (Equation 2.3). A corresponding space-time plot is given in Figure 5b, which illustrates that the speed of the front asymptotically approaches the calculated minimal wave speed *c*_{0}. (Note that pulled fronts take an extremely long time to approach the minimal wave speed at high levels of numerical accuracy since the asymptotics are algebraic rather than exponential in time [53].) In Figures 6 and 7, we plot the corresponding results in the case of an inhomogeneous network. For the sake of illustration, we choose the synaptic heterogeneity to be a linear function of displacement, that is, *J*(*x*) = 1 + *ε*(*x* - *l*), where we have set *β* = *ε*. Equation 4.16, combined with Equation 4.3, then implies that the position of the front is approximately a quadratic function of time. We assume that the initial position of the front is *x*(0) = *l*. Hence, our perturbation theory predicts that a linearly increasing modulation in synaptic weights results in the leading edge of the front tracing out a downward parabola in a space-time plot for times $t\ll \mathcal{O}\left(1/{\epsilon}^{2}\right)$. This is consistent with our numerical simulations for *ε*^{2} = 0.005, as can be seen in the space-time plot of Figure 6b. However, our approximation for the time-dependent speed breaks down when $t=\mathcal{O}\left(1/{\epsilon}^{2}\right)$, as illustrated in Figure 7c for *ε*^{2} = 0.01.
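The homogeneous simulations can be reproduced with a simple forward-Euler scheme. The following sketch (grid sizes, time step, and the level-set threshold are our own choices) evolves Equation 2.1 with Gaussian weights and piecewise linear *F*, and estimates the front speed from the displacement of a level set, which should approach the minimal speed c₀ ≈ 0.72 from below for W₀ = 1.2, σ = τ = 1:

```python
import numpy as np

# Parameters matching the homogeneous example: W0 = 1.2, sigma = tau = 1
W0, sigma, kappa = 1.2, 1.0, 1.0
L, dx, dt, T = 100.0, 0.2, 0.02, 60.0
x = np.arange(0.0, L + dx, dx)

# Gaussian weight kernel sampled on the grid (integrates to W0)
y = np.arange(-6 * sigma, 6 * sigma + dx, dx)
w = W0 * np.exp(-y**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def F(a):
    return np.minimum(a, kappa)  # piecewise linear rate function

# Steep sigmoid initial condition centered at l = 10 (argument clipped
# to avoid overflow warnings far ahead of the front)
a = 1.0 / (1.0 + np.exp(np.clip(5.0 * (x - 10.0), -60.0, 60.0)))

def front_position(a, level=0.5 * kappa):
    """Rightmost grid point where a >= level (profile decreases in x)."""
    idx = np.where(a >= level)[0]
    return x[idx[-1]] if idx.size else 0.0

positions, times = [], []
for i in range(int(T / dt)):
    conv = np.convolve(a, w, mode="same") * dx
    a = a + dt * (-a + F(conv))        # forward Euler for da/dt = -a + F(w*a)
    if i % 50 == 0:
        times.append(i * dt)
        positions.append(front_position(a))

# Estimate the asymptotic speed from the second half of the trajectory
speed = (positions[-1] - positions[len(positions) // 2]) / (
    times[-1] - times[len(times) // 2])
```

Because the convergence of a pulled front to its minimal speed is algebraic in time, the measured speed at finite times sits slightly below c₀; running to larger *T* narrows the gap slowly.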

## 5 Path integral formulation of a stochastic neural field

We now turn to the stochastic neural field equation (Equation 5.1), defined for 0 ≤ *t* ≤ *T* with initial condition *A*(*x*, 0) = *Φ*(*x*). Here, *dW*(*x, t*) represents an independent Wiener process such that

$$\left\langle dW(x,t)\right\rangle = 0, \qquad \left\langle dW(x,t)\,dW(x',t')\right\rangle = 2C\left(\frac{|x - x'|}{\lambda}\right)\delta(t - t')\,dt\,dt'.$$

Here, *λ* is the spatial correlation length of the noise such that *C*(*x/λ*) → *δ*(*x*) in the limit *λ* → 0, and *ε* determines the strength of the noise, which is assumed to be weak. We will assume that *g*(0) = 0, so the zero-activity state *A* = 0 is an absorbing state of the system; any noise-induced transition to this state would then result in the extinction of all activity. An example of multiplicative noise that vanishes at *A* = 0 is obtained by carrying out a diffusion approximation of the neural master equation previously introduced by Buice et al. [46, 54] (see Bressloff [55, 56]). Before considering the problem of extinction, a more immediate question is how multiplicative noise affects the propagation of the invasive pulled fronts analyzed in Section 2. Since this particular issue is addressed in some detail elsewhere [49], we only summarize the main findings here.
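A direct way to simulate such a stochastic field is an Euler–Maruyama scheme. The sketch below assumes the Langevin form dA = [−A + F(w∗A)]dt + ε^{1/2}g(A)dW with g(A) = A and spatially uncorrelated noise on the grid (the spatial correlator C is dropped for simplicity; all names are our own), and makes explicit that A ≡ 0 is absorbing, since both the drift and the noise vanish there:

```python
import numpy as np

def euler_maruyama_step(A, w, dx, dt, eps, kappa=1.0, rng=None):
    """One step of A -> A + dt*(-A + F(w*A)) + sqrt(eps*dt)*g(A)*xi,
    with piecewise linear F and g(A) = A, so the zero state is absorbing."""
    if rng is None:
        rng = np.random.default_rng()
    conv = np.convolve(A, w, mode="same") * dx
    drift = -A + np.minimum(conv, kappa)
    noise = np.sqrt(eps * dt) * A * rng.standard_normal(A.size)
    return A + dt * drift + noise
```

Iterating this map from a front-like initial condition and recording level-set positions over many trials yields the wandering statistics discussed below; starting from A ≡ 0, every iterate remains identically zero.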

In the case of reaction-diffusion equations, it has been shown that multiplicative noise causes a pulled front to wander subdiffusively about its mean position, var *Δ*(*t*) ~ *t*^{1/2}, in contrast to the diffusive wandering of a front propagating into a metastable state, for which var *Δ*(*t*) ~ *t* [11]. Such scaling is a consequence of the asymptotic relaxation of the leading edge of the deterministic pulled front. Since pulled front solutions of the neural field equation (Equation 2.1) exhibit similar dynamics, this suggests that there will also be subdiffusive wandering of these fronts in the presence of multiplicative noise. This is indeed found to be the case [49]. More specifically, the multiplicative noise term in Equation 5.1 generates two distinct phenomena that occur on different time scales: a diffusive-like displacement of the front from its uniformly translating position at long time scales, and fluctuations in the front profile around its instantaneous position at short time scales. This can be captured by expressing the solution *A* of Equation 5.1 as a combination of a fixed wave profile *A*_{0} that is displaced by an amount *Δ*(*t*) from its uniformly translating mean position *ξ* = *x - ct*, and a time-dependent fluctuation *Φ* in the front shape about the instantaneous position of the front:

$$A\left(x,t\right)={A}_{0}\left(\xi -\Delta \left(t\right)\right)+{\epsilon }^{1/2}\Phi \left(\xi -\Delta \left(t\right),t\right),$$

where *c* denotes the mean speed of the front. (In the Stratonovich version of multiplicative noise, there is an *ε*-dependent shift in the speed *c*.) Numerical simulations of Equation 5.1 with *F* given by the piecewise linear firing rate (Equation 2.3) and *g*(*A*) = *A* are consistent with subdiffusive wandering of the front, as illustrated in Figure 8. The variance ${\sigma}_{X}^{2}\left(t\right)$ is obtained by averaging over level sets [49]. That is, we determine the positions *X*_{ z }(*t*) such that *A*(*X*_{ z }(*t*), *t*) = *z* for various level set values *z* ∈ (0.5*κ*, 1.3*κ*) and then define the mean location to be $\overline{X}\left(t\right)=\mathbb{E}\left[{X}_{z}\left(t\right)\right]$, where the expectation is first taken with respect to the sampled values *z* and then averaged over *N* trials. The corresponding variance is given by ${\sigma}_{X}^{2}\left(t\right)=\mathbb{E}\left[{\left({X}_{z}\left(t\right)-\overline{X}\left(t\right)\right)}^{2}\right]$. It can be seen that the variance exhibits subdiffusive behavior over long time scales.
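The level-set construction of the front position can be sketched in a few lines of code. The following is a minimal illustration, not the actual simulations of [49]: the sigmoidal toy profiles, grid spacing, jitter amplitude, and level values are all illustrative assumptions.

```python
import math, random

# Level-set estimate of the front location: find positions X_z with
# A(X_z) = z for several levels z, then average over levels and trials.
# The sigmoidal toy profiles below stand in for simulated front snapshots.

def crossing(xs, As, z):
    """Rightmost x where the profile crosses level z (linear interpolation)."""
    for i in range(len(As) - 1, 0, -1):
        a0, a1 = As[i - 1], As[i]
        if (a0 - z) * (a1 - z) <= 0 and a0 != a1:
            return xs[i - 1] + (z - a0) / (a1 - a0) * (xs[i] - xs[i - 1])
    return None

def front_stats(profiles, xs, levels):
    """Mean front location and its variance over levels and trials."""
    pos = []
    for As in profiles:
        for z in levels:
            x = crossing(xs, As, z)
            if x is not None:
                pos.append(x)
    mean = sum(pos) / len(pos)
    var = sum((x - mean) ** 2 for x in pos) / len(pos)
    return mean, var

random.seed(1)
xs = [0.02 * k for k in range(501)]          # spatial grid on [0, 10]
profiles = []
for _ in range(50):                          # 50 'trials'
    d = random.gauss(0.0, 0.1)               # trial-to-trial displacement
    profiles.append([1.0 / (1.0 + math.exp(2.0 * (x - 5.0 - d))) for x in xs])

kappa = 0.5
levels = [0.5 * kappa + 0.08 * kappa * j for j in range(11)]  # z near (0.5k, 1.3k)
mean, var = front_stats(profiles, xs, levels)
print(mean, var)
```

Replacing the toy profiles by snapshots of a simulated front at successive times *t* and repeating the estimate yields the curve ${\sigma}_{X}^{2}\left(t\right)$ whose scaling is examined in Figure 8.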

The starting point of the extinction analysis is a path integral representation of the stochastic neural field equation (Equation 5.1), which can be constructed using standard methods for stochastic differential equations [57-60]. Discretizing space and time by setting *A*_{ i, m } = *A*(*mΔd, iΔt*), *W*_{ i, m } = *Δd*^{-1/2}*W*(*mΔd, iΔt*), and *w*_{ mn } = *Δd* *w*(*mΔd, nΔd*) gives

$${A}_{i+1,m}={A}_{i,m}+\left[-{A}_{i,m}+F\left(\sum_{n}{w}_{mn}{A}_{i,n}\right)\right]\Delta t+{\epsilon }^{1/2}g\left({A}_{i,m}\right)\sqrt{\Delta t}\,{W}_{i,m},$$

where *i* = 0, 1, ..., *N*, *T* = *NΔt*, and **A** and **W** denote the vectors with components *A*_{ i, m } and *W*_{ i, m }, respectively. Formally, the conditional probability density function for **A** given a particular realization of the stochastic process **W** and initial condition *A*_{0, m} = *Φ*_{ m } is a product of Dirac delta functions that enforce the discretized Langevin equation at each lattice site and time step. Each *W*_{ i, n } has the probability density function $P\left({W}_{i,m}\right)={\left(2\pi L\right)}^{-1/2}{e}^{-{W}_{i,m}^{2}/2L}$. Hence, setting $P\left[\mathbf{A}\right]=\int \prod_{i,n}d{W}_{i,n}P\left[\mathbf{A}|\mathbf{W}\right]P\left({W}_{i,n}\right)$, introducing the Fourier representation of each Dirac delta function in terms of a conjugate variable ${\stackrel{~}{A}}_{i,m}$, and performing the integration with respect to *W*_{ j, n } by completing the square, we obtain the result that *P*[**A**] reduces to a Gaussian integral over the conjugate variables. Taking the continuum limit *Δd →* 0 and *Δt →* 0, *N → ∞* for fixed *T* with *A*_{ i, m } *→ A*(*x, t*) and ${\stackrel{\u0303}{A}}_{i,m}/\Delta d\to \stackrel{\u0303}{A}\left(x,t\right)$ gives the following path integral representation of a stochastic neural field:

$$P\left[A\right]=\int \mathcal{D}\left[\stackrel{~}{A}\right]{e}^{-S\left[A,\stackrel{~}{A}\right]},$$

where $S\left[A,\stackrel{~}{A}\right]$ is the action functional obtained as the continuum limit of the discretized Gaussian exponent. Given the probability functional *P*[*A*], we can write down path integral representations of various moments of the stochastic field *A*. For example, the mean field is

$$\left\langle A\left(x,t\right)\right\rangle =\int \mathcal{D}\left[A\right]P\left[A\right]A\left(x,t\right).$$

Similarly, introducing the conditional probability density *p*[*A*_{1}, *t|A*_{0}, 0] for the initial state *A*(*x*, 0) = *A*_{0}(*x*) to be in the final state *A*(*x, t*) = *A*_{1}(*x*) at time *t*, we have

$$p\left[{A}_{1},t|{A}_{0},0\right]={\int }_{A\left(x,0\right)={A}_{0}\left(x\right)}^{A\left(x,t\right)={A}_{1}\left(x\right)}\mathcal{D}\left[A\right]P\left[A\right].$$
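As a sanity check on the discretization step, one can evolve the lattice Langevin equation directly with an Euler-Maruyama scheme. The sketch below assumes an activity-based update of the form stated above with spatially uncorrelated unit-variance noise; the Gaussian weight kernel, its gain, and all parameter values are illustrative assumptions, not values taken from the text.

```python
import math, random

# Euler-Maruyama sketch of the discretized Langevin dynamics,
#   A_{i+1,m} = A_{i,m} + [-A_{i,m} + F(sum_n w_mn A_{i,n})] dt
#               + sqrt(eps*dt) g(A_{i,m}) W_{i,m},
# with spatially uncorrelated unit-variance Gaussian W for simplicity.

random.seed(0)
M, dd, dt, eps = 101, 0.2, 0.01, 1e-3
F = math.tanh
def g(a): return a                  # multiplicative noise with g(0) = 0

# w_mn = dd * w(m*dd, n*dd) for a homogeneous Gaussian kernel of unit width;
# the prefactor 1.2 makes the zero state unstable (W0 = 1.2 > 1).
w = [[1.2 * dd * math.exp(-((m - n) * dd) ** 2) / math.sqrt(math.pi)
      for n in range(M)] for m in range(M)]

A = [1.0 if m < M // 2 else 0.0 for m in range(M)]   # front-like initial data
for _ in range(100):
    drive = [F(sum(w[m][n] * A[n] for n in range(M))) for m in range(M)]
    A = [A[m] + (-A[m] + drive[m]) * dt
         + math.sqrt(eps * dt) * g(A[m]) * random.gauss(0.0, 1.0)
         for m in range(M)]

# g(0) = 0: the noise vanishes on the zero state, which is therefore absorbing,
# while activity still invades the unstable zero state deterministically.
print(min(A), max(A))
```

Running longer trajectories and recording when the maximum activity first falls below a small threshold gives a crude direct estimate of the extinction time, against which the weak-noise theory of Section 6 can be compared.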

## 6 Hamilton-Jacobi dynamics and population extinction in the weak-noise limit

We now use the path integral representation of the conditional probability (Equation 5.13) in order to estimate the time to extinction of a metastable nontrivial state. We proceed along analogous lines to our previous study of a neural master equation with *x*-independent steady states [55, 56], where the effective Hamiltonian dynamical system obtained by extremizing the associated path integral action can be used to determine the most probable or optimal path to the zero absorbing state. Alternatively, one could consider a WKB approximation of solutions to the corresponding functional Fokker-Planck equation or master equation, as previously applied to reaction-diffusion systems [21, 24–27]. The connection between the two approaches is discussed in [21].

In the weak-noise limit *ε →* 0, the path integral is dominated by the 'classical' solutions *q*(*x, t*), *p*(*x, t*), which extremize the exponent or action of the generating functional and thus satisfy

$$\frac{\partial q}{\partial t}=\frac{\delta \mathcal{H}}{\delta p},\phantom{\rule{2em}{0ex}}\frac{\partial p}{\partial t}=-\frac{\delta \mathcal{H}}{\delta q},$$

where *q* is a 'coordinate' variable, *p* is its 'conjugate momentum' and ℋ is the Hamiltonian functional

$$\mathcal{H}\left[q,p\right]=\int p\left[-q+F\left(\int w\left(x,y\right)q\left(y,t\right)dy\right)\right]dx-\frac{1}{2}\int \int p\left(x,t\right)g\left(q\left(x,t\right)\right)C\left(\left[x-{x}^{\prime }\right]/\lambda \right)g\left(q\left({x}^{\prime },t\right)\right)p\left({x}^{\prime },t\right)dx\,d{x}^{\prime },$$

with the sign of *p* chosen so that activation trajectories have *p* ≥ 0. Substituting for ℋ leads to the explicit Hamilton equations:

$$\frac{\partial q}{\partial t}=-q+F\left(\int w\left(x,y\right)q\left(y,t\right)dy\right)-g\left(q\right)\int C\left(\left[x-{x}^{\prime }\right]/\lambda \right)g\left(q\left({x}^{\prime },t\right)\right)p\left({x}^{\prime },t\right)d{x}^{\prime },$$

$$\frac{\partial p}{\partial t}=p-\int {F}^{\prime }\left(\int w\left(y,{y}^{\prime }\right)q\left({y}^{\prime },t\right)d{y}^{\prime }\right)w\left(y,x\right)p\left(y,t\right)dy+{g}^{\prime }\left(q\right)p\int C\left(\left[x-{x}^{\prime }\right]/\lambda \right)g\left(q\left({x}^{\prime },t\right)\right)p\left({x}^{\prime },t\right)d{x}^{\prime }.$$

It can be shown that *q*(*x, t*) and *p*(*x, t*) satisfy the same boundary conditions as the physical neural field *A*(*x, t*) [20]. Thus, in the case of periodic boundary conditions, *q*(*x*+*L, t*) = *q*(*x, t*) and *p*(*x*+*L, t*) = *p*(*x, t*). It also follows from the Hamiltonian structure of Equations 6.5 and 6.6 that there is an integral of motion given by the conserved 'energy' *E* = ℋ[*q, p*].
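The conservation of *E* can be checked numerically by integrating Hamilton's equations with a Runge-Kutta scheme. The sketch below uses a space-clamped (*x*-independent) caricature with *H*(*q, p*) = *p*[-*q* + *F*(*W*_{0}*q*)] - *p*^{2}*g*(*q*)^{2}/2, *F* = tanh and *g*(*q*)^{2} = *q*; this specific form and the value *W*_{0} = 1.2 are illustrative assumptions rather than parameters quoted in the text.

```python
import math

# Energy conservation check for a space-clamped Hamiltonian system,
#   H(q, p) = p[-q + tanh(W0*q)] - (1/2) p^2 g(q)^2, with g(q)^2 = q.

W0 = 1.2

def H(q, p):  return p * (-q + math.tanh(W0 * q)) - 0.5 * p * p * q
def dq(q, p): return -q + math.tanh(W0 * q) - p * q
def dp(q, p): return p * (1.0 - W0 / math.cosh(W0 * q) ** 2) + 0.5 * p * p

def rk4(q, p, h):
    """One classical fourth-order Runge-Kutta step for (q, p)."""
    k1q, k1p = dq(q, p), dp(q, p)
    k2q, k2p = dq(q + h/2*k1q, p + h/2*k1p), dp(q + h/2*k1q, p + h/2*k1p)
    k3q, k3p = dq(q + h/2*k2q, p + h/2*k2p), dp(q + h/2*k2q, p + h/2*k2p)
    k4q, k4p = dq(q + h*k3q, p + h*k3p), dp(q + h*k3q, p + h*k3p)
    return (q + h/6*(k1q + 2*k2q + 2*k3q + k4q),
            p + h/6*(k1p + 2*k2p + 2*k3p + k4p))

q, p = 0.5, 0.05             # arbitrary initial point off the p = 0 manifold
E0 = H(q, p)
for _ in range(2000):        # integrate to t = 20 with step h = 0.01
    q, p = rk4(q, p, 0.01)
print(E0, H(q, p))           # the 'energy' is conserved along the flow
```

Because the flow preserves *E* exactly, the residual drift in *H*(*q, p*) measures only the integrator's truncation error, which is a convenient diagnostic when computing the heteroclinic trajectories discussed below.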

The deterministic dynamics is recovered by setting *p*(*x, t*) *≡* 0, in which case *q*(*x, t*) satisfies the deterministic scalar neural field equation (Equation 2.1). In the *t* → *∞* limit, the resulting trajectory in the infinite-dimensional phase space converges to the steady-state solution ${\mathcal{Q}}_{+}=\left[{q}_{s}\left(x\right),0\right]$, where *q*_{ s }(*x*) satisfies Equation 2.16. The Hamiltonian formulation of extinction events then implies that the most probable path from [*q*_{ s }(*x*), 0] to the absorbing state is the unique zero-energy trajectory that starts at ${\mathcal{Q}}_{+}$ at time *t* = -∞ and approaches another fixed point $\mathcal{P}=\left[0,{p}_{e}\left(x\right)\right]$ at *t* = +*∞* [20, 21]. In other words, this so-called activation trajectory is a heteroclinic connection ${\mathcal{Q}}_{+}\mathcal{P}$ (or instanton solution) in the functional phase space [*q*(*x*), *p*(*x*)]. It can be seen from Equation 6.6 that the momentum component *p*_{ e }(*x*) of the fluctuational fixed point is obtained by setting *∂p/∂t* = 0 with *q* ≡ 0. The condition that *p*_{ e }(*x*) exists and is finite is equivalent to the condition that there exists a stationary solution to the underlying functional Fokker-Planck equation - this puts restrictions on the allowed form for *g*. For the zero-energy trajectory emanating from ${\mathcal{Q}}_{+}$ at *t* = -*∞*, the corresponding action is given by

$${\mathcal{S}}_{0}={\int }_{-\infty }^{\infty }\int p\left(x,t\right)\frac{\partial q\left(x,t\right)}{\partial t}\,dx\,dt,$$

since ℋ[*q, p*] = 0 along the trajectory. In the weak-noise limit, the mean time *τ*_{ e } to extinction from the steady-state solution *q*_{ s }(*x*) is then given by [20, 21]

$${\tau }_{e}\sim {e}^{{\mathcal{S}}_{0}/\epsilon }.$$

For *x*-dependent steady-state solutions *q*_{ s }(*x*), which occur for the Dirichlet boundary conditions and finite *L*, one has to solve Equations 6.5 and 6.6 numerically. Here, we will consider the simpler case of *x*-independent solutions, which occur for periodic boundary conditions or the Dirichlet boundary conditions in the large *L* limit (where boundary layers can be neglected). In this case, *q*(*x, t*) *→ q*(*t*) and *p*(*x, t*) *→ p*(*t*), so that (using the normalization ∫*C*(*y/λ*)*dy* = 1) Equations 6.5 and 6.6 reduce to the planar Hamiltonian system (Equation 6.12)

$$\frac{dq}{dt}=-q+F\left({W}_{0}q\right)-p\,g{\left(q\right)}^{2},\phantom{\rule{2em}{0ex}}\frac{dp}{dt}=p\left[1-{W}_{0}{F}^{\prime }\left({W}_{0}q\right)\right]+g\left(q\right){g}^{\prime }\left(q\right){p}^{2},$$

with ${W}_{0}=\int w\left(y\right)dy$ and conserved energy *E* = *p*[-*q* + *F*(*W*_{0}*q*)] - *p*^{2}*g*(*q*)^{2}/2, so that the nontrivial zero-energy curve is *p* = 2[*F*(*W*_{0}*q*) - *q*]/*g*(*q*)^{2}. Since *F*(*q*) ~ *q* as *q* → 0, we have *p* ~ 2(*W*_{0} - 1)*q*^{1-2 s }/*β*^{2} near *q* = 0 when *g*(*q*) = *βq*^{ s }, so that *p*_{ e } = lim_{ q → 0} *p* is finite and nonzero only for *s* = 1/2 and vanishes for *s* < 1/2. Figure 9 shows phase portraits of the Hamiltonian dynamics for *F*(*q*) = tanh(*q*) and multiplicative noise factor *g*(*q*) = *βq*^{ s } with *β* constant. The zero-energy trajectories are highlighted as thicker curves. Let us first consider the case *s* = 1/2 for which *p*_{ e } = 0.4*β*^{-2} (see Figure 9a). As expected, one zero-energy curve is the line *p* = 0 along which Equation 6.12 reduces to the *x*-independent version of Equation 2.1. If the dynamics were restricted to the one-dimensional manifold *p* = 0, then the nonzero fixed point ${\mathcal{Q}}_{+}=\left({q}_{0},0\right)$ with *q*_{0} = *F*(*W*_{0}*q*_{0}) would be stable. However, it becomes a saddle point of the full dynamics in the (*q, p*) plane, reflecting the fact that it is metastable when fluctuations are taken into account. A second zero-energy curve is the absorbing line *q* = 0, which includes two additional hyperbolic fixed points denoted by ${\mathcal{Q}}_{-}=\left(0,0\right)$ and $\mathcal{P}=\left(0,{p}_{e}\right)$ in Figure 9. ${\mathcal{Q}}_{-}$ occurs at the intersection with the line *p* = 0 and corresponds to the unstable zero-activity state of the deterministic dynamics, whereas $\mathcal{P}$ is associated with the effects of fluctuations. Moreover, there exists a third zero-energy curve, which includes a heteroclinic trajectory joining ${\mathcal{Q}}_{+}$ at *t* = -*∞* to the fluctuational fixed point $\mathcal{P}$ at *t* = +*∞*. This heteroclinic trajectory represents the optimal (most probable) path linking the metastable fixed point to the absorbing boundary. The extinction time *τ*_{ e } is given by Equation 6.10 with ${\mathcal{S}}_{0}/L={\int}_{{Q}_{+}}^{P}pdq\approx 0.15$, where the integral is evaluated along the heteroclinic trajectory from ${\mathcal{Q}}_{+}$ to $\mathcal{P}$ and is equal to the area of the shaded region in Figure 9a. For *s <* 1/2, *p*_{ e } = 0 and the optimal path is a heteroclinic connection from ${\mathcal{Q}}_{+}$ to ${\mathcal{Q}}_{-}$. Hence, ${\mathcal{S}}_{0}/L={\int}_{{\mathcal{Q}}_{+}}^{{\mathcal{Q}}_{-}}pdq\approx 0.05$.
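The action per unit length can also be evaluated by direct quadrature along the activation trajectory. The sketch below assumes the zero-energy curve takes the form *p* = 2[*F*(*W*_{0}*q*) - *q*]/*g*(*q*)^{2} (up to the sign convention for *p*) with *F* = tanh, *g*(*q*) = *βq*^{1/2}, and an assumed weight *W*_{0} = 1.2; the value of *W*_{0} is not quoted in the text.

```python
import math

# Quadrature for the action per unit length along the activation trajectory,
# assuming the zero-energy curve p(q) = 2[F(W0*q) - q]/g(q)^2 with F = tanh
# and g(q) = beta*sqrt(q) (the s = 1/2 case). W0 = 1.2 is an assumption.

W0, beta = 1.2, 1.0

def F(q): return math.tanh(q)
def p_act(q):
    """Momentum on the nontrivial zero-energy curve (s = 1/2)."""
    return 2.0 * (F(W0 * q) - q) / (beta ** 2 * q)

# Nonzero fixed point q0 solving q = F(W0*q), by bisection.
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(W0 * mid) - mid > 0 else (lo, mid)
q0 = 0.5 * (lo + hi)

# S0/L = integral of p_act from q = 0 to q = q0 (midpoint rule avoids q = 0).
n = 2000
S0 = sum(p_act(q0 * (k + 0.5) / n) for k in range(n)) * (q0 / n)

print(q0, p_act(1e-8), S0)   # p_act -> 2*(W0 - 1)/beta**2 = 0.4 as q -> 0
```

The limit *p* → 2(*W*_{0} - 1)/*β*^{2} as *q* → 0 reproduces a finite fluctuational momentum *p*_{ e } characteristic of the *s* = 1/2 case, while the integral itself plays the role of ${\mathcal{S}}_{0}/L$ in the extinction-time estimate; its precise value depends on the assumed *W*_{0}.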

Note that since the extinction time is exponentially large in the weak-noise limit, it is very sensitive to the precise form of the action *S*_{0} and thus the Hamiltonian ℋ. This implies that when approximating the neural master equation of Buice et al. [46, 54] by a Langevin equation in the form of Equation 5.1 with $\sigma \sim 1/\sqrt{N}$, where *N* is the system size, the resulting Hamiltonian differs from that obtained directly from the master equation and can thus generate a poor estimate of the extinction time. This can be shown either by comparing the path integral representations of the generating functional for the two stochastic processes or by comparing the WKB approximation of the master equation with that of the corresponding Fokker-Planck equation. This particular issue is discussed elsewhere for neural field equations [55, 56].

## 7 Discussion

In this paper, we have explored some of the consequences of having an unstable zero-activity state in a scalar neural field model. First, we considered invasive activity fronts propagating into the unstable zero state. These waves exhibit a behavior analogous to pulled fronts in reaction-diffusion systems, in which the wave speed is determined by the spreading velocity within the leading edge of the front. Consequently, front propagation is sensitive to perturbations in the leading edge, which we investigated within the context of spatial heterogeneities in the synaptic weights. We showed how time-dependent corrections to the wave speed could be estimated using a Hamilton-Jacobi theory of sharp interface dynamics combined with perturbation theory. Second, we considered a stochastic version of the neural field model, in which the zero-activity fixed point acts as an absorbing state. By constructing an integral representation of the neural Langevin equation, we showed how to estimate the time to extinction from a nontrivial steady state using a Hamiltonian formulation of large fluctuation paths in the weak noise limit.

It is clear from the results of this paper that neural fields with an unstable zero-activity state exhibit considerably different behaviors compared to models in which the zero state (or down state) is stable. Of course, one important question is whether or not real cortical networks ever exist in a regime where the down state is unstable. From a mathematical viewpoint, one has to choose a very specific form of the firing rate function *F* for such a state to occur (see Figure 1). Nevertheless, Buice and Cowan [46] have demonstrated that a stochastic neural field operating in a regime with an absorbing state belongs to the universality class of directed percolation models and consequently exhibits power law behavior consistent with several measurements of spontaneous cortical activity *in vitro* and *in vivo* [47, 48]. On the other hand, the existence of power law behavior is still controversial [61]. Irrespective of these issues, exploring the connections between nonlocal neural field equations and reaction-diffusion PDEs is likely to be of continued mathematical interest.

## Declarations

### Acknowledgements

This work was supported by the National Science Foundation (DMS 1120327).

## Authors’ Affiliations

## References

- Fisher RA: **The wave of advance of advantageous genes.** *Ann Eugenics* 1937, **7:**353–369.
- Kolmogorov A, Petrovsky I, Piscounoff N: **Étude de l'équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique.** *Moscow University Bull Math* 1937, **1:**1–25.
- Murray JD: *Mathematical Biology*. Berlin: Springer; 1989.
- Holmes EE, Lewis MA, Banks JE, Veit RR: **Partial differential equations in ecology: spatial interactions and population dynamics.** *Ecology* 1994, **75:**17–29.
- Shigesada N, Kawasaki K: *Biological Invasions: Theory and Practice*. Oxford: Oxford University Press; 1997.
- Cantrell RS, Cosner C: *Spatial Ecology via Reaction-Diffusion Equations*. Chichester: Wiley; 2003.
- Volpert V, Petrovskii S: **Reaction-diffusion waves in biology.** *Physics of Life Reviews* 2009, **6:**267–310.
- van Saarloos W: **Front propagation into unstable states.** *Phys Rep* 2003, **386:**29–222.
- Brunet E, Derrida B: **Shift in the velocity of a front due to a cutoff.** *Phys Rev E* 1997, **56:**2597–2604.
- Mendez V, Fort J, Rotstein HG, Fedotov S: **Speed of reaction-diffusion fronts in spatially heterogeneous media.** *Phys Rev E* 2003, **68:**041105.
- Rocco A, Ebert U, van Saarloos W: **Subdiffusive fluctuations of "pulled" fronts with multiplicative noise.** *Phys Rev E* 2000, **65:**R13.
- Shigesada N, Kawasaki K, Teramoto E: **Traveling periodic waves in heterogeneous environments.** *Theor Popul Biol* 1986, **30:**143–160.
- Xin J: **Front propagation in heterogeneous media.** *SIAM Rev* 2000, **42:**161–230.
- Weinberger HF: **On spreading speeds and traveling waves for growth and migration in a periodic habitat.** *J Math Biol* 2002, **45:**511–548.
- Berestycki H, Hamel F, Roques L: **Analysis of the periodically fragmented environment model: II-biological invasions and pulsating travelling fronts.** *J Math Biol* 2005, **51:**75–113.
- Gartner J, Freidlin MI: **On the propagation of concentration waves in periodic and random media.** *Soviet Math Dokl* 1979, **20:**1282–1286.
- Freidlin MI: **Limit theorems for large deviations and reaction-diffusion equations.** *Ann Probab* 1985, **13:**639–675.
- Freidlin MI: **Geometric optics approach to reaction-diffusion equations.** *SIAM J Appl Math* 1986, **46:**222–232.
- Evans LC, Sougandis PE: **A PDE approach to geometric optics for certain semilinear parabolic equations.** *Ind Univ Math J* 1989, **38:**141–172.
- Ovaskainen O, Meerson B: **Stochastic models of population extinction.** *Trends Ecol Evol* 2010, **25:**643–652.
- Elgart V, Kamenev A: **Rare event statistics in reaction-diffusion systems.** *Phys Rev E* 2004, **70:**041106.
- Kamenev A, Meerson B: **Extinction of an infectious disease: a large fluctuation in a nonequilibrium system.** *Phys Rev E* 2008, **77:**061107.
- Assaf M, Kamenev A, Meerson B: **Population extinction risk in the aftermath of a catastrophic event.** *Phys Rev E* 2009, **79:**011127.
- Meerson B, Sasorov PV: **Extinction rates of established spatial populations.** *Phys Rev E* 2011, **83:**011129.
- Freidlin MI, Wentzell AD: *Random Perturbations of Dynamical Systems*. New York: Springer; 1984.
- Dykman MI, Mori E, Ross J, Hunt PM: **Large fluctuations and optimal paths in chemical kinetics.** *J Chem Phys A* 1994, **100:**5735–5750.
- Maier RS, Stein DL: **Limiting exit location distribution in the stochastic exit problem.** *SIAM J Appl Math* 1997, **57:**752–790.
- Ermentrout GB: **Neural networks as spatio-temporal pattern-forming systems.** *Rep Prog Phys* 1998, **61:**353–430.
- Coombes S: **Waves, bumps and patterns in neural field theories.** *Biol Cybern* 2005, **93:**91–108.
- Bressloff PC: **Spatiotemporal dynamics of continuum neural fields. Topical review.** *J Phys A: Math Theor* 2012, **45:**033001.
- Pinto D, Ermentrout GB: **Spatially structured activity in synaptically coupled neuronal networks: I. Traveling fronts and pulses.** *SIAM J Appl Math* 2001, **62:**206–225.
- Richardson KA, Schiff SJ, Gluckman BJ: **Control of traveling waves in the mammalian cortex.** *Phys Rev Lett* 2005, **94:**028103.
- Huang X, Troy WC, Yang Q, Ma H, Laing CR, Schiff SJ, Wu J: **Spiral waves in disinhibited mammalian neocortex.** *J Neurosci* 2004, **24:**9897–9902.
- Pinto D, Patrick SL, Huang WC, Connors BW: **Initiation, propagation, and termination of epileptiform activity in rodent neocortex in vitro involve distinct mechanisms.** *J Neurosci* 2005, **25:**8131–8140.
- Wilson HR, Blake R, Lee SH: **Dynamics of traveling waves in visual perception.** *Nature* 2001, **412:**907–910.
- Lee SH, Blake R, Heeger DJ: **Traveling waves of activity in primary visual cortex during binocular rivalry.** *Nat Neurosci* 2005, **8:**22–23.
- Kang M, Heeger DJ, Blake R: **Periodic perturbations producing phase-locked fluctuations in visual perception.** *J Vision* 2009, **9:**1–12.
- Bressloff PC, Webber MA: **Neural field model of binocular rivalry waves.** *J Comput Neurosci*, in press.
- Amari S: **Dynamics of pattern formation in lateral inhibition type neural fields.** *Biol Cybern* 1977, **27:**77–87.
- Ermentrout GB, McLeod JB: **Existence and uniqueness of travelling waves for a neural network.** *Proc Roy Soc Edin A* 1993, **123:**461–478.
- Bressloff PC: **Traveling fronts and wave propagation failure in an inhomogeneous neural network.** *Physica D* 2001, **155:**83–100.
- Coombes S, Laing CR: **Pulsating fronts in periodically modulated neural field models.** *Phys Rev E* 2011, **83:**011912.
- Coombes S, Laing CR, Schmidt H, Svanstedt N, Wyller JA: **Waves in random neural media.** *Disc Cont Dyn Syst A*, in press.
- Folias SE, Bressloff PC: **Stimulus-locked traveling pulses and breathers in an excitatory neural network.** *SIAM J Appl Math* 2005, **65:**2067–2092.
- Ermentrout GB, Jalics JZ, Rubin JE: **Stimulus-driven traveling solutions in continuum neuronal models with a general smooth firing rate function.** *SIAM J Appl Math* 2010, **70:**3039–3054.
- Buice M, Cowan JD: **Field-theoretic approach to fluctuation effects in neural networks.** *Phys Rev E* 2007, **75:**051919.
- Beggs JM, Plenz D: **Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures.** *J Neurosci* 2004, **24:**5216.
- Plenz D, Thiagarajan TC: **The organizing principles of neuronal avalanches: cell assemblies in the cortex?** *Trends Neurosci* 2007, **30:**101.
- Bressloff PC, Webber MA: **Front propagation in stochastic neural fields.** *SIAM J Appl Dyn Syst*, in press.
- Gourley SA: **Travelling front solutions of a nonlocal Fisher equation.** *J Math Biol* 2000, **41:**272–284.
- Kilpatrick ZP, Folias SE, Bressloff PC: **Traveling pulses and wave propagation failure in an inhomogeneous neural network.** *SIAM J Appl Dyn Syst* 2008, **7:**161–185.
- Xu W, Huang X, Takagaki K, Wu J-Y: **Compression and reflection of visually evoked cortical waves.** *Neuron* 2007, **55:**119–129.
- Ebert U, van Saarloos W: **Front propagation into unstable states: universal algebraic convergence towards uniformly translating pulled fronts.** *Physica D* 2000, **146:**1–99.
- Buice M, Cowan JD, Chow CC: **Systematic fluctuation expansion for neural network activity equations.** *Neural Comp* 2010, **22:**377–426.
- Bressloff PC: **Stochastic neural field theory and the system-size expansion.** *SIAM J Appl Math* 2009, **70:**1488–1521.
- Bressloff PC: **Metastable states and quasicycles in a stochastic Wilson-Cowan model of neuronal population dynamics.** *Phys Rev E* 2010, **85:**051903.
- Gardiner CW: *Stochastic Methods: A Handbook for the Natural and Social Sciences*. 4th edition. Berlin: Springer; 2009.
- Zinn-Justin J: *Quantum Field Theory and Critical Phenomena*. 4th edition. Oxford: Oxford University Press; 2002.
- Tauber UC: **Field-theory approaches to nonequilibrium dynamics.** *Lecture Notes in Physics* 2007, **716:**295–348.
- Chow CC, Buice M: **Path integral methods for stochastic differential equations.** 2011. arXiv:1009.5966
- Bedard C, Destexhe A: **Macroscopic models of local field potentials and the apparent 1/f noise in brain activity.** *Biophys J* 2009, **96:**2589–2603.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.