Open Access

From invasion to extinction in heterogeneous neural fields

The Journal of Mathematical Neuroscience 2012, 2:6

https://doi.org/10.1186/2190-8567-2-6

Received: 15 December 2011

Accepted: 26 March 2012

Published: 26 March 2012

Abstract

In this paper, we analyze the invasion and extinction of activity in heterogeneous neural fields. We first consider the effects of spatial heterogeneities on the propagation of an invasive activity front. In contrast to previous studies of front propagation in neural media, we assume that the front propagates into an unstable rather than a metastable zero-activity state. For sufficiently localized initial conditions, the asymptotic velocity of the resulting pulled front is given by the linear spreading velocity, which is determined by linearizing about the unstable state within the leading edge of the front. One of the characteristic features of these so-called pulled fronts is their sensitivity to perturbations inside the leading edge. This means that standard perturbation methods for studying the effects of spatial heterogeneities or external noise fluctuations break down. We show how to extend a partial differential equation method for analyzing pulled fronts in slowly modulated environments to the case of neural fields with slowly modulated synaptic weights. The basic idea is to rescale space and time so that the front becomes a sharp interface whose location can be determined by solving a corresponding local Hamilton-Jacobi equation. We use steepest descents to derive the Hamilton-Jacobi equation from the original nonlocal neural field equation. In the case of weak synaptic heterogeneities, we then use perturbation theory to solve the corresponding Hamilton equations and thus determine the time-dependent wave speed. In the second part of the paper, we investigate how time-dependent heterogeneities in the form of extrinsic multiplicative noise can induce rare noise-driven transitions to the zero-activity state, which now acts as an absorbing state signaling the extinction of all activity.
In this case, the most probable path to extinction can be obtained by solving the classical equations of motion that dominate a path integral representation of the stochastic neural field in the weak noise limit. These equations take the form of nonlocal Hamilton equations in an infinite-dimensional phase space.

Keywords

neural fields; invasive pulled fronts; heterogeneous media; Hamilton-Jacobi equation; path integrals; multiplicative noise; large fluctuations; population extinction

1 Introduction

Reaction-diffusion equations based on the Fisher-Kolmogorov-Petrovskii-Piskunov (F-KPP) model and its generalizations have been used extensively to describe the spatial spread of invading species including plants, insects, diseases, and genes in terms of propagating fronts [1–7]. One fundamental result in the theory of deterministic fronts is the difference between fronts propagating into a linearly unstable (zero) state and those propagating into a metastable state (a state that is linearly stable but nonlinearly unstable). In the latter case, the front has a unique velocity that is obtained by solving the associated partial differential equation (PDE) in traveling wave coordinates. The former, on the other hand, supports a continuum of possible velocities and associated traveling wave solutions; the particular velocity selected depends on the initial conditions. Fronts propagating into unstable states can be further partitioned into two broad categories: the so-called pulled and pushed fronts [8] emerging from sufficiently localized initial conditions. Pulled fronts propagate into an unstable state such that the asymptotic velocity is given by the linear spreading speed υ*, which is determined by linearizing about the unstable state within the leading edge of the front. That is, perturbations around the unstable state within the leading edge grow and spread with speed υ*, thus 'pulling along' the rest of the front. On the other hand, pushed fronts propagate into an unstable state with a speed greater than υ*, and it is the nonlinear growth within the region behind the leading edge that pushes the front speeds to higher values. One of the characteristic features of pulled fronts is their sensitivity to perturbations in the leading edge of the wave [9]. This means that standard perturbation methods for studying the effects of spatial heterogeneities [10] or external noise fluctuations [11] break down.

Nevertheless, a number of analytical and numerical methods have been developed to study propagating invasive fronts in heterogeneous media. Heterogeneity is often incorporated by assuming that the diffusion coefficient and the growth rate of a population are periodically varying functions of space. One of the simplest examples of a single population model in a periodic environment was proposed by Shigesada et al. [5, 12], in which two different homogeneous patches are arranged alternately in one-dimensional space so that the diffusion coefficient and the growth rate are given by periodic step functions. The authors showed how an invading population starting from a localized perturbation evolves to a traveling periodic wave in the form of a pulsating front. By linearizing around the leading edge of the wave, they also showed how the minimal wave speed of the pulsating front could be estimated by finding solutions of a corresponding Hill equation [12]. The theory of pulsating fronts has also been developed in a more general and rigorous setting [13–15]. An alternative method for analyzing fronts in heterogeneous media, which is applicable to slowly modulated environments, was originally developed by Freidlin [16–18] using large deviation theory and subsequently reformulated in terms of PDEs by Evans and Souganidis [19]. More recently, it has been used to study waves in heterogeneous media (see for example [10, 13]). The basic idea is to rescale space and time so that the front becomes a sharp interface whose location can be determined by solving a corresponding Hamilton-Jacobi equation.

Another important topic in population biology is estimating the time to extinction of a population in the presence of weak intrinsic or extrinsic noise sources, after having successfully invaded a given spatial domain [20]. The zero state (which is unstable in the deterministic limit) now acts as an absorbing state, which can be reached via noise-induced transitions from the nontrivial metastable steady state. The most probable path to extinction is determined in terms of classical solutions of an effective Hamiltonian dynamical system [21–24]. The latter can be obtained in the weak noise limit by considering a Wentzel-Kramers-Brillouin (WKB) approximation of solutions to a master equation (intrinsic noise) or Fokker-Planck equation (extrinsic noise) [25–27]; alternatively, the Hamilton equations can be obtained from a corresponding path integral representation of the stochastic population model [21].

In this paper, we extend the Hamiltonian-based approaches to invasion and extinction in reaction-diffusion models to the case of a scalar neural field model. Neural fields represent the large-scale dynamics of spatially structured networks of neurons in terms of nonlinear integrodifferential equations, whose associated kernels represent the spatial distribution of neuronal synaptic connections. Such models provide an important example of spatially extended dynamical systems with nonlocal interactions. As in the case of reaction-diffusion systems, neural fields can exhibit a rich repertoire of wave phenomena, including solitary traveling fronts, pulses, and spiral waves [28–30]. They have been used to model wave propagation in cortical slices [31, 32] and in vivo [33]. A common in vitro experimental method for studying wave propagation is to remove a slice of brain tissue and bathe it in a pharmacological medium that blocks the effects of inhibition. Synchronized discharges can then be evoked by a weak electrical stimulus to a local site on the slice, and each discharge propagates away from the stimulus at a characteristic speed of about 60 to 90 mm/s [32, 34]. These waves typically take the form of traveling pulses with the decay of activity at the trailing edge resulting from some form of local adaptation or refractory process. On the other hand, a number of phenomena in visual perception involve the propagation of a traveling front, in which a suppressed visual percept replaces a dominant percept within the visual field of an observer. A classical example is the wave-like propagation of perceptual dominance during binocular rivalry [35–38].

In the case of a scalar neural field equation with purely excitatory connections and a sigmoidal or Heaviside firing rate function, it can be proven that there exists a traveling front solution with a unique speed that depends on the firing threshold and the range/strength of synaptic weights [39, 40]. The wave, thus, has characteristics typical of a front propagating into a metastable state. Various generalizations of such front solutions have also been developed in order to take into account the effects of network inhomogeneities [41–43], external stimuli [44, 45], and network competition in a model of binocular rivalry waves [38]. As far as we are aware, however, there has been very little work on neural fields supporting pulled fronts, except for an analysis of pulsating pulled fronts in [42]. One possible motivation for considering such fronts is that they arise naturally in the deterministic limit of stochastic neural fields with a zero absorbing state. Indeed, Buice and Cowan [46] have previously used path integral methods and renormalization group theory to establish that a stochastic neural field with an absorbing state belongs to the universality class of directed percolation models and consequently exhibits power law behavior consistent with several measurements of spontaneous cortical activity in vitro and in vivo [47, 48].

We begin by considering a neural field model that supports propagating pulled fronts, and determine the asymptotic wave speed by linearizing about the unstable state within the leading edge of the front (Section 2). We then introduce a spatial heterogeneity in the form of a slow modulation in the strength of synaptic connections (Section 3). Using a WKB approximation of the solution of a rescaled version of the neural field equation and carrying out steepest descents, we derive a local Hamilton-Jacobi equation for the dynamics of the sharp interface (in rescaled space-time coordinates). We then use perturbation methods to solve the associated Hamilton equations under the assumption that the amplitude of the spatial modulations is sufficiently small (Section 4). The resulting solution determines the location of the front as a function of time, from which the instantaneous speed of the front can be calculated. In the case of linearly varying modulations, the position of the front is approximately a quadratic function of time. In the second part of the paper, we investigate how time-dependent heterogeneities in the form of extrinsic multiplicative noise can induce rare noise-driven transitions to the zero-activity state, which now acts as an absorbing state signaling the extinction of all activity (on a shorter time scale, noise results in a subdiffusive wandering of the front [49]). We proceed by first constructing a path integral representation of the stochastic neural field (Section 5). The most probable path to extinction can then be obtained by solving the classical equations of motion that dominate the path integral representation in the weak noise limit; these equations take the form of nonlocal Hamilton equations in an infinite-dimensional phase space (Section 6).

2 Neural fields with propagating pulled fronts

We begin by constructing a homogeneous neural field equation that supports a propagating pulled front, and calculate its wave speed. We will adopt the activity-based rather than voltage-based representation of a neural field by taking
\tau \frac{\partial a(x,t)}{\partial t} = -a(x,t) + F\left(\int_{-\infty}^{\infty} w(x - x')\, a(x', t)\, dx'\right).
(2.1)
For the moment, we consider an unbounded domain with x ∈ ℝ. Here, the field a(x, t) represents the instantaneous firing rate of a local population of neurons at position x and time t, w(x) is the distribution of synaptic weights, τ is a time constant, and F is a nonlinear activation function (for a detailed discussion of different neural field representations and their derivations, see the reviews [28, 30]). We also have the additional constraint that a(x, t) ≥ 0 for all (x, t). Note that the restriction to positive values of a is a feature shared with population models in ecology or evolutionary biology, for example, where the corresponding dependent variables represent number densities. Indeed, Equation 2.1 has certain similarities with a nonlocal version of the F-KPP equation, which takes the form [50]
\tau \frac{\partial p(x,t)}{\partial t} = D \frac{\partial^2 p(x,t)}{\partial x^2} + \mu\, p(x,t)\left[1 - \int_{-\infty}^{\infty} K(x - x')\, p(x', t)\, dx'\right].
(2.2)

One major difference from a mathematical perspective is that Equation 2.2 supports traveling fronts even when the range of the interaction kernel K goes to zero, that is, K(x) → δ(x), since we recover the standard local F-KPP equation [1, 2]. In particular, as the nonlocal interactions appear nonlinearly in Equation 2.2, they do not contribute to the linear spreading velocity in the leading edge of the front. On the other hand, nonlocal interactions play a necessary role in the generation of fronts in the neural field equation (Equation 2.1).

Suppose that F(a) in Equation 2.1 is a positive, bounded, monotonically increasing function of a with F(0) = 0, lim_{a→0+} F′(a) = 1, and lim_{a→∞} F(a) = κ for some positive constant κ. For concreteness, we take
F(a) = \begin{cases} 0, & a \le 0, \\ a, & 0 < a \le \kappa, \\ \kappa, & a > \kappa. \end{cases}
(2.3)
A homogeneous fixed point solution a* of Equation 2.1 satisfies
a^* = F(W_0 a^*), \qquad W_0 = \int_{-\infty}^{\infty} w(y)\, dy.
(2.4)
In the case of the given piecewise linear firing rate function, we find that if W0 > 1, then there exists an unstable fixed point at a* = 0 (absorbing state) and a stable fixed point at a* = κ (see Figure 1a). The construction of a front solution linking the stable and unstable fixed points differs considerably from that considered in neural fields with sigmoidal or Heaviside nonlinearities [28, 39], where the front propagates into a metastable state (see Figure 1b). Following the PDE theory of fronts propagating into unstable states [8], we expect that there will be a continuum of front velocities and associated traveling wave solutions. A conceptual framework for studying such solutions is the linear spreading velocity υ*, which is the asymptotic rate with which an initial localized perturbation spreads into an unstable state based on the linear equations obtained by linearizing the full nonlinear equations about the unstable state. Thus, we consider a traveling wave solution A(x - ct) of Equation 2.1 with A(ξ) → κ as ξ → -∞ and A(ξ) → 0 as ξ → ∞. One can determine the range of velocities c for which such a solution exists by assuming that A(ξ) ~ e^{-λξ} for sufficiently large ξ.
Figure 1

Plots of firing rate function. Intercepts of y = F(W0 a) with the straight line y = a determine homogeneous fixed points. (a) Piecewise linear rate function (Equation 2.3) showing the existence of an unstable fixed point at a = 0 and a stable fixed point at a = κ. (b) Sigmoidal rate function F(a) = 2/(1 + e^{-2[a-κ]}) showing the existence of two stable fixed points separated by an unstable fixed point.

The exponential decay of the front suggests that we linearize Equation 2.1, which, in traveling wave coordinates (with τ = 1), takes the form
-c \frac{dA(\xi)}{d\xi} = -A(\xi) + \int_{-\infty}^{\infty} w(\xi - \xi')\, A(\xi')\, d\xi'.
(2.5)
However, in order to make the substitution A(ξ) ~ e^{-λξ}, we need to restrict the integration domain of ξ′ to the leading edge of the front. Suppose, for example, that w(x) is given by the Gaussian distribution,
w(x) = \frac{W_0}{\sqrt{2\pi\sigma^2}}\, e^{-x^2/2\sigma^2}.
(2.6)
Given the fact that the front solution A(ξ) is bounded, we introduce a cutoff X with σ ≪ X ≪ ξ and approximate Equation 2.5 by
-c \frac{dA(\xi)}{d\xi} = -A(\xi) + \int_{\xi - X}^{\xi + X} w(\xi - \xi')\, A(\xi')\, d\xi'.
(2.7)
Substituting the exponential solution into Equation 2.7 then yields the dispersion relation c = c(λ) with
c(\lambda) = \frac{1}{\lambda}\left[\int_{-X}^{X} w(y)\, e^{-\lambda y}\, dy - 1\right].
(2.8)
Finally, we now take the limit X → ∞ under the assumption that w(y) is an even function to yield
c(\lambda) = \frac{1}{\lambda}\left[W(\lambda) - 1\right],
(2.9)
where W(λ) = Ŵ(λ) + Ŵ(-λ) and Ŵ(λ) is the Laplace transform of w(x):
\hat{W}(\lambda) = \int_0^{\infty} w(y)\, e^{-\lambda y}\, dy.
(2.10)
We are assuming that w(y) decays sufficiently fast as |y| → ∞ so that the Laplace transform Ŵ(λ) exists for bounded, negative values of λ. This holds in the case of the Gaussian distribution (Equation 2.6). In particular,
W(\lambda) = \int_{-\infty}^{\infty} w(y)\, e^{-\lambda y}\, dy = \frac{W_0}{\sqrt{2\pi\sigma^2}}\int_{-\infty}^{\infty} e^{-y^2/2\sigma^2}\, e^{-\lambda y}\, dy = W_0\, e^{\lambda^2\sigma^2/2}.
(2.11)
Hence,
c(\lambda) = \frac{W_0\, e^{\lambda^2\sigma^2/2} - 1}{\lambda}.
(2.12)
If W0 > 1 (necessary for the zero-activity state to be unstable), then c(λ) is a positive unimodal function with c(λ) → ∞ as λ → 0 or λ → ∞ and a unique minimum at λ = λ0, where λ0 is the solution to the implicit equation
\lambda_0^2 = \frac{W_0 - e^{-\lambda_0^2\sigma^2/2}}{\sigma^2 W_0}.
(2.13)
Example dispersion curves are shown in Figure 2a for various values of the Gaussian weight amplitude W0. Combining Equations 2.12 and 2.13 shows that
Figure 2

Velocity dispersion curves and asymptotic front profile. (a) Velocity dispersion curves c = c(λ) for a pulled front solution of the neural field equation (Equation 2.1) with piecewise linear firing rate function (Equation 2.3) and a Gaussian weight distribution (Equation 2.6) with amplitude W0. Here, σ = 1.0, κ = 0.4, and W0 = 1.2, 1.5, 2.0, 2.5, 3.0. Black dots indicate the minimum wave speed c0 for each value of W0. (b) Asymptotic front profile in the case W0 = 1.2. Inset: linear displacement X(t) of a level set of the front as a function of time t.

\frac{c_0}{\lambda_0} = \sigma^2 W_0\, e^{\lambda_0^2\sigma^2/2} = \sigma^2(\lambda_0 c_0 + 1),
(2.14)
so that
\lambda_0 = \frac{1}{2}\left[-\frac{1}{c_0} + \sqrt{\frac{1}{c_0^2} + \frac{4}{\sigma^2}}\right].
(2.15)
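As a numerical aside (not part of the original analysis), the minimum of the dispersion curve (Equation 2.12) and the consistency of Equation 2.15 can be checked directly; the sketch below uses the parameter values of Figure 2 (W0 = 1.2, σ = 1) and a simple grid minimization, which is our own arbitrary choice of method.

```python
import numpy as np

# Sketch: locate the minimum of the dispersion curve c(lambda) of
# Equation 2.12 for the Gaussian kernel, then verify the closed-form
# relation (Equation 2.15) between lambda_0 and c_0.
W0, sigma = 1.2, 1.0

def c(lam):
    # Equation 2.12
    return (W0 * np.exp(lam**2 * sigma**2 / 2) - 1) / lam

lam = np.linspace(0.01, 3.0, 300001)
cvals = c(lam)
i0 = np.argmin(cvals)
lam0, c0 = lam[i0], cvals[i0]

# Equation 2.15: lambda_0 expressed in terms of c_0
lam0_closed = 0.5 * (-1 / c0 + np.sqrt(1 / c0**2 + 4 / sigma**2))
```

For W0 = 1.2 this gives c0 ≈ 0.72, and the grid minimizer lam0 agrees with lam0_closed to within the grid resolution.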

If the full nonlinear system supports a pulled front, then a sufficiently localized initial perturbation (one that decays faster than e^{-λ0 x}) will asymptotically approach the traveling front solution with the minimum wave speed c0 = c(λ0). Note that c0 ~ σ and λ0 ~ σ^{-1}. In Figure 2b, we show an asymptotic front profile obtained by numerically solving the neural field equation (Equation 2.1) when W0 = 1.2 (see Section 4.3). The corresponding displacement of the front is a linear function of time with a slope consistent with the minimal wave speed c0 ≈ 0.7 of the corresponding dispersion curve shown in Figure 2a. This wave speed is independent of κ.
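The pulled front can also be reproduced by direct simulation of Equation 2.1. The following is a minimal explicit Euler sketch; the grid spacing, kernel cutoff, and half-maximum front marker are our own illustrative choices, and at finite times the measured slope sits somewhat below the asymptotic value c0 because pulled fronts relax slowly to their asymptotic speed.

```python
import numpy as np

# Sketch: explicit Euler simulation of the neural field (Equation 2.1)
# with the piecewise linear rate (Equation 2.3) and Gaussian weights
# (Equation 2.6); parameters follow Figure 2 (W0 = 1.2, sigma = 1,
# kappa = 0.4). Grid sizes and thresholds are illustrative choices.
W0, sigma, kappa, tau = 1.2, 1.0, 0.4, 1.0
dx, dt = 0.1, 0.02
x = np.arange(0.0, 120.0, dx)

def F(a):
    return np.clip(a, 0.0, kappa)   # piecewise linear rate, Equation 2.3

# discretized Gaussian kernel, truncated at +/- 5 sigma
xk = np.arange(-5.0 * sigma, 5.0 * sigma + dx, dx)
w = W0 * np.exp(-xk**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2) * dx

a = np.where(x < 5.0, kappa, 0.0)   # sufficiently localized initial data
times, fronts = [], []
for n in range(1, int(60.0 / dt) + 1):
    conv = np.convolve(a, w, mode='same')
    a = a + (dt / tau) * (-a + F(conv))
    t = n * dt
    if n % 100 == 0 and t > 20.0:   # discard the initial transient
        front = x[np.where(a > 0.5 * kappa)[0][-1]]  # rightmost half-max point
        times.append(t)
        fronts.append(front)

speed = np.polyfit(times, fronts, 1)[0]   # front displacement is ~linear in t
```

The fitted slope lies a little below c0 ≈ 0.72 at these finite times, consistent with the slow relaxation of pulled fronts.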

In the above analysis, we neglected the effects of boundary conditions on front propagation, which is a reasonable approximation if the size L of the spatial domain satisfies L ≫ σ, where σ is the range of synaptic weights. In the case of a finite domain, following passage of an invasive activity front, the network settles into a nonzero stable steady state, whose spatial structure will depend on the boundary conditions. The steady-state equation takes the form
a(x) = F\left(\int_0^L w(x - x')\, a(x')\, dx'\right).
(2.16)
In the case of Dirichlet boundary conditions, a(0, t) = a(L, t) = 0 with L ≫ σ, the steady state will be uniform in the bulk of the domain with a(x) ≈ a0, except for boundary layers at both ends. Here, a0 is the nonzero solution to the equation a0 = F(W0 a0). Examples of steady-state solutions are plotted for various choices of L in Figure 3. (Note that the sudden drop to zero right on the boundaries reflects the nonlocal nature of the neural field equation.) Next, we consider a periodic domain with w(x) = w(x + L) for all x ∈ [0, L]. One way to generate such a weight distribution is to take w(x) = Σn w0(x + nL) with w0(x) given by the Gaussian distribution of Equation 2.6. In this case, there is a unique nontrivial steady-state solution a(x) = a0 for all x. For both choices of boundary condition, there still exists the unstable zero-activity state a(x) ≡ 0. Now, we suppose that some source of multiplicative noise is added to the neural field equation, which vanishes when a(x, t) = 0. It is then possible that noise-induced fluctuations drive the system to the zero-activity state, resulting in the extinction of all activity, since the noise also vanishes there. Assuming that the noise is weak, the time to extinction will be exponentially large, so it is very unlikely to occur during the passage of the invasive front.
Figure 3

Stable steady-state solutions. Stable steady-state solution a(x, t) = A s (x) of neural field equation (Equation 2.1) on a finite spatial domain of length L with boundary conditions a(0, t) = a(L, t) = 0. Here, W0 = 1.2, σ = 1, and κ = 0.4. (a) L = 5. (b) L = 25. In the presence of multiplicative noise, fluctuations can drive the network to the zero absorbing state, resulting in the extinction of activity (see Section 6).
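Steady states like those of Figure 3 can be computed by iterating Equation 2.16 to a fixed point, with zero activity outside [0, L] standing in for the boundary conditions. With the piecewise linear rate, W0 = 1.2, and κ = 0.4, the nonzero bulk solution of a0 = F(W0 a0) is a0 = κ = 0.4. The iteration scheme and grid below are our own choices, not taken from the paper.

```python
import numpy as np

# Sketch: fixed-point iteration of the steady-state equation (Equation 2.16)
# on [0, L]; zero-padding in the convolution stands in for the Dirichlet-type
# conditions a(0) = a(L) = 0. Parameters follow Figure 3b (L = 25).
W0, sigma, kappa, L, dx = 1.2, 1.0, 0.4, 25.0, 0.05
x = np.arange(0.0, L, dx)

def F(a):
    return np.clip(a, 0.0, kappa)   # piecewise linear rate, Equation 2.3

xk = np.arange(-5.0 * sigma, 5.0 * sigma + dx, dx)
w = W0 * np.exp(-xk**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2) * dx

a = np.full_like(x, 0.3)            # arbitrary nonzero initial guess
for _ in range(500):
    a = F(np.convolve(a, w, mode='same'))

bulk = a[len(x) // 2]               # bulk value, should equal a0 = kappa
```

The converged profile is flat at a0 = 0.4 in the bulk, with boundary layers at both ends where the truncated convolution reduces the synaptic input.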

In the remainder of the paper, we consider two distinct problems associated with the presence of an unstable zero-activity state in the neural field model (Equation 2.1). First, how do slowly varying spatial heterogeneities in the synaptic weights affect the speed of propagating pulled fronts (Sections 3 and 4)? Second, what is the estimated time for extinction of a stable steady state in the presence of multiplicative extrinsic noise (Sections 5 and 6)? Both problems are linked from a methodological perspective since the corresponding analysis reduces to solving an effective Hamiltonian dynamical system.

3 Hamilton-Jacobi dynamics and slow spatial heterogeneities

Most studies of neural fields assume that the synaptic weight distribution only depends upon the distance between interacting populations, that is, w(x, y) = w(|x - y|). This implies translation symmetry of the underlying integrodifferential equations (in an unbounded or periodic domain). However, if one looks more closely at the anatomy of the cortex, it is clear that it is far from homogeneous, having a structure at multiple spatial scales. For example, to a first approximation, the primary visual cortex (V1) has a periodic-like microstructure on the millimeter length scale, reflecting the existence of various stimulus feature maps. This has motivated a number of studies concerned with the effects of a periodically modulated weight distribution on front propagation in neural fields [4143, 51]. A typical example of a periodically modulated weight distribution is
w(x, y) = w(x - y)\left[1 + K(y/\epsilon)\right],
(3.1)

where 2πε is the period of the modulation with K(x) = K(x + 2π) for all x. In the case of a sigmoidal or Heaviside nonlinearity, two alternative methods for analyzing the effects of periodic wave modulation have been used: one is based on homogenization theory for small ε [41, 51], and the other is based on analyzing interfacial dynamics [42, 43]. Both approaches make use of the observation that for sufficiently small amplitude modulations, numerical simulations of the inhomogeneous network show a front-like wave separating high and low activity metastable states. However, the wave does not propagate with constant speed but oscillates periodically in an appropriately chosen moving frame. This pulsating front solution satisfies the periodicity condition a(x, t) = a(x + Δ, t + T), so we can define the mean speed of the wave to be c = Δ/T.

Recently, Coombes and Laing [42] have analyzed the propagation of pulsating pulled fronts in a neural field with periodically modulated weights, extending the previous work of Shigesada et al. on reaction-diffusion models of the spatial spread of invading species into heterogeneous environments [5, 12]. We briefly sketch the basic steps in the analysis. First, we substitute the periodically modulated weight distribution (Equation 3.1) into Equation 2.1 and linearize about the leading edge of the wave where a(x, t) ~ 0:
\frac{\partial a(x,t)}{\partial t} = -a(x,t) + \int_{-\infty}^{\infty} w(x - y)\left[1 + K(y/\epsilon)\right] a(y, t)\, dy.
(3.2)
Now, we assume a solution of the form a(x, t) = A(ξ)P(x), ξ = x - ct with A(ξ) → 0 as ξ → ∞ and P(x + 2πε) = P(x). Substitution into Equation 3.2 then gives
-c P(x) A'(\xi) = -P(x) A(\xi) + \int_{-\infty}^{\infty} w(x - y)\left[1 + K(y/\epsilon)\right] P(y)\, A(\xi - [x - y])\, dy.
(3.3)
Taking A(ξ) ~ e^{-λξ} and substituting into the above equation yields a nonlocal version of the Hill equation:
(1 + c\lambda)\, P(x) = \int_{-\infty}^{\infty} e^{\lambda[x - y]}\, w(x - y)\left[1 + K(y/\epsilon)\right] P(y)\, dy.
(3.4)
In order to determine the minimal wave speed, it is necessary to find a bounded periodic solution P(x) of Equation 3.4, which yields a corresponding dispersion relation c = c(λ), whose minimum with respect to λ can then be determined (assuming it exists). One way to obtain an approximate solution to Equation 3.4 is to use Fourier methods to derive an infinite matrix equation for the Fourier coefficients of the periodic function P(x), and then to numerically solve a finite truncated version of the matrix equation. This is the approach followed in [42]. The matrix equation takes the form
(1 + c\lambda)\, P_m = W(\lambda - im/\epsilon)\, P_m + W(\lambda - im/\epsilon) \sum_l K_l P_{m-l},
(3.5)
where K(x/ε) = Σn Kn e^{inx/ε} and P(x) = Σn Pn e^{inx/ε}. One finds that the mean velocity of a pulsating front increases with the period 2πε of the synaptic modulations [42]. This is illustrated in Figure 4, where we show space-time plots of a pulsating front for ε = 0.5 and ε = 0.8.
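A minimal sketch of the truncated matrix computation in Equation 3.5, assuming (as in Figure 4) a single-harmonic modulation so that the only nonzero Fourier coefficients are K±1 = α/2; selecting the eigenvalue with the largest real part as the branch continuing the homogeneous dispersion curve is our assumption. For α = 0 the construction must reduce to c(λ) = [W(λ) - 1]/λ.

```python
import numpy as np

W0, sigma = 1.2, 1.0

def W(z):
    # Equation 2.11, continued to complex argument
    return W0 * np.exp(z**2 * sigma**2 / 2)

def c_pulsating(lam, eps, alpha, M=8):
    # Truncated form of the matrix equation (Equation 3.5) with a
    # single-harmonic modulation: K_{+1} = K_{-1} = alpha/2 (an assumption).
    m = np.arange(-M, M + 1)
    D = W(lam - 1j * m / eps)        # diagonal factors W(lambda - im/eps)
    A = np.diag(D)
    for i in range(2 * M):
        A[i, i + 1] += D[i] * alpha / 2       # couples P_m to P_{m+1}
        A[i + 1, i] += D[i + 1] * alpha / 2   # couples P_m to P_{m-1}
    mu = np.linalg.eigvals(A)
    mu0 = mu[np.argmax(mu.real)]     # branch continuing from W(lam) at alpha = 0
    return (mu0.real - 1) / lam      # since 1 + c*lambda is an eigenvalue

lam = 0.52
c_hom = (W0 * np.exp(lam**2 * sigma**2 / 2) - 1) / lam  # Equation 2.12
c_mat = c_pulsating(lam, eps=0.5, alpha=0.0)
```

Minimizing the resulting c(λ) over λ for α > 0 would then give the mean speed of the pulsating front, as in [42].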
Figure 4

Space-time contour plots of a pulsating front. Space-time contour plots of a pulsating front solution of the neural field equation (Equation 3.2) with piecewise linear firing rate function (Equation 2.3), Gaussian weight distribution (Equation 2.6), and a 2πε-periodic modulation of the synaptic weights, K(x/ε) = cos(x/ε). (a) ε = 0.5 and (b) ε = 0.8. Other parameters are W0 = 1.2, σ = 1.0, and κ = 0.4.

In this section, we develop an alternative method for analyzing pulled fronts in heterogeneous neural fields, based upon the Hamilton-Jacobi dynamics of sharp interfaces, which is particularly applicable to slowly varying spatial modulations [10, 1619]. That is, we consider a heterogeneous version of the neural field equation (Equation 2.1) of the form
\frac{\partial a(x,t)}{\partial t} = -a(x,t) + F\left(\int_{-\infty}^{\infty} w(x - x')\, J(\epsilon x')\, a(x', t)\, dx'\right),
(3.6)

in which there is a slow (nonperiodic) spatial modulation J(εx) of the synaptic weight distribution with ε ≪ 1. The synaptic heterogeneity is assumed to occur on a longer spatial scale than the periodic-like microstructures associated with stimulus feature maps. Although we do not have a specific example of long wavelength modulations in mind, we conjecture that these might be associated with inter-area cortical connections. For example, it has been shown that heterogeneities arise as one approaches the V1/V2 border in the visual cortex, which has a number of effects including the generation of reflected waves [52]. It is not yet clear how sharp the transition across the V1/V2 border is.

The first step in the Hamilton-Jacobi method is to rescale space and time in Equation 3.6 according to t → t/ε and x → x/ε [10, 18, 19]:
\epsilon \frac{\partial a(x,t)}{\partial t} = -a(x,t) + F\left(\frac{1}{\epsilon}\int_{-\infty}^{\infty} w([x - x']/\epsilon)\, J(x')\, a(x', t)\, dx'\right).
(3.7)
Under this hyperbolic rescaling, the front region where the activity a(x, t) rapidly increases as x decreases from infinity becomes a step as ε → 0 (see Figure 2b). This motivates introducing the WKB approximation
a(x, t) \sim e^{-G(x,t)/\epsilon}
(3.8)
with G(x, t) > 0 for all x > x(t) and G(x(t), t) = 0. The point x(t) determines the location of the front, and c = ẋ(t). Substituting Equation 3.8 into Equation 3.7 gives
-\partial_t G(x, t) = -1 + \frac{1}{\epsilon}\int_{-\infty}^{\infty} w([x - x']/\epsilon)\, J(x')\, e^{-[G(x', t) - G(x, t)]/\epsilon}\, dx'.
(3.9)

We have used the fact that for x > x(t) and ε ≪ 1, the solution is in the leading edge of the front, so we can take F to be linear.

In order to simplify Equation 3.9, we use the method of steepest descents. First, we introduce the Fourier transform of the weight distribution w(x) according to
w(x) = \int_{-\infty}^{\infty} \tilde{w}(k)\, e^{ikx}\, \frac{dk}{2\pi}.
(3.10)
Substituting into Equation 3.9 and reversing the order of integration give
-\partial_t G(x, t) = -1 + \frac{1}{\epsilon}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \tilde{w}(k)\, J(x')\, e^{-S(k, x'; x, t)/\epsilon}\, dx'\, \frac{dk}{2\pi},
(3.11)
where
S(k, x'; x, t) = ik(x' - x) + G(x', t) - G(x, t).
(3.12)
Exploiting the fact that ε is small, we perform steepest descents with respect to the x' variable with (k, x, t) fixed. Let x' = z(k, t) denote the stationary point for which ∂S/∂x' = 0, which is given by the solution to the implicit equation
ik + \partial_{x'} G(x', t)\big|_{x' = z(k,t)} = 0.
(3.13)
Taylor expanding S about this point (assuming it is unique) gives to second order
S(k, x'; x, t) \approx S(k, z(k,t); x, t) + \frac{1}{2}\left.\frac{\partial^2 S}{\partial x'^2}\right|_{x' = z(k,t)} (x' - z(k,t))^2 = ik[z(k,t) - x] + G(z(k,t), t) - G(x, t) + \frac{1}{2}\,\partial_{xx} G(z(k,t), t)\, (x' - z(k,t))^2.
(3.14)
Substituting into Equation 3.11 and performing the resulting Gaussian integral with respect to x' yield the result
-\partial_t G(x, t) = -1 + \frac{1}{\epsilon}\int_{-\infty}^{\infty} \sqrt{\frac{2\pi\epsilon}{\partial_{xx} G(z(k,t), t)}}\; \tilde{w}(k)\, J(z(k,t))\, e^{-\left(ik[z(k,t) - x] + G(z(k,t), t) - G(x, t)\right)/\epsilon}\, \frac{dk}{2\pi}.
(3.15)
This can be rewritten in the form
-\partial_t G(x, t) = -1 + \frac{1}{\sqrt{2\pi\epsilon}}\int_{-\infty}^{\infty} \tilde{w}(k)\, J(z(k,t))\, e^{-\hat{S}(k; x, t)/\epsilon}\, dk,
(3.16)
where
\hat{S}(k; x, t) = ik[z(k,t) - x] + G(z(k,t), t) - G(x, t) + \frac{\epsilon}{2}\ln \partial_{xx} G(z(k,t), t).
(3.17)
The integral over k can also be evaluated using steepest descents. Thus, we Taylor expand Ŝ to second order about the stationary point k = k(x, t), which is the solution to the equation
0 = \frac{\partial \hat{S}}{\partial k} = i[z(k,t) - x] + \frac{\partial z(k,t)}{\partial k}\left[ik + \partial_x G(z(k,t), t)\right] + \frac{\epsilon}{2}\,\frac{\partial_{xxx} G(z(k,t), t)}{\partial_{xx} G(z(k,t), t)}\,\frac{\partial z(k,t)}{\partial k}.
(3.18)
It follows from Equations 3.13 and 3.18 that z(k(x, t), t) = x + O(ε), so
k(x, t) = i\,\partial_x G(x, t) + O(\epsilon).
(3.19)
Moreover,
\hat{S}(k; x, t) \approx \hat{S}(k(x,t); x, t) + \frac{1}{2}\left.\frac{\partial^2 \hat{S}}{\partial k^2}\right|_{k = k(x,t)} (k - k(x,t))^2.
(3.20)
Substituting into Equation 3.16 and performing the Gaussian integral with respect to k give to leading order in ε
-\partial_t G(x, t) = -1 + \frac{1}{\sqrt{i\,\partial_{xx} G(x, t)\, \partial_k z(k(x,t), t)}}\;\tilde{w}(k(x,t))\, J(x).
(3.21)
Finally, setting x′ = z(k, t) in Equation 3.13 and differentiating with respect to k shows that ∂xx G(z(k, t), t) ∂k z(k, t) = -i; hence,
-\partial_t G(x, t) = -1 + \tilde{w}(i\,\partial_x G(x, t))\, J(x).
(3.22)
Equation 3.22 is formally equivalent to the Hamilton-Jacobi equation
\partial_t G + H(\partial_x G, x) = 0
(3.23)
with corresponding Hamiltonian
H(p, x) = -1 + \tilde{w}(ip)\, J(x),
(3.24)
where p = ∂x G is interpreted as the conjugate momentum of x, and
\tilde{w}(ip) = \hat{W}(p) + \hat{W}(-p) \equiv W(p)
(3.25)
with Ŵ(p) the Laplace transform of w(x). It follows that the Hamilton-Jacobi equation (Equation 3.23) can be solved in terms of the Hamilton equations
\frac{dx}{ds} = \frac{\partial H}{\partial p} = J(x)\, W'(p) = J(x)\left[\hat{W}'(p) - \hat{W}'(-p)\right]
(3.26)
\frac{dp}{ds} = -\frac{\partial H}{\partial x} = -J'(x)\, W(p).
(3.27)
Let X(s; x, t) and P(s; x, t) denote the solution with X(0; x, t) = 0 and X(t; x, t) = x. We can then determine G(x, t) according to
G(x, t) = -E(x, t)\, t + \int_0^t P(s; x, t)\, \dot{X}(s; x, t)\, ds.
(3.28)
Here,
E(x, t) = H(P(s; x, t), X(s; x, t)),
(3.29)

which is independent of s due to conservation of 'energy,' that is, the Hamiltonian is not an explicit function of time.
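The structure of Equations 3.26 to 3.29 can be illustrated numerically. The sketch below integrates the Hamilton equations with a classical RK4 step for the Gaussian kernel, for which W(p) = W0 e^{p²σ²/2} (Equation 2.11), together with a hypothetical smooth modulation J(x) = 1 + β tanh(x) chosen purely for illustration; it then checks that the 'energy' E = H(P, X) is conserved along the trajectory.

```python
import numpy as np

W0, sigma, beta = 1.2, 1.0, 0.1
J  = lambda x: 1.0 + beta * np.tanh(x)   # hypothetical slow modulation
Jp = lambda x: beta / np.cosh(x)**2      # J'(x)
Wf = lambda p: W0 * np.exp(p**2 * sigma**2 / 2)                  # W(p)
Wp = lambda p: W0 * sigma**2 * p * np.exp(p**2 * sigma**2 / 2)   # W'(p)

def H(x, p):
    # Hamiltonian of Equation 3.24, using w~(ip) = W(p)
    return -1.0 + J(x) * Wf(p)

def rhs(y):
    x, p = y
    # Hamilton equations 3.26 and 3.27
    return np.array([J(x) * Wp(p), -Jp(x) * Wf(p)])

y = np.array([0.0, 0.5])                 # initial point (x, p) in phase space
ds, nsteps = 0.01, 1000
E0, drift = H(*y), 0.0
for _ in range(nsteps):                  # classical fourth-order Runge-Kutta
    k1 = rhs(y)
    k2 = rhs(y + ds / 2 * k1)
    k3 = rhs(y + ds / 2 * k2)
    k4 = rhs(y + ds * k3)
    y = y + ds / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    drift = max(drift, abs(H(*y) - E0))
```

Because the Hamiltonian has no explicit s dependence, H(P, X) stays constant up to the integrator error, and the position coordinate increases monotonically here since ∂H/∂p > 0 for p > 0.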

4 Calculation of wave speed

4.1 Homogeneous case: J(x) ≡ 1

Let us begin by rederiving the wave speed for a homogeneous neural field (see Section 2) by setting J(x) ≡ 1. In this case, dp/ds = 0, so p = λ0 independently of s. Hence, x(s) = xs/t, which implies that
W'(\lambda_0) = \frac{x}{t}.
(4.1)
By construction, the location x(t) of the front at time t is determined by the equation G(x(t), t) = 0. Differentiating with respect to t shows that ẋ ∂x G + ∂t G = 0, or
x ˙ = - t G x G = - 1 + W ( λ 0 ) λ 0 .
(4.2)
It follows that λ0 is given by the minimum of the function
$$c(\lambda) = \frac{-1 + W(\lambda)}{\lambda}$$
(4.3)

and c0 = c(λ0). This recovers the result of Section 2. Thus, in the case of a Gaussian weight distribution, λ0 is related to c0 according to Equation 2.15.
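For Gaussian weights the minimization of c(λ) is easily done numerically. The closed form W(λ) = W0 exp(σ²λ²/2) used below is an assumption consistent with the Gaussian weight distribution of Section 2, and the golden-section search is simply one convenient bracketing method.

```python
import math

# Minimize c(lambda) = (-1 + W(lambda)) / lambda by golden-section search.
# Assumed closed form for Gaussian weights: W(lambda) = W0 * exp(sigma**2 *
# lambda**2 / 2), with W0 = 1.2 and sigma = 1 as in Section 4.3.
W0, sigma = 1.2, 1.0

def c(lam):
    return (-1.0 + W0 * math.exp(0.5 * (sigma * lam) ** 2)) / lam

def golden_min(f, a, b, tol=1e-10):
    # golden-section search for the minimum of a unimodal function on [a, b]
    g = (math.sqrt(5.0) - 1.0) / 2.0
    while b - a > tol:
        x1, x2 = b - g * (b - a), a + g * (b - a)
        if f(x1) < f(x2):
            b = x2
        else:
            a = x1
    return 0.5 * (a + b)

lam0 = golden_min(c, 0.05, 3.0)     # minimizer lambda_0
c0 = c(lam0)                        # minimal (pulled) wave speed c_0
```

With these parameter values the search gives λ0 ≈ 0.52 and c0 ≈ 0.72.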

4.2 Synaptic heterogeneity: J(x) = 1 + βf(x), 0 < β ≪ 1

Suppose that there exists a small amplitude, slow modulation of the synaptic weights J(x) = 1 + βf(x) with β ≪ 1. We can then obtain an approximate solution of Hamilton's equations (Equations 3.26 and 3.27) and the corresponding wave speed using regular perturbation theory along analogous lines to a previous study of the F-KPP equation [10]. We introduce the perturbation expansions
$$x(s) = x_0(s) + \beta x_1(s) + O(\beta^2), \qquad p(s) = p_0(s) + \beta p_1(s) + O(\beta^2)$$
(4.4)
and substitute into Equations 3.26 and 3.27. Taylor expanding the nonlinear function f(x) about $x_0$ and $W(p) = \hat{W}(p) + \hat{W}(-p)$ about $p_0$ then leads to a hierarchy of equations, the first two of which are
$$\dot{p}_0(s) = 0, \qquad \dot{x}_0(s) = W'(p_0),$$
(4.5)
and
$$\dot{p}_1(s) = -f'(x_0)\,W(p_0), \qquad \dot{x}_1(s) = W''(p_0)\,p_1(s) + f(x_0)\,W'(p_0).$$
(4.6)
These are supplemented by the Cauchy conditions $x_0(0) = 0$, $x_0(t) = x$, and $x_n(0) = x_n(t) = 0$ for all integers n ≥ 1. Equations 4.5 have solutions of the form
$$p_0(s) = \lambda, \qquad x_0(s) = W'(\lambda)\,s + B_0$$
(4.7)
with λ and $B_0$ independent of s. Imposing the Cauchy data then implies that $B_0 = 0$ and that λ satisfies the equation
$$W'(\lambda) = \frac{x}{t}.$$
(4.8)
At the next order, we have the solutions
$$p_1(s) = -\frac{W(\lambda)\,t}{x}\,f(xs/t) + A_1,$$
(4.9)
$$x_1(s) = -\frac{W''(\lambda)\,W(\lambda)\,t^2}{x^2}\int_0^{xs/t} f(y)\,dy + \int_0^{xs/t} f(y)\,dy + W''(\lambda)\,A_1 s + B_1,$$
(4.10)
with A1 and B1 independent of s. Imposing the Cauchy data then implies that B1 = 0 and
$$A_1 = A_1(x,t) = \frac{W(\lambda)\,t}{x^2}\int_0^x f(y)\,dy - \frac{1}{t\,W''(\lambda)}\int_0^x f(y)\,dy.$$
(4.11)
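As a consistency check, the expression for $A_1$ is exactly what is needed to enforce $x_1(t) = 0$ in Equation 4.10. The snippet below verifies this cancellation for an illustrative modulation f(y) = sin(y) and arbitrary stand-in values for W(λ) and W″(λ).

```python
import math

# Verify the Cauchy condition x1(t) = 0: evaluating Equation 4.10 at s = t
# with A1 given by Equation 4.11 (and B1 = 0) should vanish identically.
# f(y) = sin(y) and the stand-in values Wl, Wpp for W(lambda), W''(lambda)
# are illustrative assumptions.
Wl, Wpp = 1.5, 2.0
x, t = 4.0, 7.0

def intf(u):                 # integral of f(y) = sin(y) from 0 to u
    return 1.0 - math.cos(u)

A1 = (Wl * t / x ** 2) * intf(x) - intf(x) / (Wpp * t)
x1_at_t = (-Wpp * Wl * (t ** 2 / x ** 2) * intf(x)
           + intf(x) + Wpp * A1 * t)       # Equation 4.10 at s = t, B1 = 0
```

The cancellation is exact, independent of the particular values chosen.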
Given these solutions, the energy function E(x, t) is
$$E(x,t) = -1 + \left[1 + \beta f(x_0 + \beta x_1 + \cdots)\right] W(\lambda + \beta p_1 + \cdots) = -1 + W(\lambda) + \beta\left[W'(\lambda)\,p_1(s) + f(x_0(s))\,W(\lambda)\right] + O(\beta^2).$$
(4.12)
Substituting for $x_0(s)$ and $p_1(s)$ and using the condition $W'(\lambda) = x/t$, we find that
$$E(x,t) = -1 + W(\lambda) + \beta\,\frac{x}{t}\,A_1(x,t) + O(\beta^2),$$
(4.13)
which is independent of s as expected. Similarly,
$$\begin{aligned}\int_0^t p(s)\,\dot{x}(s)\,ds &= \lambda x + \beta\,W'(\lambda)\int_0^t p_1(s)\,ds + O(\beta^2)\\ &= \lambda x + \beta\,\frac{W'(\lambda)}{W''(\lambda)}\int_0^t \left[\dot{x}_1(s) - W'(\lambda)\,f(W'(\lambda)s)\right]ds + O(\beta^2)\\ &= \lambda x - \beta\,\frac{W'(\lambda)}{W''(\lambda)}\int_0^x f(y)\,dy + O(\beta^2).\end{aligned}$$
(4.14)
Hence, to first order in β,
$$G(x,t) = t - W(\lambda)\,t + \lambda x - \beta\,\frac{W(\lambda)\,t}{x}\int_0^x f(y)\,dy.$$
(4.15)
We can now determine the wave speed c by imposing the condition G(x(t), t) = 0 and performing the perturbation expansions $x(t) = x_0(t) + \beta x_1(t) + O(\beta^2)$ and $\lambda = \lambda_0 + \beta\lambda_1 + O(\beta^2)$. Substituting into Equation 4.15 and collecting terms at $O(1)$ and $O(\beta)$ leads to the following result:
$$x(t) = c_0 t + \beta\,\frac{W(\lambda_0)}{c_0\lambda_0}\int_0^{c_0 t} f(y)\,dy + O(\beta^2).$$
(4.16)
Here, c0 is the wave speed of the homogeneous neural field (β = 0), which is given by c0 = c(λ0) with λ0 obtained by minimizing the function c(λ) defined by Equation 4.3 (see Equation 2.15). Finally, differentiating both sides with respect to t and inverting the hyperbolic scaling yield
$$c \equiv \dot{x}(t) = c_0 + \beta\,\frac{W(\lambda_0)}{\lambda_0}\,f(\varepsilon c_0 t) + O(\beta^2).$$
(4.17)
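The passage from Equation 4.16 to Equation 4.17 is just differentiation with respect to t, followed by undoing the hyperbolic scaling (which restores the slow argument εc0t). The differentiation step can be confirmed by finite differences; the modulation f(y) = sin(y) and all parameter values below are illustrative stand-ins.

```python
import math

# Finite-difference check that differentiating Equation 4.16 with respect to
# t reproduces the speed formula of Equation 4.17 (before the hyperbolic
# scaling is inverted).  f(y) = sin(y); parameters are illustrative.
c0, lam0, Wl, beta = 0.72, 0.52, 1.37, 0.05

def x_of_t(t):   # Equation 4.16 with f = sin, so the integral is 1 - cos
    return c0 * t + beta * (Wl / (c0 * lam0)) * (1.0 - math.cos(c0 * t))

def speed(t):    # c0 + beta * (W(lam0) / lam0) * f(c0 t)
    return c0 + beta * (Wl / lam0) * math.sin(c0 * t)

t, h = 3.0, 1e-5
fd_speed = (x_of_t(t + h) - x_of_t(t - h)) / (2.0 * h)
mismatch = abs(fd_speed - speed(t))
```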

4.3 Numerical results

In order to compare our analytical results with computer simulations, we numerically solve a discretized version of the neural field equation (Equation 3.6) using a direct Euler scheme. We take −L ≤ x ≤ L with free boundary conditions and an initial condition given by a steep sigmoid,
$$a(x,0) = \frac{0.5}{1 + \exp(\eta(x-l))},$$
(4.18)
with η = 5, where l determines the approximate initial position of the front. For concreteness, L = 100 and l = 10. Space and time units are fixed by setting the range of synaptic weights σ = 1 and the time constant τ = 1. In Figure 5a, we show snapshots of a pulled front in the case of a homogeneous network with Gaussian weights (Equation 2.6) and piecewise linear firing rate function (Equation 2.3). A corresponding space-time plot is given in Figure 5b, which illustrates that the speed of the front asymptotically approaches the calculated minimal wave speed c0. (Note that pulled fronts take an extremely long time to approach the minimal wave speed at high levels of numerical accuracy since the asymptotics are algebraic rather than exponential in time [53].) In Figures 6 and 7, we plot the corresponding results in the case of an inhomogeneous network. For the sake of illustration, we choose the synaptic heterogeneity to be a linear function of displacement, that is, J(x) = 1 + ε(x − l), where we have set β = ε. Equation 4.16 implies that
Figure 5

Propagating front in a homogeneous network. Propagating front in a homogeneous network with Gaussian weights (Equation 2.6) and piecewise linear rate function (Equation 2.3). Parameter values are W0 = 1.2, σ = 1, and κ = 0.4. (a) Snapshots of wave profile at time intervals of width Δt = 5 from t = 10 to t = 40. (b) Space-time contour plot. Wave speed asymptotically approaches the minimum c0 of the velocity dispersion curve given by Equation 2.12.

Figure 6

Propagating front in a network with a linear heterogeneity (ε² = 0.005). Propagating front in a network with a linear heterogeneity in the synaptic weights, J(x) = 1 + ε(x − l), l = 10, and ε² = 0.005. Other parameters as in Figure 5. (a) Snapshots of wave profile at time intervals of width Δt = 5 from t = 10 to t = 40. (b) Space-time contour plot. Wave speed increases approximately linearly with time, so the position x(t) of the front evolves according to a downward parabola. Theoretical curve based on the perturbation calculation is shown by the solid curve. The trajectory of the front in the corresponding homogeneous case (see Figure 5b) is indicated by the dashed curve.

Figure 7

Propagating front in a network with a linear heterogeneity (ε² = 0.01). Propagating front in a network with a linear heterogeneity in the synaptic weights, J(x) = 1 + ε(x − l), l = 10, and ε² = 0.01. Other parameters as in Figure 5. (a) Snapshots of wave profile at time intervals of width Δt = 5 from t = 10 to t = 40. (b) Space-time contour plot. Wave speed increases approximately linearly with time, so the position x(t) of the front evolves according to a downward parabola. Theoretical curve based on the perturbation calculation is shown by the solid curve. The trajectory of the front in the corresponding homogeneous case (see Figure 5b) is indicated by the dashed curve.

$$x(t) = l + c_0 t + \varepsilon^2\,\frac{W(\lambda_0)}{2 c_0 \lambda_0}\left[(c_0 t)^2 - 2 c_0 l t\right] = l + \left(c_0 - \varepsilon^2\,\frac{l\,(c_0\lambda_0 + 1)}{\lambda_0}\right)t + \varepsilon^2\,\frac{c_0\,(c_0\lambda_0 + 1)}{2\lambda_0}\,t^2,$$
(4.19)

where we have used Equation 4.3. We assume that the initial position of the front is x(0) = l. Hence, our perturbation theory predicts that a linearly increasing modulation in synaptic weights results in the leading edge of the front tracing out a downward parabola in a space-time plot for times $t \ll O(1/\varepsilon^2)$. This is consistent with our numerical simulations for ε² = 0.005, as can be seen in the space-time plot of Figure 6b. However, our approximation for the time-dependent speed breaks down when $t = O(1/\varepsilon^2)$, as illustrated in Figure 7b for ε² = 0.01.
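A minimal version of the simulations described above can be sketched as follows, here for the homogeneous network of Figure 5. The Gaussian kernel truncation, the saturating piecewise-linear rate F(a) = min(a, κ), and the level-set threshold used to track the front are our own assumptions; only the parameter values W0 = 1.2, σ = 1, κ = 0.4, η = 5, and l = 10 follow the text.

```python
import math

# Direct Euler simulation of the deterministic neural field, tracking the
# invasion front by a level set.  Assumed ingredients: rate F(a) = min(a,
# kappa), Gaussian kernel truncated at 5*sigma, tracking level 0.5*kappa.
W0, sigma, kappa, eta, l0 = 1.2, 1.0, 0.4, 5.0, 10.0
Ldom, N = 50.0, 200
dx = Ldom / N
xs = [i * dx for i in range(N)]
a = [0.5 / (1.0 + math.exp(eta * (x - l0))) for x in xs]   # steep sigmoid

cut = int(5 * sigma / dx)
kern = [W0 / (math.sqrt(2 * math.pi) * sigma)
        * math.exp(-0.5 * (k * dx / sigma) ** 2) for k in range(-cut, cut + 1)]

def F(u):
    return min(u, kappa)

def front_position(prof):
    z = 0.5 * kappa                  # level set used to locate the front
    for i in range(N - 1):
        if prof[i] >= z > prof[i + 1]:
            return xs[i] + dx * (prof[i] - z) / (prof[i] - prof[i + 1])
    return None

dt, steps = 0.1, 250                 # integrate up to t = 25
for _ in range(steps):
    new = []
    for m in range(N):
        s = 0.0
        for k in range(-cut, cut + 1):
            n = m + k
            if 0 <= n < N:
                s += kern[k + cut] * a[n]
        new.append(a[m] + dt * (-a[m] + F(dx * s)))
    a = new

x_front = front_position(a)          # advances well beyond x = l
```

The measured displacement corresponds to a speed somewhat below, but approaching, the minimal wave speed c0, consistent with the slow algebraic relaxation of pulled fronts noted above.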

5 Path integral formulation of a stochastic neural field

We now turn to the second main topic of this paper, namely noise-induced transitions to extinction in the presence of a zero absorbing state. To this end, let us modify the scalar neural field equation (Equation 2.1) by adding extrinsic multiplicative noise. The resulting Langevin equation is
$$dA = \left[-A + F\left(\int_0^L w(x-y)\,A(y,t)\,dy\right)\right]dt + \varepsilon\,g(A)\,dW(x,t),$$
(5.1)
for 0 ≤ t ≤ T and initial condition A(x, 0) = Φ(x). We take dW (x, t) to represent an independent Wiener process such that
$$\langle dW(x,t)\rangle = 0, \qquad \langle dW(x,t)\,dW(x',t')\rangle = 2L\,C([x-x']/\lambda)\,\delta(t-t')\,dt\,dt'.$$
(5.2)

Here, λ is the spatial correlation length of the noise such that C(x/λ) → δ(x) in the limit λ → 0, and ε determines the strength of the noise, which is assumed to be weak. We will assume that g(0) = 0, so the zero-activity state A = 0 is an absorbing state of the system; any noise-induced transition to this state would then result in the extinction of all activity. An example of multiplicative noise that vanishes at A = 0 is obtained by carrying out a diffusion approximation of the neural master equation previously introduced by Buice et al. [46, 54] (see Bressloff [55, 56]). Before considering the problem of extinction, a more immediate question is how multiplicative noise affects the propagation of the invasive pulled fronts analyzed in Section 2. Since this particular issue is addressed in some detail elsewhere [49], we only summarize the main findings here.
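The absorbing property that follows from g(0) = 0 is easy to see in a direct Euler-Maruyama discretization of Equation 5.1: if the field starts at A ≡ 0, the drift and the noise term both vanish identically, so every realization stays at zero. In the sketch below, the rate function F(a) = min(a, κ) and spatially uncorrelated noise increments (the λ → 0 limit of the correlator) are illustrative assumptions.

```python
import math, random

# Euler-Maruyama sketch of the Langevin equation (5.1) with g(A) = A,
# illustrating that A = 0 is an absorbing state of the stochastic dynamics.
# Assumptions: F(a) = min(a, kappa); independent Gaussian increments per
# grid point (vanishing spatial correlation length).
random.seed(1)
W0, kappa, eps = 1.2, 0.4, 0.05
N, dx, dt = 50, 0.5, 0.01

kern = [W0 / math.sqrt(2.0 * math.pi) * math.exp(-0.5 * (k * dx) ** 2)
        for k in range(-10, 11)]

def F(u):
    return min(u, kappa)

def g(a):                     # multiplicative noise factor with g(0) = 0
    return a

def step(A):
    out = []
    for m in range(N):
        conv = dx * sum(kern[k + 10] * A[m + k]
                        for k in range(-10, 11) if 0 <= m + k < N)
        dW = random.gauss(0.0, math.sqrt(dt))
        out.append(A[m] + dt * (-A[m] + F(conv)) + eps * g(A[m]) * dW)
    return out

A = [0.0] * N                 # start in the zero-activity state
for _ in range(200):
    A = step(A)

still_extinct = max(abs(v) for v in A) == 0.0   # no realization can escape
```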

In the case of the F-KPP equation with multiplicative noise, it has previously been shown that the stochastic wandering of a pulled front about its mean position is subdiffusive with $\mathrm{var}\,\Delta(t) \sim t^{1/2}$, in contrast to the diffusive wandering of a front propagating into a metastable state, for which $\mathrm{var}\,\Delta(t) \sim t$ [11]. Such scaling is a consequence of the slow asymptotic relaxation of the leading edge of the deterministic pulled front. Since pulled front solutions of the neural field equation (Equation 2.1) exhibit similar dynamics, one expects subdiffusive wandering of these fronts in the presence of multiplicative noise as well. This is indeed found to be the case [49]. More specifically, the multiplicative noise term in Equation 5.1 generates two distinct phenomena that occur on different time scales: a diffusive-like displacement of the front from its uniformly translating position at long time scales, and fluctuations in the front profile around its instantaneous position at short time scales. This can be captured by expressing the solution A of Equation 5.1 as a combination of a fixed wave profile $A_0$ that is displaced by an amount Δ(t) from its uniformly translating mean position ξ = x − ct, and a time-dependent fluctuation Φ in the front shape about the instantaneous position of the front:
$$A(x,t) = A_0(\xi - \Delta(t)) + \varepsilon\,\Phi(\xi - \Delta(t), t).$$
(5.3)
Here, c denotes the mean speed of the front. (In the Stratonovich version of multiplicative noise, there is an ε-dependent shift in the speed c.) Numerical simulations of Equation 5.1 with F given by the piecewise linear firing rate (Equation 2.3) and g(A) = A are consistent with subdiffusive wandering of the front, as illustrated in Figure 8. The variance $\sigma_X^2(t)$ is obtained by averaging over level sets [49]. That is, we determine the positions $X_z(t)$ such that $A(X_z(t), t) = z$ for various level set values $z \in (0.5\kappa, 1.3\kappa)$ and then define the mean location to be $\bar{X}(t) = E[X_z(t)]$, where the expectation is first taken with respect to the sampled values z and then averaged over N trials. The corresponding variance is given by $\sigma_X^2(t) = E[(X_z(t) - \bar{X}(t))^2]$. It can be seen that the variance exhibits subdiffusive behavior over long time scales.
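The level-set construction used to define $X_z(t)$ amounts to locating threshold crossings of the profile by linear interpolation on the simulation grid. A sketch, tested on an exact sigmoid profile (our choice of test case, not the simulated front) whose crossing points are known in closed form:

```python
import math

# Level-set estimate of a front position: find X_z with A(X_z) = z by linear
# interpolation between grid points.
def level_crossing(xs, prof, z):
    """Leftmost downward crossing of the level z."""
    for i in range(len(xs) - 1):
        if prof[i] >= z > prof[i + 1]:
            return xs[i] + (xs[i + 1] - xs[i]) * (prof[i] - z) / (prof[i] - prof[i + 1])
    return None

xs = [0.05 * i for i in range(1200)]                       # grid on [0, 60)
X0 = 30.0
prof = [1.0 / (1.0 + math.exp(2.0 * (x - X0))) for x in xs]  # front at X0

z = 0.3
X_num = level_crossing(xs, prof, z)
X_exact = X0 + 0.5 * math.log((1.0 - z) / z)   # solves 1/(1+e^{2(x-X0)}) = z
```

Averaging such crossing positions over a range of levels z and over trials then yields the mean location and variance plotted in Figure 8.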
Figure 8

Plot of variance in position of a stochastic pulled front. Plot of variance $\sigma_X^2(t)$ of the position of a stochastic pulled front as a function of time for noise amplitude ε = 0.005 and g(A) = A. Other parameters are W0 = 1.2, κ = 0.8, and σ = 1.

In order to develop a framework to study rare extinction events in the weak noise limit, we construct a path integral representation of the stochastic Langevin equation (Equation 5.1). We will assume that the multiplicative noise is of Ito form [57]. For reviews on path integral methods for stochastic differential equations, see [58–60]. Discretizing both space and time with $A_{i,m} = A(m\Delta d, i\Delta t)$, $W_{i,m} = \Delta d^{-1/2}\,W(m\Delta d, i\Delta t)$, and $w_{mn} = w(m\Delta d - n\Delta d)$ gives
$$A_{i+1,m} - A_{i,m} = \left[-A_{i,m} + F\left(\Delta d \sum_n w_{mn} A_{i,n}\right)\right]\Delta t + \varepsilon\sqrt{\frac{\Delta t}{\Delta d}}\,g(A_{i,m})\,dW_{i,m},$$
with i = 0, 1, ..., N, T = NΔt, and
$$\langle dW_{i,m}\rangle = 0, \qquad \langle dW_{i,m}\,dW_{i',m'}\rangle = L\,\delta_{i,i'}\,\delta_{m,m'}.$$
(5.4)
Let A and W denote the vectors with components $A_{i,m}$ and $W_{i,m}$, respectively. Formally, the conditional probability density function for A given a particular realization of the stochastic process W and initial condition $A_{0,m} = \Phi_m$ is
$$P[A|W] = \prod_{m}\prod_{i=0}^{N} \delta\!\left(A_{i+1,m} - A_{i,m} + \left[A_{i,m} - F\left(\Delta d\sum_n w_{mn} A_{i,n}\right)\right]\Delta t - \varepsilon\sqrt{\frac{\Delta t}{\Delta d}}\,g(A_{i,m})\,dW_{i,m}\right).$$
(5.5)
Inserting the Fourier representation of the Dirac delta function,
$$\delta(A_{i,m}) = \frac{1}{2\pi}\int e^{-i\tilde{A}_{i,m} A_{i,m}}\,d\tilde{A}_{i,m},$$
(5.6)
gives
$$\begin{aligned}P[A|W] = \int \prod_n \prod_{j=0}^{N} \frac{d\tilde{A}_{j,n}}{2\pi}\; &\exp\left(-i\sum_{i,m}\tilde{A}_{i,m}\left(A_{i+1,m} - A_{i,m}\right)\right)\\ &\times \exp\left(-i\sum_{i,m}\tilde{A}_{i,m}\left[A_{i,m} - F\left(\Delta d\sum_n w_{mn} A_{i,n}\right)\right]\Delta t\right)\\ &\times \exp\left(i\sum_{i,m}\tilde{A}_{i,m}\,\varepsilon\sqrt{\frac{\Delta t}{\Delta d}}\,g(A_{i,m})\,dW_{i,m}\right).\end{aligned}$$
(5.7)
For a Gaussian white noise process, $W_{i,m}$ has the probability density function $P(W_{i,m}) = (2\pi L)^{-1/2}\,e^{-W_{i,m}^2/2L}$. Hence, setting
$$P[A] = \int P[A|W] \prod_{j,n} P(W_{j,n})\,dW_{j,n}$$
and performing the integration with respect to W j, n by completing the square, we obtain the result
$$\begin{aligned}P[A] = \int \prod_n \prod_{j=0}^{N} \frac{d\tilde{A}_{j,n}}{2\pi}\; &\exp\left(-i\sum_{i,m}\tilde{A}_{i,m}\left(A_{i+1,m} - A_{i,m}\right)\right)\\ &\times \exp\left(-i\sum_{i,m}\tilde{A}_{i,m}\left[A_{i,m} - F\left(\Delta d\sum_n w_{mn} A_{i,n}\right)\right]\Delta t\right)\\ &\times \exp\left(\frac{\varepsilon^2 L}{2}\sum_{i,m}(i\tilde{A}_{i,m})^2\,g^2(A_{i,m})\,\frac{\Delta t}{\Delta d}\right).\end{aligned}$$
(5.8)
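The step from Equation 5.7 to Equation 5.8 is the Gaussian identity ⟨exp(iaW)⟩ = exp(−a²L/2) for W ~ N(0, L), applied with a proportional to Ãg. This can be checked by direct quadrature; the values of a and L below are arbitrary illustrations.

```python
import math

# Quadrature check of the completing-the-square step: averaging exp(i*a*W)
# over W ~ N(0, L) gives exp(-a**2 * L / 2).  Only the real (cosine) part
# survives by symmetry; trapezoidal quadrature on a truncated range.
Lvar, a = 2.0, 0.7
h, X = 0.01, 15.0
n = int(2 * X / h)

total = 0.0
for i in range(n + 1):
    W = -X + i * h
    val = (math.exp(-W * W / (2.0 * Lvar)) / math.sqrt(2.0 * math.pi * Lvar)
           * math.cos(a * W))
    total += val if 0 < i < n else 0.5 * val
total *= h

exact = math.exp(-a * a * Lvar / 2.0)
```

The trapezoidal rule is spectrally accurate for rapidly decaying smooth integrands, so the two values agree to high precision.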
Finally, taking the continuum limits Δd → 0 and Δt → 0, N → ∞ for fixed T, with $A_{i,m} \to A(x,t)$ and $i\tilde{A}_{i,m}/\Delta d \to \tilde{A}(x,t)$, gives the following path integral representation of a stochastic neural field:
P [ A ] =  D A ̃ e - S [ A , A ̃ ]
(5.9)
with
$$S[A,\tilde{A}] = \int_0^L dx \int_0^T dt\; \tilde{A}(x,t)\left[A_t(x,t) + A(x,t) - F\left(\int w(x-y)\,A(y,t)\,dy\right) - \frac{\varepsilon^2 L}{2}\,\tilde{A}(x,t)\,g^2(A(x,t))\right].$$
(5.10)
Given the probability functional P[A], we can write down path integral representations of various moments of the stochastic field A. For example, the mean field is
$$\langle A(x,t)\rangle = \int \mathcal{D}A\,\mathcal{D}\tilde{A}\; A(x,t)\,e^{-S[A,\tilde{A}]},$$
(5.11)
whereas the two-point correlation is
$$\langle A(x,t)\,A(x',t')\rangle = \int \mathcal{D}A\,\mathcal{D}\tilde{A}\; A(x,t)\,A(x',t')\,e^{-S[A,\tilde{A}]}.$$
(5.12)
In particular, introducing the conditional probability p[A1, t|A0, 0] for the initial state A(x, 0) = A0(x) to be in the final state A(x, t) = A1(x) at time t, we have
$$p[A_1,t|A_0,0] \equiv \left\langle \prod_x \delta\left(A_1(x) - A(x,t)\right)\right\rangle = \int_{A(0)=A_0}^{A(t)=A_1} \mathcal{D}A\,\mathcal{D}\tilde{A}\; e^{-S[A,\tilde{A}]}.$$
(5.13)

6 Hamilton-Jacobi dynamics and population extinction in the weak-noise limit

We now use the path integral representation of the conditional probability (Equation 5.13) in order to estimate the time to extinction of a metastable nontrivial state. We proceed along analogous lines to our previous study of a neural master equation with x-independent steady states [55, 56], where the effective Hamiltonian dynamical system obtained by extremizing the associated path integral action can be used to determine the most probable or optimal path to the zero absorbing state. Alternatively, one could consider a WKB approximation of solutions to the corresponding functional Fokker-Planck equation or master equation, as previously applied to reaction-diffusion systems [21, 24–27]. The connection between the two approaches is discussed in [21].

The first step is to perform the rescaling $\tilde{A} \to \tilde{A}/(L\varepsilon^2)$ so that Equation 5.13 becomes
$$p[A_1,t|A_0,0] = \int_{A(0)=A_0}^{A(t)=A_1} \mathcal{D}A\,\mathcal{D}\tilde{A}\; e^{-S[A,\tilde{A}]/(L\varepsilon^2)}.$$
(6.1)
In the limit ε → 0, the path integral is dominated by the 'classical' solutions q(x, t), p(x, t), which extremize the exponent or action of the generating functional:
$$\left.\frac{\delta S[A,\tilde{A}]}{\delta A(x,t)}\right|_{\tilde{A}=p,\,A=q} = 0, \qquad \left.\frac{\delta S[A,\tilde{A}]}{\delta \tilde{A}(x,t)}\right|_{\tilde{A}=p,\,A=q} = 0.$$
(6.2)
These equations reduce to the form
$$\frac{\partial q(x,t)}{\partial t} = \frac{\delta H[q,p]}{\delta p(x,t)}, \qquad \frac{\partial p(x,t)}{\partial t} = -\frac{\delta H[q,p]}{\delta q(x,t)},$$
(6.3)
where we have set
$$S[q,p] = \int_0^T dt \left[\int dx\, p(x,t)\,\dot{q}(x,t) - H[q,p]\right],$$
such that
$$H[q,p] = \int dx\, p(x,t)\left[-q(x,t) + F\left(\int w(x-y)\,q(y,t)\,dy\right) + \frac{1}{2}\,p(x,t)\,g^2(q(x,t))\right].$$
(6.4)
Equations 6.3 take the form of a Hamiltonian dynamical system in which q is a 'coordinate' variable, p is its 'conjugate momentum', and H is the Hamiltonian functional. Substituting for H leads to the explicit Hamilton equations:
$$\frac{\partial q(x,t)}{\partial t} = -q(x,t) + F\left(\int w(x-y)\,q(y,t)\,dy\right) + p(x,t)\,g^2(q(x,t)),$$
(6.5)
$$\frac{\partial p(x,t)}{\partial t} = p(x,t) - \int F'\left(\int w(y-z)\,q(z,t)\,dz\right) w(y-x)\,p(y,t)\,dy - p^2(x,t)\,g'(q(x,t))\,g(q(x,t)).$$
(6.6)

It can be shown that q(x, t) and p(x, t) satisfy the same boundary conditions as the physical neural field A(x, t) [20]. Thus, in the case of periodic boundary conditions, q(x + L, t) = q(x, t) and p(x + L, t) = p(x, t). It also follows from the Hamiltonian structure of Equations 6.5 and 6.6 that there is an integral of motion given by the conserved 'energy' E = H[q, p].

The particular form of H implies that one type of solution is the zero-energy solution p(x, t) ≡ 0, for which q(x, t) satisfies the deterministic scalar neural field equation (Equation 2.1). In the t → ∞ limit, the resulting trajectory in the infinite-dimensional phase space converges to the steady-state solution $Q_+ = [q_s(x), 0]$, where $q_s(x)$ satisfies Equation 2.16. The Hamiltonian formulation of extinction events then implies that the most probable path from $[q_s(x), 0]$ to the absorbing state is the unique zero-energy trajectory that starts at $Q_+$ at time t = −∞ and approaches another fixed point $P = [0, p_e(x)]$ at t = +∞ [20, 21]. In other words, this so-called activation trajectory is a heteroclinic connection $Q_+ \to P$ (or instanton solution) in the functional phase space [q(x), p(x)]. It can be seen from Equation 6.6 that the activation trajectory is given by the curve
$$p(x) = \mathcal{F}_x[q] \equiv -2\,\frac{-q(x) + F\left(\int_0^L w(x-y)\,q(y)\,dy\right)}{g(q(x))^2},$$
(6.7)
so that
$$p_e(x) = \lim_{q \to 0} \mathcal{F}_x[q].$$
(6.8)
Note that the condition that $p_e(x)$ exists and is finite is equivalent to the condition that there exists a stationary solution to the underlying functional Fokker-Planck equation; this puts restrictions on the allowed form of g. For the zero-energy trajectory emanating from $Q_+$ at t = −∞, the corresponding action is given by
$$S_0 = \int_{-\infty}^{\infty} dt \int_0^L dx\; p(x,t)\,\dot{q}(x,t),$$
(6.9)
and, up to pre-exponential factors, the estimated time $\tau_e$ to extinction from the steady-state solution $q_s(x)$ is given by [20, 21]
$$\ln \tau_e \simeq \varepsilon^{-2}\,\frac{S_0}{L}.$$
(6.10)
For x-dependent steady-state solutions q s (x), which occur for the Dirichlet boundary conditions and finite L, one has to solve Equations 6.5 and 6.6 numerically. Here, we will consider the simpler case of x-independent solutions, which occur for periodic boundary conditions or the Dirichlet boundary conditions in the large L limit (where boundary layers can be neglected). In this case, q(x, t) → q(t), p(x, t) → p(t) so that
$$\frac{H}{L} = p(t)\left[-q(t) + F(W_0 q(t)) + \frac{1}{2}\,p(t)\,g^2(q(t))\right],$$
(6.11)
and the optimal path is determined by the x-independent form of the Hamilton Equations 6.5 and 6.6:
$$\dot{q} = -q + F(W_0 q) + p\,g^2(q),$$
(6.12)
$$\dot{p} = p - W_0 F'(W_0 q)\,p - p^2\,g'(q)\,g(q).$$
(6.13)
Assuming that F(q) ~ q as q → 0, we have
$$p_e = -2(W_0 - 1)\,\lim_{q \to 0}\frac{q}{g^2(q)}.$$
(6.14)
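The fixed point $Q_+ = (q_0, 0)$ is stable for the deterministic dynamics restricted to p = 0 but becomes a saddle of the full system, as required by the Hamiltonian structure (eigenvalues occur in ± pairs). This can be confirmed by linearizing Equations 6.12 and 6.13 with F(q) = tanh(q), g(q) = βq^{1/2}, W0 = 1.2, and β = 1 (the parameters of Figure 9); the bisection root-finder below is our own convenience.

```python
import math

# Linearization of Equations 6.12-6.13 at Q+ = (q0, 0) for F(q) = tanh(q),
# g(q) = beta * sqrt(q), W0 = 1.2, beta = 1.  The Jacobian at (q0, 0) is
# upper triangular, [[mu, g(q0)**2], [0, -mu]], with eigenvalues +/- mu.
W0, beta = 1.2, 1.0

# nonzero root of q = tanh(W0 * q), by bisection
lo, hi = 0.1, 2.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if math.tanh(W0 * mid) > mid:
        lo = mid
    else:
        hi = mid
q0 = 0.5 * (lo + hi)

Fp = W0 * (1.0 - math.tanh(W0 * q0) ** 2)   # derivative of F(W0 q) at q0
mu = -1.0 + Fp                               # eigenvalue along the q direction
# mu < 0: stable along p = 0; -mu > 0: unstable along the p direction
```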
In Figure 9, we plot the various constant energy solutions of the Hamilton equations (Equations 6.12 and 6.13) for the differentiable rate function F(q) = tanh(q) and multiplicative noise factor $g(q) = \beta q^s$ with β constant. The zero-energy trajectories are highlighted as thicker curves. Let us first consider the case s = 1/2, for which $p_e = -0.4\beta^{-2}$ (see Figure 9a). As expected, one zero-energy curve is the line p = 0, along which Equation 6.12 reduces to the x-independent version of Equation 2.1. If the dynamics were restricted to the one-dimensional manifold p = 0, then the nonzero fixed point $Q_+ = (q_0, 0)$ with $q_0 = F(W_0 q_0)$ would be stable. However, it becomes a saddle point of the full dynamics in the (q, p) plane, reflecting the fact that it is metastable when fluctuations are taken into account. A second zero-energy curve is the absorbing line q = 0, which includes two additional hyperbolic fixed points denoted by $Q_- = (0, 0)$ and $P = (0, p_e)$ in Figure 9. $Q_-$ occurs at the intersection with the line p = 0 and corresponds to the unstable zero-activity state of the deterministic dynamics, whereas P is associated with the effects of fluctuations. Moreover, there exists a third zero-energy curve, which includes a heteroclinic trajectory joining $Q_+$ at t = −∞ to the fluctuational fixed point P at t = +∞. This heteroclinic trajectory represents the optimal (most probable) path linking the metastable fixed point to the absorbing boundary. The extinction time $\tau_e$ is given by Equation 6.10 with $S_0/L = \int_{Q_+}^{P} p\,dq \approx 0.15$, where the integral is evaluated along the heteroclinic trajectory from $Q_+$ to P and equals the area of the shaded regions in Figure 9a. For s < 1/2, $p_e = 0$, and the optimal path is a heteroclinic connection from $Q_+$ to $Q_-$. Hence, $S_0/L = \int_{Q_+}^{Q_-} p\,dq \approx 0.05$.
Figure 9

Phase portrait of constant energy trajectories. Phase portrait of constant energy trajectories for the Hamiltonian system given by Equations 6.12 and 6.13 with F(q) = tanh(q) and $g(q) = \beta q^s$. Parameters are W0 = 1.2 and β = 1. Zero-energy trajectories are indicated by thick curves. The stable and unstable fixed points of the mean-field dynamics are denoted by $Q_+$ and $Q_-$. (a) s = 1/2: There exists a nonzero fluctuational fixed point P that is connected to $Q_+$ via a zero-energy heteroclinic connection. The curve $Q_+P$ is the optimal path from the metastable state to the absorbing state. (b) s = 1/4: There is no longer a fluctuational fixed point P, so the optimal path is a direct heteroclinic connection between $Q_+$ and $Q_-$.
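The action along the zero-energy activation trajectory can be estimated by quadrature: setting the bracket in Equation 6.11 to zero gives p as a function of q, and S0/L is the integral of p dq along this curve. The sketch below does this for s = 1/2 with F(q) = tanh(q), W0 = 1.2, β = 1; the trapezoidal rule and grid are our own choices, and the result is of the same order as the shaded-area value ≈ 0.15 quoted in the text.

```python
import math

# Trapezoidal estimate of S0/L for the zero-energy activation trajectory
# with F(q) = tanh(q), g(q) = beta * q**0.5 (s = 1/2), W0 = 1.2, beta = 1.
# Zero energy gives p(q) = 2*(q - tanh(W0*q)) / g(q)**2 along the
# heteroclinic curve, and S0/L = int_{q0}^{0} p dq.
W0, beta = 1.2, 1.0

lo, hi = 0.1, 2.0                     # nonzero root of q = tanh(W0 q)
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if math.tanh(W0 * mid) > mid:
        lo = mid
    else:
        hi = mid
q0 = 0.5 * (lo + hi)

def p_of_q(q):
    if q == 0.0:
        return 2.0 * (1.0 - W0) / beta ** 2        # p_e, the q -> 0 limit
    return 2.0 * (q - math.tanh(W0 * q)) / (beta ** 2 * q)

n = 2000
h = q0 / n
# S0/L = int_{q0}^{0} p dq = -int_0^{q0} p dq, positive since p <= 0
S0_over_L = -h * (0.5 * (p_of_q(0.0) + p_of_q(q0))
                  + sum(p_of_q(i * h) for i in range(1, n)))
p_e = p_of_q(0.0)                     # equals -2*(W0 - 1) for these choices
```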

Note that since the extinction time is exponentially large in the weak noise limit, it is very sensitive to the precise form of the action S0 and thus of the Hamiltonian H. This implies that when approximating the neural master equation of Buice et al. [46, 54] by a Langevin equation of the form of Equation 5.1 with $\varepsilon \sim 1/\sqrt{N}$, where N is the system size, the resulting Hamiltonian differs from that obtained directly from the master equation and can thus generate a poor estimate of the extinction time. This can be shown either by comparing the path integral representations of the generating functional for both stochastic processes or by comparing the WKB approximation of the master equation and the corresponding Fokker-Planck equation. This particular issue is discussed elsewhere for neural field equations [55, 56].

7 Discussion

In this paper, we have explored some of the consequences of having an unstable zero-activity state in a scalar neural field model. First, we considered invasive activity fronts propagating into the unstable zero state. These waves exhibit a behavior analogous to pulled fronts in reaction-diffusion systems, in which the wave speed is determined by the spreading velocity within the leading edge of the front. Consequently, front propagation is sensitive to perturbations in the leading edge, which we investigated within the context of spatial heterogeneities in the synaptic weights. We showed how time-dependent corrections to the wave speed could be estimated using a Hamilton-Jacobi theory of sharp interface dynamics combined with perturbation theory. Second, we considered a stochastic version of the neural field model, in which the zero-activity fixed point acts as an absorbing state. By constructing an integral representation of the neural Langevin equation, we showed how to estimate the time to extinction from a nontrivial steady state using a Hamiltonian formulation of large fluctuation paths in the weak noise limit.

It is clear from the results of this paper that neural fields with an unstable zero-activity state exhibit considerably different behaviors compared to models in which the zero state (or down state) is stable. Of course, one important question is whether or not real cortical networks ever exist in a regime where the down state is unstable. From a mathematical viewpoint, one has to choose a very specific form of the firing rate function F for such a state to occur (see Figure 1). Nevertheless, Buice and Cowan [46] have demonstrated that a stochastic neural field operating in a regime with an absorbing state belongs to the universality class of directed percolation models and consequently exhibits power law behavior suggestive of several measurements of spontaneous cortical activity in vitro and in vivo [47, 48]. On the other hand, the existence of power law behavior is still controversial [61]. Irrespective of these issues, exploring the connections between nonlocal neural field equations and reaction-diffusion PDEs is likely to be of continued mathematical interest.

Declarations

Acknowledgements

This work was supported by the National Science Foundation (DMS 1120327).

Authors’ Affiliations

(1)
Department of Mathematics, University of Utah

References

  1. Fisher RA: The wave of advance of advantageous genes. Ann Eugenics 1937, 7: 353–369.MATHGoogle Scholar
  2. Kolmogor A, Petrovsky I, Piscounoff N: Étude de l'équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique. Moscow University Bull Math 1937, 1: 1–25.Google Scholar
  3. Murray JD: Mathematical Biology. Berlin: Springer; 1989.View ArticleMATHGoogle Scholar
  4. Holmes EE, Lewis MA, Banks JE, Veit RR: Partial differential equations in ecology: spatial interactions and population dynamics. Ecology 1994, 75: 17–29. 10.2307/1939378View ArticleGoogle Scholar
  5. Shigesada N, Kawasaki K: Biological Invasions: Theory and Practice. Oxford: Oxford University Press; 1997.Google Scholar
  6. Cantrell RS, Cosner C: Spatial Ecology via Reaction-Diffusion Equations. Chichester: Wiley; 2003.MATHGoogle Scholar
  7. Volpert V, Petrovskii S: Reaction-diffusion waves in biology. Physics of Life Reviews 2009, 6: 267–310. 10.1016/j.plrev.2009.10.002View ArticleGoogle Scholar
  8. van Saarloos W: Front propagation into unstable states. Phys Rep 2003, 386: 29–222. 10.1016/j.physrep.2003.08.001View ArticleMATHGoogle Scholar
  9. Brunet E, Derrida B: Shift in the velocity of a front due to a cutoff. Phys Rev E 1997, 56: 2597–2604. 10.1103/PhysRevE.56.2597MathSciNetView ArticleGoogle Scholar
  10. Mendez V, Fort J, Rotstein HG, Fedotov S: Speed of reaction-diffusion fronts in spatially heterogeneous media. Phys Rev E 2003, 68: 041105.MathSciNetView ArticleGoogle Scholar
  11. Rocco A, Ebert U, van Saarloos W: Subdiffusive fluctuations of "pulled" fronts with multiplicative noise. Phys Rev E 2000, 65: R13.View ArticleGoogle Scholar
  12. Shigesada N, Kawasaki K, Teramoto E: Traveling periodic waves in heterogeneous environments. Theor Popul Biol 30: 143–160.Google Scholar
  13. Xin J: Front propagation in heterogeneous media. SIAM Rev 2000, 42: 161–230. 10.1137/S0036144599364296MathSciNetView ArticleMATHGoogle Scholar
  14. Weinberger HF: On spreading speeds and traveling waves for growth and migration in a periodic habitat. J Math Biol 2002, 45: 511–548. 10.1007/s00285-002-0169-3MathSciNetView ArticleMATHGoogle Scholar
  15. Berestycki H, Hamel F, Roques L: Analysis of the periodically fragmented environment model: II-biological invasions and pulsating travelling fronts. J Math Biol 2005, 51: 75–113. 10.1007/s00285-004-0313-3MathSciNetView ArticleMATHGoogle Scholar
  16. Gartner J, Freidlin MI: On the propagation of concentration waves in periodic and random media. Soviet Math Dokl 1979, 20: 1282–128.MathSciNetMATHGoogle Scholar
  17. Freidlin MI: Limit theorems for large deviations and reaction-diffusion equations. Ann Probab 1985, 13: 639–675. 10.1214/aop/1176992901MathSciNetView ArticleMATHGoogle Scholar
  18. Freidlin MI: Geometric optics approach to reaction-diffusion equations. SIAM J Appl Math 1986, 46: 222–232. 10.1137/0146016MathSciNetView ArticleMATHGoogle Scholar
  19. Evans LC, Sougandis PE: A PDE approach to geometric optics for certain semilinear parabolic equations. Ind Univ Math J 1989, 38: 141–172. 10.1512/iumj.1989.38.38007View ArticleMathSciNetGoogle Scholar
  20. Ovaskainen O, Meerson B: Stochastic models of population extinction. Trends Ecol and Evol 2010, 25: 643–652. 10.1016/j.tree.2010.07.009View ArticleGoogle Scholar
  21. Elgart V, Kamanev A: Rare event statistics in reaction-diffusion systems. Phys Rev E 2004, 70: 041106.MathSciNetView ArticleGoogle Scholar
  22. Kamanev A, Meerson B: Extinction of an infectious disease: a large fluctuation in a nonequilibrium system. Phys Rev E 2008, 77: 061107.View ArticleGoogle Scholar
  23. Assaf M, Kamanev A, Meerson B: Population extinction risk in the aftermath of a catastrophic event. Phys Rev E 2009, 79: 011127.View ArticleGoogle Scholar
  24. Meerson B, Sasorov PV: Extinction rates of established spatial populations. Phys Rev E 2011, 83: 011129.MathSciNetView ArticleGoogle Scholar
  25. Freidlin MI, Wentzell AD: Random Perturbations of Dynamical Systems. Springer, New York; 1984.View ArticleMATHGoogle Scholar
  26. Dykman MI, Mori E, Ross J, Hunt PM: Large fluctuations and optimal paths in chemical kinetics. J Chem Phys A 1994, 100: 5735–5750.View ArticleGoogle Scholar
  27. Stein RS, Stein DL: Limiting exit location distribution in the stochastic exit problem. SIAM J Appl Math 1997, 57: 752–790. 10.1137/S0036139994271753MathSciNetView ArticleMATHGoogle Scholar
  28. Ermentrout GB: Neural networks as spatio-temporal pattern-forming systems. Rep Prog Phy 1998, 61: 353–430. 10.1088/0034-4885/61/4/002View ArticleGoogle Scholar
  29. Coombes S: Waves, bumps and patterns in neural field theories. Biol Cybern 2005, 93: 91–108. 10.1007/s00422-005-0574-yMathSciNetView ArticleMATHGoogle Scholar
  30. Bressloff PC: Spatiotemporal dynamics of continuum neural fields. Topical review. J Phys A: Math Theor 2012, 45: 033001. 10.1088/1751-8113/45/3/033001
  31. Pinto D, Ermentrout GB: Spatially structured activity in synaptically coupled neuronal networks: I. Traveling fronts and pulses. SIAM J Appl Math 2001, 62: 206–225. 10.1137/S0036139900346453
  32. Richardson KA, Schiff SJ, Gluckman BJ: Control of traveling waves in the mammalian cortex. Phys Rev Lett 2005, 94: 028103.
  33. Huang X, Troy WC, Yang Q, Ma H, Laing CR, Schiff SJ, Wu J: Spiral waves in disinhibited mammalian neocortex. J Neurosci 2004, 24: 9897–9902. 10.1523/JNEUROSCI.2705-04.2004
  34. Pinto D, Patrick SL, Huang WC, Connors BW: Initiation, propagation, and termination of epileptiform activity in rodent neocortex in vitro involve distinct mechanisms. J Neurosci 2005, 25: 8131–8140. 10.1523/JNEUROSCI.2278-05.2005
  35. Wilson HR, Blake R, Lee SH: Dynamics of traveling waves in visual perception. Nature 2001, 412: 907–910. 10.1038/35091066
  36. Lee SH, Blake R, Heeger DJ: Traveling waves of activity in primary visual cortex during binocular rivalry. Nat Neurosci 2005, 8: 22–23. 10.1038/nn1365
  37. Kang M, Heeger DJ, Blake R: Periodic perturbations producing phase-locked fluctuations in visual perception. J Vision 2009, 9: 1–12.
  38. Bressloff PC, Webber MA: Neural field model of binocular rivalry waves. J Comput Neurosci, in press.
  39. Amari S: Dynamics of pattern formation in lateral inhibition type neural fields. Biol Cybern 1977, 27: 77–87. 10.1007/BF00337259
  40. Ermentrout GB, McLeod JB: Existence and uniqueness of travelling waves for a neural network. Proc Roy Soc Edin A 1993, 123: 461–478. 10.1017/S030821050002583X
  41. Bressloff PC: Traveling fronts and wave propagation failure in an inhomogeneous neural network. Physica D 2001, 155: 83–100. 10.1016/S0167-2789(01)00266-4
  42. Coombes S, Laing CR: Pulsating fronts in periodically modulated neural field models. Phys Rev E 2011, 83: 011912.
  43. Coombes S, Laing CR, Schmidt H, Svanstedt N, Wyller JA: Waves in random neural media. Disc Cont Dyn Syst A, in press.
  44. Folias SE, Bressloff PC: Stimulus-locked traveling pulses and breathers in an excitatory neural network. SIAM J Appl Math 2005, 65: 2067–2092. 10.1137/040615171
  45. Ermentrout GB, Jalics JZ, Rubin JE: Stimulus-driven traveling solutions in continuum neuronal models with a general smooth firing rate function. SIAM J Appl Math 2010, 70: 3039–3054. 10.1137/090775737
  46. Buice M, Cowan JD: Field-theoretic approach to fluctuation effects in neural networks. Phys Rev E 2007, 75: 051919.
  47. Beggs JM, Plenz D: Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures. J Neurosci 2004, 24: 5216. 10.1523/JNEUROSCI.0540-04.2004
  48. Plenz D, Thiagarajan TC: The organizing principles of neuronal avalanches: cell assemblies in the cortex? Trends Neurosci 2007, 30: 101. 10.1016/j.tins.2007.01.005
  49. Bressloff PC, Webber MA: Front propagation in stochastic neural fields. SIAM J Appl Dyn Syst, in press.
  50. Gourley SA: Travelling front solutions of a nonlocal Fisher equation. J Math Biol 2000, 41: 272–284. 10.1007/s002850000047
  51. Kilpatrick ZP, Folias SE, Bressloff PC: Traveling pulses and wave propagation failure in an inhomogeneous neural network. SIAM J Appl Dyn Syst 2008, 7: 161–185. 10.1137/070699214
  52. Xu W, Huang X, Takagaki K, Wu J-Y: Compression and reflection of visually evoked cortical waves. Neuron 2007, 55: 119–129. 10.1016/j.neuron.2007.06.016
  53. Ebert U, van Saarloos W: Front propagation into unstable states: universal algebraic convergence towards uniformly translating pulled fronts. Physica D 2000, 146: 1–99. 10.1016/S0167-2789(00)00068-3
  54. Buice M, Cowan JD, Chow CC: Systematic fluctuation expansion for neural network activity equations. Neural Comp 2010, 22: 377–426. 10.1162/neco.2009.02-09-960
  55. Bressloff PC: Stochastic neural field theory and the system-size expansion. SIAM J Appl Math 2009, 70: 1488–1521.
  56. Bressloff PC: Metastable states and quasicycles in a stochastic Wilson-Cowan model of neuronal population dynamics. Phys Rev E 2010, 82: 051903.
  57. Gardiner CW: Stochastic Methods: A Handbook for the Natural and Social Sciences (Fourth Edition). Berlin: Springer; 2009.
  58. Zinn-Justin J: Quantum Field Theory and Critical Phenomena (Fourth Edition). Oxford: Oxford University Press; 2002.
  59. Tauber UC: Field-theory approaches to nonequilibrium dynamics. Lecture Notes in Physics 2007, 716: 295–348. 10.1007/3-540-69684-9_7
  60. Chow CC, Buice M: Path integral methods for stochastic differential equations. 2011. arXiv:1009.5966
  61. Bedard C, Destexhe A: Macroscopic models of local field potentials and the apparent 1/f noise in brain activity. Biophys J 2009, 96: 2589–2603. 10.1016/j.bpj.2008.12.3951

Copyright

© Bressloff; licensee Springer. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.