 Research
 Open Access
Stability of the stationary solutions of neural field equations with propagation delays
Romain Veltz^{1, 2} and
Olivier Faugeras^{2}
https://doi.org/10.1186/2190-8567-1-1
© Veltz, Faugeras; licensee Springer 2011
Received: 22 October 2010
Accepted: 3 May 2011
Published: 3 May 2011
Abstract
In this paper, we consider neural field equations with space-dependent delays. Neural fields are continuous assemblies of mesoscopic models arising when modeling macroscopic parts of the brain. They are modeled by nonlinear integro-differential equations. We rigorously prove, for the first time to our knowledge, sufficient conditions for the stability of their stationary solutions. We use two methods: 1) the computation of the eigenvalues of the linear operator defined by the linearized equations and 2) the formulation of the problem as a fixed point problem. The first method involves tools of functional analysis and yields a new estimate of the semigroup of the previous linear operator using the eigenvalues of its infinitesimal generator. It yields a sufficient condition for stability which is independent of the characteristics of the delays. The second method allows us to find new sufficient conditions for the stability of stationary solutions which depend upon the values of the delays. These conditions are easy to evaluate numerically. We illustrate the conservativeness of the bounds by comparison with numerical simulations.
1 Introduction
Neural field equations first appeared as a spatially continuous extension of Hopfield networks in the seminal works of Wilson and Cowan, and Amari [1, 2]. These networks describe the mean activity of neural populations by nonlinear integral equations and play an important role in the modeling of various cortical areas including the visual cortex. They have been modified to take into account several relevant biological mechanisms like spike-frequency adaptation [3, 4], the tuning properties of some populations [5] or the spatial organization of the populations of neurons [6]. In this work we focus on the role of the delays coming from the finite velocity of signals in axons and dendrites and from the time of synaptic transmission [7, 8]. It turns out that delayed neural field equations feature some interesting mathematical difficulties. The main question we address in the sequel is the following: once the stationary states of a non-delayed neural field equation are well understood, what changes, if any, are caused by the introduction of propagation delays? We think this question is important since non-delayed neural field equations are by now pretty well understood, at least in terms of their stationary solutions, but the same is not true for their delayed versions, which in many cases are better models, closer to experimental findings. A lot of work has been done concerning the role of delays in wave propagation or in the linear stability of stationary states, but except in [9] the method used reduces to the computation of the eigenvalues (which we call characteristic values) of the linearized equation in some analytically convenient cases (see [10]). Some results are known in the case of a finite number of neurons [11, 12] and in the case of a small number of distinct delays [13, 14]: the dynamical portrait is highly intricate even in the case of two neurons with delayed connections.
The purpose of this article is to propose a solid mathematical framework to characterize the dynamical properties of neural field systems with propagation delays and to show that it allows us to find sufficient delay-dependent bounds for the linear stability of the stationary states. This is a step toward answering the question of how large the delays in a neural field model can be without destabilizing it. As a consequence one can infer in some cases, without much extra work, the changes caused by the finite propagation times of signals from the analysis of a neural field model without propagation delays. This framework also allows us to prove a linear stability principle to study the bifurcations of the solutions when varying the nonlinear gain and the propagation times.
The paper is organized as follows: in Section 2 we describe our model of a delayed neural field, state our assumptions and prove that the resulting equations are well-posed, with a unique bounded solution for all times. In Section 3 we give two different methods for assessing the linear stability of stationary cortical states, that is, of the time-independent solutions of these equations. The first one, Section 3.1, is computationally intensive but accurate. The second one, Section 3.2, is much lighter in terms of computation but unfortunately leads to somewhat coarse approximations. Readers not interested in the theoretical and analytical developments can go directly to the summary of this section. We illustrate these abstract results in Section 4 by applying them to a detailed study of a simple but illuminating example.
2 The model
We give an interpretation of the various parameters and functions that appear in (1).
Ω is a finite piece of cortex and/or feature space and is represented as an open bounded set of ${\mathbf{R}}^{d}$. The vectors r and $\overline{\mathbf{r}}$ represent points in Ω.
It describes the relation between the firing rate ${\nu}_{i}$ of population i and its membrane potential ${V}_{i}$: ${\nu}_{i}=S[{\sigma}_{i}({V}_{i}-{h}_{i})]$. We note V the p-dimensional vector $({V}_{1},\dots ,{V}_{p})$.
The p functions ${\varphi}_{i}$, $i=1,\dots ,p$, represent the initial conditions, see below. We note ϕ the p-dimensional vector $({\varphi}_{1},\dots ,{\varphi}_{p})$.
The p functions ${I}_{i}^{\mathit{ext}}$, $i=1,\dots ,p$, represent external currents from other cortical areas. We note ${\mathbf{I}}^{\mathit{ext}}$ the p-dimensional vector $({I}_{1}^{\mathit{ext}},\dots ,{I}_{p}^{\mathit{ext}})$.
The $p\times p$ matrix of functions $\mathbf{J}={\{{J}_{ij}\}}_{i,j=1,\dots ,p}$ represents the connectivity between populations i and j, see below.
The p real values ${h}_{i}$, $i=1,\dots ,p$, determine the threshold of activity for each population, that is, the value of the membrane potential corresponding to 50% of the maximal activity.
The p real positive values ${\sigma}_{i}$, $i=1,\dots ,p$, determine the slopes of the sigmoids at the origin.
Finally the p real positive values ${l}_{i}$, $i=1,\dots ,p$, determine the speed at which each membrane potential decreases exponentially toward its rest value.
We also introduce the function $\mathbf{S}:{\mathbf{R}}^{p}\to {\mathbf{R}}^{p}$, defined by $\mathbf{S}(\mathbf{x})=[S({\sigma}_{1}({x}_{1}-{h}_{1})),\dots ,S({\sigma}_{p}({x}_{p}-{h}_{p}))]$, and the diagonal $p\times p$ matrix ${\mathbf{L}}_{0}=diag({l}_{1},\dots ,{l}_{p})$.
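As an illustration, the map $\mathbf{S}$ and the matrix ${\mathbf{L}}_{0}$ are straightforward to set up numerically. The following sketch uses a logistic sigmoid for S and made-up values for the slopes ${\sigma}_{i}$, thresholds ${h}_{i}$ and decay rates ${l}_{i}$ (all of them illustrative assumptions, not values from the paper):

```python
import numpy as np

def make_S(sigma, h):
    """Vector-valued map S(x) = [S0(sigma_i * (x_i - h_i))]_i for a
    logistic sigmoid S0; sigma_i are the slopes, h_i the thresholds."""
    sigma, h = np.asarray(sigma, float), np.asarray(h, float)
    S0 = lambda y: 1.0 / (1.0 + np.exp(-y))
    return lambda x: S0(sigma * (np.asarray(x, float) - h))

# Two populations with illustrative slopes, thresholds and decay rates.
S = make_S(sigma=[1.0, 2.0], h=[0.0, 0.5])
L0 = np.diag([1.0, 0.5])      # L0 = diag(l_1, ..., l_p)

print(S([0.0, 0.5]))          # at V_i = h_i each rate is 50% of its maximum
```

Note how the choice $V_i = h_i$ recovers the interpretation of the thresholds given above: the output is 0.5 for each population.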
A difference with other studies is the intrinsic dynamics of the population given by the linear response of chemical synapses. In [9, 15], $(\frac{d}{dt}+{l}_{i})$ is replaced by ${(\frac{d}{dt}+{l}_{i})}^{2}$ to use the alpha function synaptic response. We use $(\frac{d}{dt}+{l}_{i})$ for simplicity although our analysis applies to more general intrinsic dynamics, see Proposition 3.10 in Section 3.1.3.
For the sake of generality, the propagation delays are not assumed to be identical for all populations, hence they are described by a matrix $\mathit{\tau}(\mathbf{r},\overline{\mathbf{r}})$ whose element ${\tau}_{ij}(\mathbf{r},\overline{\mathbf{r}})$ is the propagation delay between population j at $\overline{\mathbf{r}}$ and population i at r. The reason for this assumption is that it is still unclear from physiology if propagation delays are independent of the populations. We assume for technical reasons that τ is continuous, that is, $\mathit{\tau}\in \phantom{\rule{0.25em}{0ex}}{C}^{0}({\overline{\Omega}}^{2},{\mathbf{R}}_{+}^{p\times p})$. Moreover biological data indicate that τ is not a symmetric function (that is, ${\tau}_{ij}(\mathbf{r},\overline{\mathbf{r}})\ne {\tau}_{ji}(\overline{\mathbf{r}},\mathbf{r})$), thus no assumption is made about this symmetry unless otherwise stated.
Hence we choose $T={\tau}_{m}$.
2.1 The propagationdelay function
where c is the inverse of the propagation speed.
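On a discretized domain this propagation-delay function becomes a simple matrix. The sketch below tabulates it for a single population on Ω = (0, 1), with an illustrative value of c (an assumption made for the example):

```python
import numpy as np

# Discretize Omega = (0, 1) into n points and tabulate the propagation delays
# tau(r, rbar) = c * |r - rbar|, where c is the inverse propagation speed.
n, c = 101, 2.0
r = np.linspace(0.0, 1.0, n)
tau = c * np.abs(r[:, None] - r[None, :])   # tau[i, j] = c * |r_i - r_j|

tau_m = tau.max()     # maximal delay: tau_m = c * diam(Omega)
print(tau_m)          # -> 2.0
```

This particular choice of τ happens to be symmetric; as discussed above, no such symmetry is assumed in the general theory.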
2.2 Mathematical framework
is the linear continuous operator satisfying (the notation is defined in Definition A.2 of Appendix A). Notice that most of the papers on this subject assume Ω infinite, hence requiring ${\tau}_{m}=\infty $. This raises difficult mathematical questions which we do not have to worry about, unlike [9, 15, 20–24].
We first recall the following proposition whose proof appears in [25].
Proposition 2.1 Assume that:
 1. $\mathbf{J}\in {\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})$,
 2. the external current ${\mathbf{I}}^{\mathit{ext}}\in {C}^{0}(\mathbf{R},\mathcal{F})$,
 3. $\mathit{\tau}\in {C}^{0}({\overline{\Omega}}^{2},{\mathbf{R}}_{+}^{p\times p})$, ${sup}_{{\overline{\Omega}}^{2}}\tau \le {\tau}_{m}$.
Then for any $\varphi \in \mathcal{C}$, there exists a unique solution $\mathbf{V}\in {C}^{1}([0,\infty ),\mathcal{F})\cap {C}^{0}([-{\tau}_{m},\infty ),\mathcal{F})$ to (3).
Notice that this result gives existence on ${\mathbf{R}}_{+}$: finite-time explosion is impossible for this delayed differential equation. Nevertheless, a particular solution could grow indefinitely; we now prove that this cannot happen.
2.3 Boundedness of solutions
A valid model of neural networks should only feature bounded membrane potentials. We find a bounded attracting set in the spirit of our previous work with non-delayed neural mass equations. The proof is almost the same as in [19], but some care has to be taken because of the delays.
Theorem 2.2 All the trajectories of the equation (3) are ultimately bounded by the same constant R (see the proof) if $I\equiv {max}_{t\in {\mathbf{R}}^{+}}{\parallel {\mathbf{I}}^{\mathit{ext}}(t)\parallel}_{\mathcal{F}}<\infty $.
Thus, if ${\parallel \mathbf{V}(t)\parallel}_{\mathcal{F}}\ge R$, $f(t,{\mathbf{V}}_{t})\le -\frac{l{R}^{2}}{2}\stackrel{\mathrm{def}}{=}\delta <0$.
Let us show that the open ball of $\mathcal{F}$ of center 0 and radius R, ${B}_{R}$, is stable under the dynamics of equation (3). We know that $\mathbf{V}(t)$ is defined for all $t\ge 0$ and that $f<0$ on $\partial {B}_{R}$, the boundary of ${B}_{R}$. We consider three cases for the initial condition ${\mathbf{V}}_{0}$.
If ${\parallel {\mathbf{V}}_{0}\parallel}_{\mathcal{C}}<R$, set $T=sup\{t\mid \forall s\in [0,t],\mathbf{V}(s)\in {\overline{B}}_{R}\}$. Suppose that $T\in \mathbf{R}$; then $\mathbf{V}(T)$ is defined and belongs to ${\overline{B}}_{R}$, the closure of ${B}_{R}$, because ${\overline{B}}_{R}$ is closed, in fact to $\partial {B}_{R}$. We also have $\frac{d}{dt}{\parallel \mathbf{V}\parallel}_{\mathcal{F}}^{2}{|}_{t=T}=f(T,{\mathbf{V}}_{T})\le \delta <0$ because $\mathbf{V}(T)\in \partial {B}_{R}$. Thus we deduce that for $\epsilon >0$ small enough, $\mathbf{V}(T+\epsilon )\in {\overline{B}}_{R}$, which contradicts the definition of T. Thus $T\notin \mathbf{R}$ and ${\overline{B}}_{R}$ is stable.
Because $f<0$ on $\partial {B}_{R}$, $\mathbf{V}(0)\in \partial {B}_{R}$ implies that $\forall t>0$, $\mathbf{V}(t)\in {B}_{R}$.
Finally we consider the case ${\mathbf{V}}_{0}\in \complement {\overline{B}}_{R}$. Suppose that $\forall t>0$, $\mathbf{V}(t)\notin {\overline{B}}_{R}$; then $\forall t>0$, $\frac{d}{dt}{\parallel \mathbf{V}\parallel}_{\mathcal{F}}^{2}\le 2\delta <0$, thus ${\parallel \mathbf{V}(t)\parallel}_{\mathcal{F}}$ is monotonically decreasing and reaches the value R in finite time, at which point $\mathbf{V}(t)$ reaches $\partial {B}_{R}$. This contradicts our assumption. Thus $\exists T>0$ such that $\mathbf{V}(T)\in {B}_{R}$. □
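The conclusion of Theorem 2.2 is easy to observe on a toy example. The following Euler simulation of a scalar delayed equation $\dot{V}(t)=-lV(t)+JS(V(t-\tau ))+I$ (all parameter values illustrative, one population, logistic S) starts far outside the attracting ball and falls back into it, precisely because S is bounded:

```python
import numpy as np

# Euler scheme for dV/dt = -l V(t) + J * S(V(t - tau)) + I with a constant
# initial history; since sup S = 1, trajectories are ultimately bounded by
# the crude radius R = (|J| * sup S + |I|) / l.
l, J, I, tau, dt = 1.0, 3.0, 0.5, 1.0, 0.01
S = lambda v: 1.0 / (1.0 + np.exp(-v))
d = int(round(tau / dt))                 # delay expressed in time steps
V = [10.0] * (d + 1)                     # large constant history on [-tau, 0]
for _ in range(5000):                    # integrate 50 time units
    V.append(V[-1] + dt * (-l * V[-1] + J * S(V[-(d + 1)]) + I))

R = (abs(J) * 1.0 + abs(I)) / l          # attracting radius (sup S = 1)
print(abs(V[-1]) < R)                    # -> True: the trajectory entered the ball
```

The trajectory settles near the fixed point of $V=JS(V)+I$, well inside the ball of radius R, regardless of how large the initial history is.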
3 Stability results
When studying a dynamical system, a good starting point is to look for invariant sets. Theorem 2.2 provides such an invariant set, but it is a very large one, not sufficient to convey a good understanding of the system. Other invariant sets (included in the previous one) are stationary points. Notice that delayed and non-delayed equations share exactly the same stationary solutions, also called persistent states. We can therefore make good use of the harvest of results available about these persistent states, which we note ${\mathbf{V}}^{f}$. Note that in most papers dealing with persistent states, the authors compute one of them and are satisfied with the study of the local dynamics around this particular stationary solution. Very few authors (we are aware only of [19, 26]) address the problem of computing the whole set of persistent states. Despite these efforts they have so far been unable to get a complete grasp of the global dynamics. To summarize, in order to understand the impact of the propagation delays on the solutions of the neural field equations, it is necessary to know all their stationary solutions and the dynamics in the region where these stationary solutions lie. Unfortunately such knowledge is currently not available. Hence we must be content with studying the local dynamics around each persistent state (computed, for example, with the tools of [19]) with and without propagation delays. This is already, we think, a significant step forward toward understanding delayed neural field equations.
From now on we note ${\mathbf{V}}^{f}$ a persistent state of (3) and study its stability.
There are three main methods for studying the stability of a stationary solution:
 1. to derive a Lyapunov functional,
 2. to use a fixed point approach,
 3. to determine the spectrum of the infinitesimal generator associated to the linearized equation.
Previous results concerning stability bounds in delayed neural mass equations are ‘absolute’ results that do not involve the delays: they provide a sufficient condition, independent of the delays, for the stability of the fixed point (see [15, 20–22]). The bound they find is similar to our second bound in Proposition 3.13. They ‘proved’ it by showing that if the condition is satisfied, the eigenvalues of the infinitesimal generator of the semigroup of the linearized equation have negative real parts. This is not sufficient: as shown below, a more complete analysis of the spectrum (for example, of its essential part) is necessary in order to prove that the semigroup is exponentially bounded. We prove this assertion in the case of a bounded cortex (see Section 3.1). To our knowledge it is still unknown whether it holds in the case of an infinite cortex.
These authors also provide a delaydependent sufficient condition to guarantee that no oscillatory instabilities can appear, that is, they give a condition that forbids the existence of solutions of the form ${e}^{i(\mathbf{k}\cdot \mathbf{r}+\omega t)}$. However, this result does not give any information regarding stability of the stationary solution.
We use the second method cited above, the fixed point method, to prove a more general result which takes into account the delay terms. We also use the third method, the spectral method, to prove the delay-independent bound from [15, 20–22]. We then evaluate the conservativeness of these two sufficient conditions. Note that the delay-independent bound has been correctly derived in [25] using the first method, the Lyapunov method. It might be of interest to explore its potential to derive a delay-dependent bound.
3.1 Principle of linear stability analysis via characteristic values
We derive the stability of the persistent state ${\mathbf{V}}^{f}$ (see [19]) for equation (1), or equivalently (3), using the spectral properties of the infinitesimal generator. We prove that if the eigenvalues of the infinitesimal generator of the right-hand side of (4) lie in the left half of the complex plane, the stationary state $\mathbf{U}=0$ is asymptotically stable for equation (4). This result is difficult to prove because the spectrum (the main definitions for the spectrum of a linear operator are recalled in Appendix A) of the infinitesimal generator neither reduces to the point spectrum (the set of eigenvalues of finite multiplicity) nor is contained in a cone of the complex plane C (such an operator is said to be sectorial). The ‘principle of linear stability’ is the fact that the linear stability of U is inherited by the state ${\mathbf{V}}^{f}$ for the nonlinear equations (1) or (3). This result is stated in Corollaries 3.7 and 3.8.
Following [27–31], we note ${(\mathbf{T}(t))}_{t\ge 0}$ the strongly continuous semigroup of (4) on $\mathcal{C}$ (see Definition A.3 in Appendix A) and A its infinitesimal generator. By definition, if U is the solution of (4) we have ${\mathbf{U}}_{t}=\mathbf{T}(t)\varphi $. In order to prove the linear stability, we need to find a condition on the spectrum $\Sigma (\mathbf{A})$ of A which ensures that $\mathbf{T}(t)\to 0$ as $t\to \infty $.
Such a ‘principle’ of linear stability was derived in [29, 30]. Their assumptions implied that $\Sigma (\mathbf{A})$ was a pure point spectrum (it contained only eigenvalues) with the effect of simplifying the study of the linear stability because, in this case, one can link estimates of the semigroup T to the spectrum of A. This is not the case here (see Proposition 3.4).
We prove in Lemma 3.6 (see below) that ${(\mathbf{T}(t))}_{t\ge 0}$ is eventually norm continuous. Let us start by computing the spectrum of A.
3.1.1 Computation of the spectrum of A
In this section we use ${\mathbf{L}}_{1}$ for ${\tilde{\mathbf{L}}}_{1}$ for simplicity.
The spectrum $\Sigma (\mathbf{A})$ consists of those $\lambda \in \mathbf{C}$ such that the operator $\Delta (\lambda )$ of $\mathcal{L}(\mathcal{F})$ defined by $\Delta (\lambda )=\lambda \mathrm{Id}+{\mathbf{L}}_{0}-\mathbf{J}(\lambda )$ is non-invertible. We use the following definition:
Definition 3.2 (Characteristic values (CV))
The characteristic values of A are the λs such that $\Delta (\lambda )$ has a kernel which is not reduced to 0, that is, is not injective.
It is easy to see that the CV are the eigenvalues of A.
There are various ways to compute the spectrum of an operator in infinite dimensions. They are related to how the spectrum is partitioned (for example, continuous spectrum, point spectrum…). In the case of operators which are compact perturbations of the identity such as Fredholm operators, which is the case here, there is no continuous spectrum. Hence the most convenient way for us is to compute the point spectrum and the essential spectrum (see Appendix A). This is what we achieve next.
Remark 1 In finite dimension (that is, $dim\mathcal{F}<\infty $), the spectrum of A consists only of CV. We show that this is not the case here.
Notice that most papers dealing with delayed neural field equations only compute the CV and numerically assess the linear stability (see [9, 24, 33]).
We now show that we can link the spectral properties of A to the spectral properties of ${\mathbf{L}}_{\lambda}$. This is important since the latter operator is easier to handle because it acts on a Hilbert space. We start with the following lemma (see [34] for similar results in a different setting).
Lemma 3.3$\lambda \in {\Sigma}_{\mathit{ess}}(\mathbf{A})\iff \lambda \in {\Sigma}_{\mathit{ess}}({\mathbf{L}}_{\lambda})$.
Proof Let us define the following operator. If $\lambda \in \mathbf{C}$, we define ${\mathcal{T}}_{\lambda}\in \mathcal{L}(\mathcal{C},\mathcal{F})$ by ${\mathcal{T}}_{\lambda}(\varphi )=\varphi (0)+\mathbf{L}({\int}_{\cdot}^{0}{e}^{\lambda (\cdot -s)}\varphi (s)\phantom{\rule{0.2em}{0ex}}ds)$, $\varphi \in \mathcal{C}$. From [28, Lemma 34], ${\mathcal{T}}_{\lambda}$ is surjective, and it is easy to check that $\varphi \in \mathcal{R}(\lambda \mathrm{Id}-\mathbf{A})$ iff ${\mathcal{T}}_{\lambda}(\varphi )\in \mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})$, see [28, Lemma 35]. Moreover $\mathcal{R}(\lambda \mathrm{Id}-\mathbf{A})$ is closed in $\mathcal{C}$ iff $\mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})$ is closed in $\mathcal{F}$, see [28, Lemma 36].
Let us now prove the lemma. We already know that $\mathcal{R}(\lambda \mathrm{Id}-\mathbf{A})$ is closed in $\mathcal{C}$ iff $\mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})$ is closed in $\mathcal{F}$. Also, we have $\mathcal{N}(\lambda \mathrm{Id}-\mathbf{A})=\{\theta \to {e}^{\theta \lambda}\mathbf{U},\mathbf{U}\in \mathcal{N}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})\}$, hence $dim\mathcal{N}(\lambda \mathrm{Id}-\mathbf{A})<\infty $ iff $dim\mathcal{N}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})<\infty $. It remains to check that $codim\mathcal{R}(\lambda \mathrm{Id}-\mathbf{A})<\infty $ iff $codim\mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})<\infty $.
Suppose that $codim\mathcal{R}(\lambda \mathrm{Id}-\mathbf{A})<\infty $. There exist ${\varphi}_{1},\dots ,{\varphi}_{N}\in \mathcal{C}$ such that $\mathcal{C}=Span({\varphi}_{i})+\mathcal{R}(\lambda \mathrm{Id}-\mathbf{A})$. Consider ${\mathbf{U}}_{i}\equiv {\mathcal{T}}_{\lambda}({\varphi}_{i})\in \mathcal{F}$. Because ${\mathcal{T}}_{\lambda}$ is surjective, for all $\mathbf{U}\in \mathcal{F}$ there exists $\psi \in \mathcal{C}$ satisfying $\mathbf{U}={\mathcal{T}}_{\lambda}(\psi )$. We write $\psi ={\sum}_{i=1}^{N}{x}_{i}{\varphi}_{i}+f$, $f\in \mathcal{R}(\lambda \mathrm{Id}-\mathbf{A})$. Then $\mathbf{U}={\sum}_{i=1}^{N}{x}_{i}{\mathbf{U}}_{i}+{\mathcal{T}}_{\lambda}(f)$ where ${\mathcal{T}}_{\lambda}(f)\in \mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})$, that is, $codim\mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})<\infty $.
Suppose that $codim\mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})<\infty $. There exist ${\mathbf{U}}_{1},\dots ,{\mathbf{U}}_{N}\in \mathcal{F}$ such that $\mathcal{F}=Span({\mathbf{U}}_{i})+\mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})$. As ${\mathcal{T}}_{\lambda}$ is surjective, for all $i=1,\dots ,N$ there exists ${\varphi}_{i}\in \mathcal{C}$ such that ${\mathbf{U}}_{i}={\mathcal{T}}_{\lambda}({\varphi}_{i})$. Now consider $\psi \in \mathcal{C}$. ${\mathcal{T}}_{\lambda}(\psi )$ can be written ${\mathcal{T}}_{\lambda}(\psi )={\sum}_{i=1}^{N}{x}_{i}{\mathbf{U}}_{i}+\tilde{\mathbf{U}}$ where $\tilde{\mathbf{U}}\in \mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})$. But $\psi -{\sum}_{i=1}^{N}{x}_{i}{\varphi}_{i}\in \mathcal{R}(\lambda \mathrm{Id}-\mathbf{A})$ because ${\mathcal{T}}_{\lambda}(\psi -{\sum}_{i=1}^{N}{x}_{i}{\varphi}_{i})=\tilde{\mathbf{U}}\in \mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})$. It follows that $codim\mathcal{R}(\lambda \mathrm{Id}-\mathbf{A})<\infty $. □
Lemma 3.3 is the key to obtaining $\Sigma (\mathbf{A})$. Note that it is true regardless of the form of L and could be applied to other types of delays in neural field equations. We now prove the following important proposition.
Proposition 3.4
 1. ${\Sigma}_{\mathit{ess}}(\mathbf{A})=\Sigma (-{\mathbf{L}}_{0})$.
 2. $\Sigma (\mathbf{A})$ is at most countable.
 3. $\Sigma (\mathbf{A})=\Sigma (-{\mathbf{L}}_{0})\cup CV$.
 4. For $\lambda \in \Sigma (\mathbf{A})\setminus \Sigma (-{\mathbf{L}}_{0})$, the generalized eigenspace ${\bigcup}_{k}\mathcal{N}({(\lambda I-\mathbf{A})}^{k})$ is finite dimensional and $\exists k\in \mathbb{N}$, $\mathcal{C}=\mathcal{N}({(\lambda I-\mathbf{A})}^{k})\oplus \mathcal{R}({(\lambda I-\mathbf{A})}^{k})$.
Proof
 1. By Lemma 3.3, $\lambda \in {\Sigma}_{\mathit{ess}}(\mathbf{A})\iff \lambda \in {\Sigma}_{\mathit{ess}}({\mathbf{L}}_{\lambda})={\Sigma}_{\mathit{ess}}(-{\mathbf{L}}_{0}+\mathbf{J}(\lambda ))$. We apply [35, Theorem IV.5.26], which shows that the essential spectrum does not change under compact perturbation. As $\mathbf{J}(\lambda )\in \mathcal{L}(\mathcal{F})$ is compact, we find ${\Sigma}_{\mathit{ess}}(-{\mathbf{L}}_{0}+\mathbf{J}(\lambda ))={\Sigma}_{\mathit{ess}}(-{\mathbf{L}}_{0})$.
Let us show that ${\Sigma}_{\mathit{ess}}(-{\mathbf{L}}_{0})=\Sigma (-{\mathbf{L}}_{0})$. The inclusion ‘⊂’ is trivial. Now if $\lambda \in \Sigma (-{\mathbf{L}}_{0})$, for example $\lambda =-{l}_{1}$, then $\lambda \mathrm{Id}+{\mathbf{L}}_{0}=diag(0,-{l}_{1}+{l}_{2},\dots )$, whose kernel contains the infinite dimensional subspace of functions with all components but the first equal to zero, so that $\lambda \in {\Sigma}_{\mathit{ess}}(-{\mathbf{L}}_{0})$.
 2. We apply [35, Theorem IV.5.33], stating (in its first part) that if ${\Sigma}_{\mathit{ess}}(\mathbf{A})$ is at most countable, so is $\Sigma (\mathbf{A})$.
 3. We apply again [35, Theorem IV.5.33], stating that if ${\Sigma}_{\mathit{ess}}(\mathbf{A})$ is at most countable, any point in $\Sigma (\mathbf{A})\setminus {\Sigma}_{\mathit{ess}}(\mathbf{A})$ is an isolated eigenvalue with finite multiplicity.
 4. Because ${\Sigma}_{\mathit{ess}}(\mathbf{A})\subset {\Sigma}_{\mathit{ess},\mathit{Arino}}(\mathbf{A})$, we can apply [28, Theorem 2], which states precisely this property. □
Last but not least, we can prove that all but possibly finitely many of the CVs are located in the left half of the complex plane. This indicates that the unstable manifold is always finite dimensional for the models we are considering here.
Corollary 3.5 $\mathit{Card}\,\Sigma (\mathbf{A})\cap \{\lambda \in \mathbb{C},\Re \lambda >-l\}<\infty $ where $l={min}_{i}{l}_{i}$.
Proof If $\lambda =\rho +i\omega \in \Sigma (\mathbf{A})$ and $\rho >-l$, then λ is a CV, that is, $\mathcal{N}(\mathrm{Id}-{(\lambda \mathrm{Id}+{\mathbf{L}}_{0})}^{-1}\mathbf{J}(\lambda ))\ne \{0\}$, stating that $1\in {\Sigma}_{P}({(\lambda \mathrm{Id}+{\mathbf{L}}_{0})}^{-1}\mathbf{J}(\lambda ))$ (${\Sigma}_{P}$ denotes the point spectrum).
But ${\parallel {(\lambda \mathrm{Id}+{\mathbf{L}}_{0})}^{-1}\mathbf{J}(\lambda )\parallel}<1$ for $|\lambda |$ big enough, since ${sup}_{\Re \lambda >-l}{\parallel \mathbf{J}(\lambda )\parallel}$ is bounded. Hence, by the spectral radius inequality, $1\notin {\Sigma}_{P}({(\lambda \mathrm{Id}+{\mathbf{L}}_{0})}^{-1}\mathbf{J}(\lambda ))$ for $|\lambda |$ large enough. This relationship states that the CVs λ satisfying $\Re \lambda >-l$ are located in a bounded set of the right part of $\mathbb{C}$; given that the CVs are isolated, there is a finite number of them. □
3.1.2 Stability results from the characteristic values
We start with a lemma stating regularity for ${(\mathbf{T}(t))}_{t\ge 0}$:
Lemma 3.6 The semigroup ${(\mathbf{T}(t))}_{t\ge 0}$ of (4) is norm continuous on $\mathcal{C}$ for $t>{\tau}_{m}$.
Proof We first notice that $-{\mathbf{L}}_{0}$ generates a norm continuous semigroup (in fact a group) $\mathbf{S}(t)={e}^{-t{\mathbf{L}}_{0}}$ on $\mathcal{F}$ and that ${\tilde{\mathbf{L}}}_{1}$ is continuous from $\mathcal{C}$ to $\mathcal{F}$. The lemma follows directly from [27, Theorem VI.6.6]. □
Using the spectrum computed in Proposition 3.4, the previous lemma and the formula (5), we can state the asymptotic stability of the linear equation (4). Notice that because of Corollary 3.5, the supremum in (5) is in fact a max.
Corollary 3.7 (Linear stability)
Zero is asymptotically stable for (4) if and only if $max\Re {\Sigma}_{p}(\mathbf{A})<0$.
We conclude by showing that the computation of the characteristic values of A is enough to state the stability of the stationary solution ${\mathbf{V}}^{f}$.
Corollary 3.8 If $max\Re {\Sigma}_{p}(\mathbf{A})<0$, then the persistent solution ${\mathbf{V}}^{f}$ of (3) is asymptotically stable.
Proof Using $\mathbf{U}=\mathbf{V}-{\mathbf{V}}^{f}$, we write (3) as $\dot{\mathbf{U}}(t)=\mathbf{L}{\mathbf{U}}_{t}+G({\mathbf{U}}_{t})$. The function G is ${C}^{2}$ and satisfies $G(0)=0$, $DG(0)=0$ and ${\parallel G({\mathbf{U}}_{t})\parallel}_{\mathcal{C}}=O({\parallel {\mathbf{U}}_{t}\parallel}_{\mathcal{C}}^{2})$. We next apply a variation-of-constants formula. In the case of delayed equations, this formula is difficult to handle because the semigroup T should act on non-continuous functions, as shown by the formula ${\mathbf{U}}_{t}=\mathbf{T}(t)\varphi +{\int}_{0}^{t}\mathbf{T}(t-s)[{X}_{0}G({\mathbf{U}}_{s})]\phantom{\rule{0.2em}{0ex}}ds$, where ${X}_{0}(\theta )=0$ if $\theta <0$ and ${X}_{0}(0)=1$. Note that the function $\theta \to {X}_{0}(\theta )G({\mathbf{U}}_{s})$ is not continuous at $\theta =0$.
where ${\pi}_{2}$ is the projector on the second component.
Now we choose $\omega =-max\Re {\Sigma}_{p}(\mathbf{A})/2>0$; the spectral mapping theorem implies that there exists $M>0$ such that ${\parallel \mathbf{T}(t)\varphi \parallel}_{\mathcal{C}}\le M{e}^{-\omega t}{\parallel \varphi \parallel}_{\mathcal{C}}$ and ${\parallel \mathbf{T}(t){X}_{0}\parallel}\le M{e}^{-\omega t}$. It follows that ${\parallel {\mathbf{U}}_{t}\parallel}_{\mathcal{C}}\le M{e}^{-\omega t}({\parallel {\mathbf{U}}_{0}\parallel}_{\mathcal{C}}+{\int}_{0}^{t}{e}^{\omega s}{\parallel G({\mathbf{U}}_{s})\parallel}_{\mathcal{F}}\phantom{\rule{0.2em}{0ex}}ds)$ and from Theorem 2.2, ${\parallel G({\mathbf{U}}_{t})\parallel}_{\mathcal{C}}=O(1)$, which yields ${\parallel {\mathbf{U}}_{t}\parallel}_{\mathcal{C}}=O({e}^{-\omega t})$ and concludes the proof. □
Finally, we can use the CVs to derive a sufficient stability result.
Proposition 3.9 If ${\parallel \mathbf{J}\cdot DS({\mathbf{V}}^{f})\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbf{R}}^{p\times p})}<{min}_{i}{l}_{i}$, then ${\mathbf{V}}^{f}$ is asymptotically stable for (3).
Proof Suppose that a CV λ with positive real part exists; this gives a nonzero vector in the kernel of $\Delta (\lambda )$. Straightforward estimates then imply that ${min}_{i}{l}_{i}\le {\parallel \mathbf{J}\cdot DS({\mathbf{V}}^{f})\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbf{R}}^{p\times p})}$, a contradiction. □
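The left-hand side of this condition is easy to evaluate by quadrature. The sketch below checks it for a single population on Ω = (0, 1), with an illustrative cosine connectivity kernel, logistic sigmoid and persistent state ${\mathbf{V}}^{f}=0$ (all assumptions made for the example, not data from the paper):

```python
import numpy as np

# Quadrature check of Proposition 3.9 for one population on Omega = (0, 1):
# stability is guaranteed if || J * S'(V^f) ||_{L^2(Omega^2)} < min_i l_i.
n, l_min, sigma = 400, 1.0, 2.0
r = np.linspace(0.0, 1.0, n)
S = lambda v: 1.0 / (1.0 + np.exp(-v))
dS = lambda v: sigma * S(sigma * v) * (1.0 - S(sigma * v))  # d/dv S(sigma*v)
J = 0.8 * np.cos(2.0 * np.pi * (r[:, None] - r[None, :]))   # connectivity
Vf = np.zeros(n)                                            # persistent state

K = J * dS(Vf)[None, :]              # linearized kernel J(r, rbar) S'(V^f(rbar))
norm = np.sqrt(np.mean(K ** 2))      # L^2(Omega^2) norm by quadrature (|Omega|=1)
print(norm < l_min)                  # -> True: V^f is asymptotically stable
```

Here $DS({\mathbf{V}}^{f})=\sigma /4$ at the origin, so the norm is well below ${min}_{i}{l}_{i}$ and the delay-independent criterion applies whatever the delays.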
3.1.3 Generalization of the model
This indicates that the essential spectrum ${\Sigma}_{\mathit{ess}}(\mathcal{A})$ of $\mathcal{A}$ is equal to ${\bigcup}_{i}Root({P}_{i})$, which is located in the left half of the complex plane. Thus the point spectrum is enough to characterize the linear stability:
Proposition 3.10 If $max\Re {\Sigma}_{p}(\mathcal{A})<0$, the persistent solution ${\mathbf{V}}^{f}$ of (6) is asymptotically stable.
Using the same proof as in [20], one can show that $max\Re \Sigma (\mathcal{A})<0$ provided that ${\parallel \mathbf{J}\cdot DS({\mathbf{V}}^{f})\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbf{R}}^{p\times p})}<{min}_{k\in \mathbb{N},\omega \in \mathbb{R}}|{P}_{k}(i\omega )|$.
Proposition 3.11 If ${\parallel \mathbf{J}\cdot DS({\mathbf{V}}^{f})\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbf{R}}^{p\times p})}<{min}_{k\in \mathbb{N},\omega \in \mathbb{R}}|{P}_{k}(i\omega )|$, then ${\mathbf{V}}^{f}$ is asymptotically stable.
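For the alpha-function synaptic dynamics mentioned in Section 2, $P(X)={(X+l)}^{2}$, the right-hand side of this bound can be evaluated in closed form: $|P(i\omega )|={\omega}^{2}+{l}^{2}$ is minimized at $\omega =0$, giving the threshold ${l}^{2}$. A quick numerical confirmation (the value of l is illustrative):

```python
import numpy as np

# For P(X) = (X + l)^2 we have |P(i*omega)| = omega^2 + l^2, minimized at 0.
l = 0.7
omega = np.linspace(-20.0, 20.0, 400001)         # grid containing omega = 0
minval = np.min(np.abs((1j * omega + l) ** 2))
print(minval)                                    # ~ l**2 = 0.49
```

So for these second-order dynamics the delay-independent criterion reads ${\parallel \mathbf{J}\cdot DS({\mathbf{V}}^{f})\parallel}_{{\mathbf{L}}^{2}}<{l}^{2}$.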
3.2 Principle of linear stability analysis via fixed point theory
The idea behind this method (see [37]) is to write (4) as an integral equation. This integral equation is then interpreted as a fixed point problem. We already know that this problem has a unique solution in ${\mathcal{C}}^{0}$. However, looking at the definition of (Lyapunov) stability, we can express stability as the existence of a solution of the fixed point problem in a smaller space $\mathcal{S}\subset {\mathcal{C}}^{0}$. The existence of a solution in $\mathcal{S}$ gives the unique solution in ${\mathcal{C}}^{0}$. Hence, the method consists in providing conditions for the fixed point problem to have a solution in $\mathcal{S}$; in the two cases presented below, we use the Picard fixed point theorem to obtain these conditions. Usually this method gives conditions on the averaged quantities arising in (4), whereas a Lyapunov method would give conditions on the sign of the same quantities. Neither method is to be preferred; rather, both should be applied to obtain the best bounds.
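The scheme can be demonstrated on a scalar toy problem (an illustration of the technique only, not the operators of the paper). Writing $\dot{u}=-lu+au(t-\tau )$ in integral form, $u(t)={e}^{-lt}u(0)+{\int}_{0}^{t}{e}^{-l(t-s)}au(s-\tau )\phantom{\rule{0.2em}{0ex}}ds$, the right-hand side defines a map whose Lipschitz constant in the sup norm is at most $|a|/l$, so Picard iteration converges when $|a|<l$:

```python
import numpy as np

# Picard iteration for u(t) = e^{-l t} u0 + int_0^t e^{-l (t-s)} a u(s - tau) ds
# on a uniform grid; the map is a sup-norm contraction when |a| / l < 1.
l, a, tau, u0, dt, T = 1.0, 0.5, 0.5, 1.0, 0.001, 10.0
t = np.arange(0.0, T, dt)
d = int(round(tau / dt))

u = np.zeros_like(t)                         # Picard iteration starts from 0
for _ in range(100):
    delayed = np.concatenate([np.full(d, u0), u[:len(t) - d]])  # u(s - tau)
    # cumulative left-Riemann quadrature of the exponential convolution
    conv = np.exp(-l * t) * np.cumsum(np.exp(l * t) * a * delayed) * dt
    u_new = np.exp(-l * t) * u0 + conv
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new

print(u[0], u[-1])   # u stays near u0 at t = 0 and decays toward 0
```

The iterates converge geometrically with ratio about $|a|/l$, and the limit decays to zero, illustrating how existence of the fixed point in the space of functions vanishing at infinity encodes stability.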
Note that the notation ${\mathit{\tau}}^{-\beta}$ represents the matrix of elements $1/{\tau}_{ij}^{\beta}$.
Remark 2 For example, in the 2D one-population case, for $\tau (\mathbf{r},\overline{\mathbf{r}})=c\parallel \mathbf{r}-\overline{\mathbf{r}}\parallel $, we have $0\le \beta <1$.
Note the slight abuse of notation, namely ${(\tilde{\mathbf{J}}(\mathbf{r},\overline{\mathbf{r}}){\int}_{t-\mathit{\tau}(\mathbf{r},\overline{\mathbf{r}})}^{t}ds\phantom{\rule{0.2em}{0ex}}\mathbf{U}(\overline{\mathbf{r}},s))}_{i}={\sum}_{j}{\tilde{\mathbf{J}}}_{ij}(\mathbf{r},\overline{\mathbf{r}}){\int}_{t-{\mathit{\tau}}_{ij}(\mathbf{r},\overline{\mathbf{r}})}^{t}ds\phantom{\rule{0.2em}{0ex}}{\mathbf{U}}_{j}(\overline{\mathbf{r}},s)$.
Lemma B.3 in Appendix B.2 yields the upper bound ${\parallel \mathbf{Z}(t)\parallel}_{\mathcal{F}}\le {\tau}_{m}^{\frac{3}{2}+\beta}{\parallel \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}{sup}_{s\in [t-{\tau}_{m},t]}{\parallel \mathbf{U}(s)\parallel}_{\mathcal{F}}$. This shows that ∀t, $\mathbf{Z}(t)\in \mathcal{F}$.
We have the following lemma.
Lemma 3.12 The formulation (9) is equivalent to (4).
which allows us to conclude. □
Using the two integral formulations of (4) we obtain sufficient conditions of stability, as stated in the following proposition:
 1.
 2. ${\parallel \tilde{\mathbf{J}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}<{min}_{i}{l}_{i}$,
then ${\mathbf{V}}^{f}$ is asymptotically stable for (3).
Proof We start with the first condition.
We define ${\mathbf{P}}_{2}$ on ${\mathcal{S}}_{\varphi}$.
For all $\psi \in {\mathcal{S}}_{\varphi}$ we have ${\mathbf{P}}_{2}\psi \in \mathcal{B}$ and $({\mathbf{P}}_{2}\psi )(0)=\varphi (0)$. We want to show that ${\mathbf{P}}_{2}{\mathcal{S}}_{\varphi}\subset {\mathcal{S}}_{\varphi}$. We prove two properties.
1. ${\mathbf{P}}_{2}\psi $ tends to zero at infinity.
Choose $\psi \in {\mathcal{S}}_{\varphi}$.
Using Lemma B.3 in Appendix B.2, we have $\mathbf{Z}(t)\to 0$ as $t\to \infty $.
From (9), it follows that ${\mathbf{P}}_{2}\psi \to 0$ when $t\to \infty $.
Since ${\mathbf{P}}_{2}\psi $ is continuous and has a limit as $t\to \infty $, it is bounded; therefore ${\mathbf{P}}_{2}:{\mathcal{S}}_{\varphi}\to {\mathcal{S}}_{\varphi}$.
2. ${\mathbf{P}}_{2}$ is contracting on ${\mathcal{S}}_{\varphi}$ .
We conclude from the Picard fixed point theorem that the operator ${\mathbf{P}}_{2}$ has a unique fixed point in ${\mathcal{S}}_{\varphi}$.
where $\mathbf{U}(t,\varphi )$ is the solution of (4).
As ${\mathbf{U}}^{\varphi ,\epsilon}(t)\to 0$ in $\mathcal{F}$ implies ${\mathbf{U}}_{t}^{\varphi ,\epsilon}\to 0$ in $\mathcal{C}$, we have proved the asymptotic stability for the linearized equation.
The proof of the second property is straightforward. If 0 is asymptotically stable for (4), all the CVs have negative real parts, and Corollary 3.8 indicates that ${\mathbf{V}}^{f}$ is asymptotically stable for (3).
The second condition says that ${\mathbf{P}}_{1}\psi ={e}^{-{\mathbf{L}}_{0}t}\varphi (0)+{\int}_{0}^{t}{e}^{-{\mathbf{L}}_{0}(t-s)}({\tilde{\mathbf{L}}}_{1}\psi )(s)\phantom{\rule{0.2em}{0ex}}ds$ is a contraction because .
The asymptotic stability follows using the same arguments as in the case of ${\mathbf{P}}_{2}$. □
We next simplify the first condition of the previous proposition to make it more amenable to numerics.
Corollary 3.14 Suppose that$\forall t\ge 0$, $\epsilon >0$.
If there exist $\alpha <1$, $\beta >0$ such that ${\tau}_{m}^{\frac{3}{2}+\beta}{\parallel \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}(1+\frac{{M}_{\epsilon}}{\epsilon}{\parallel \tilde{\mathbf{J}}-{\mathbf{L}}_{0}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})})\le \alpha $, then ${\mathbf{V}}^{f}$ is asymptotically stable.
Proof This corollary follows immediately from the following upper bound of the integral . Then if there exist $\alpha <1$, $\beta >0$ such that ${\tau}_{m}^{\frac{3}{2}+\beta}{\parallel \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}(1+\frac{{M}_{\epsilon}}{\epsilon}{\parallel \tilde{\mathbf{J}}-{\mathbf{L}}_{0}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})})\le \alpha $, condition 1 in Proposition 3.13 is satisfied, from which the asymptotic stability of ${\mathbf{V}}^{f}$ follows. □
Notice that $\epsilon >0$ is equivalent to $max\Re \Sigma (-{\mathbf{L}}_{0}+\tilde{\mathbf{J}})<0$. The previous corollary is useful in at least the following cases:

If $\tilde{\mathbf{J}}-{\mathbf{L}}_{0}$ is diagonalizable, with associated eigenvalues/eigenvectors ${\lambda}_{n}\in \mathbb{C}$, ${e}_{n}\in \mathcal{F}$, then ${e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})t}={\sum}_{n}{e}^{{\lambda}_{n}t}{e}_{n}\otimes {e}_{n}$ and .

If ${\mathbf{L}}_{0}={l}_{0}\mathrm{Id}$ and the range of $\tilde{\mathbf{J}}$ is finite dimensional: $\tilde{\mathbf{J}}(\mathbf{r},{\mathbf{r}}^{\prime})={\sum}_{k,l=1}^{N}{J}_{kl}{e}_{k}(\mathbf{r})\otimes {e}_{l}({\mathbf{r}}^{\prime})$ where ${({e}_{k})}_{k\in \mathbb{N}}$ is an orthonormal basis of $\mathcal{F}$, then ${e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})t}={e}^{-{l}_{0}t}{e}^{\tilde{\mathbf{J}}t}$ and . Let us write $J={({J}_{kl})}_{k,l=1,\dots ,N}$ for the matrix associated with $\tilde{\mathbf{J}}$ (see above). Then ${e}^{\tilde{\mathbf{J}}t}$ is also a compact operator of finite rank and . Finally, it gives .

If $\tilde{\mathbf{J}}-{\mathbf{L}}_{0}$ is self-adjoint, then it is diagonalizable and we can choose $\epsilon =-max\Re \Sigma (-{\mathbf{L}}_{0}+\tilde{\mathbf{J}})$, ${M}_{\epsilon}=1$.
Suppose that ${\mathcal{L}}_{0}$ is diagonalizable; then , where ${\parallel \mathcal{U}\parallel}_{{(\mathcal{F})}^{{d}_{s}}}\equiv {\sum}_{k=1}^{{d}_{s}}{\parallel {\mathcal{U}}_{k}\parallel}_{\mathcal{F}}$ and $min\Re \Sigma ({\mathcal{L}}_{0})={max}_{k}\Re Root({P}_{k})$. Also notice that $\tilde{\mathcal{J}}={\tilde{\mathcal{L}}}_{1}{|}_{\mathcal{F}}$, . Then using the same functionals as in the proof of Proposition 3.13, we can find two bounds for the stability of a stationary state ${\mathbf{V}}^{f}$:

Suppose that $max\Re \Sigma (\tilde{\mathcal{J}}-{\mathcal{L}}_{0})<0$, that is, ${\mathbf{V}}^{f}$ is stable for the nondelayed equation, where ${(\tilde{\mathcal{J}})}_{k,l=1,\dots ,{d}_{s}}={({\delta}_{k={d}_{s},l=1}\tilde{\mathbf{J}})}_{k,l=1,\dots ,{d}_{s}}$. If there exist $\alpha <1$, $\beta >0$ such that .

${\parallel \tilde{\mathbf{J}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}<-{max}_{k}\Re Root({P}_{k})$.
To conclude, we have found an easy-to-compute criterion for the stability of the persistent state ${\mathbf{V}}^{f}$. Computing the CVs of neural field equations for many different parameter values in order to delineate the region of stability can indeed be cumbersome, whereas the conditions in Corollary 3.14 are very easy to evaluate numerically.
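As an illustration of this numerical simplicity, the two quantities entering the delay-dependent condition, ${\tau}_{m}$ and ${\parallel \tilde{\mathbf{J}}/{\mathit{\tau}}^{\beta}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}$, can be evaluated on a grid; the Gaussian connectivity and affine delay below are illustrative choices, not those of the paper's example.

```python
# Grid evaluation of the quantities entering the delay-dependent bound of
# Corollary 3.14 for a scalar field on Omega = [0, 1]: the maximal delay
# tau_m and the L^2(Omega^2) norm of J / tau**beta (trapezoidal rule in
# both variables).  J and tau are illustrative choices.
import numpy as np

def bound_quantities(J, tau, beta, n=200):
    """Return (tau_m, L2 norm of J / tau**beta over Omega^2) for Omega = [0, 1]."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    X, Y = np.meshgrid(x, x, indexing="ij")
    T = tau(X, Y)
    F = (J(X, Y) / T**beta) ** 2
    w = np.ones(n)
    w[0] = w[-1] = 0.5             # trapezoidal weights in each variable
    norm2 = h * h * np.einsum("i,j,ij->", w, w, F)
    return T.max(), np.sqrt(norm2)

J = lambda x, y: np.exp(-((x - y) ** 2) / 0.1)   # illustrative connectivity
tau = lambda x, y: 0.1 + 0.5 * np.abs(x - y)     # tau > 0 keeps tau**-beta finite

beta = 0.5
tau_m, nrm = bound_quantities(J, tau, beta)
delay_term = tau_m ** (1.5 + beta) * nrm         # tau_m^(3/2+beta) * ||J/tau^beta||
```

Scanning `delay_term` (together with the semigroup factor of Corollary 3.14) over a range of parameters is far cheaper than recomputing the CVs at each point.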
The conditions in Proposition 3.13 and Corollary 3.14 define a set of parameters for which ${\mathbf{V}}^{f}$ is stable. Notice that these conditions are only sufficient: if they are violated, ${\mathbf{V}}^{f}$ may still remain stable. In order to find out whether the persistent state is destabilized we have to look at the characteristic values. Condition 1 in Proposition 3.13 indicates that if ${\mathbf{V}}^{f}$ is a stable point for the nondelayed equation (see [18]), it is also stable for the delayed equation. Thus, according to this condition, it is not possible to destabilize a stable persistent state by the introduction of small delays, which is indeed meaningful from the biological viewpoint. Moreover, this condition gives an indication of the amount of delay one can introduce without changing the stability.
Condition 2 is not very useful as it is independent of the delays: no matter what they are, the stable point ${\mathbf{V}}^{f}$ will remain stable. Also, if this condition is satisfied there is a unique stationary solution (see [18]) and the dynamics is trivial, that is, all trajectories converge to the unique stationary point.
3.3 Summary of the different bounds and conclusion
The next proposition summarizes the results we have obtained in Proposition 3.13 and Corollary 3.14 for the stability of a stationary solution.
 1.
There exist $\epsilon >0$ such that and $\alpha <1$, $\beta >0$ such that ${\tau}_{m}^{\frac{3}{2}+\beta}{\parallel \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}(1+\frac{{M}_{\epsilon}}{\epsilon}{\parallel \tilde{\mathbf{J}}-{\mathbf{L}}_{0}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})})\le \alpha $,
 2. ${\parallel \tilde{\mathbf{J}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}<{min}_{i}{l}_{i}$,
then ${\mathbf{V}}^{f}$ is asymptotically stable for (3).
The only general results known so far for the stability of the stationary solutions are those of Atay and Hutt (see, for example, [20]): using the CVs, in the same way as we did in the previous section, they derived a bound similar to condition 2 in Proposition 3.15, involving the ${\mathrm{L}}^{1}$-norm of the connectivity function J, but no proof of stability was given. Thus our contribution with respect to condition 2 is that, once it is satisfied, the stationary solution is asymptotically stable: up until now this was numerically inferred on the basis of the CVs. We have proved it in two ways, first by using the CVs, and second by using the fixed point method, which has the advantage of making the proof essentially trivial.
Condition 1 is of interest because it allows one to estimate the maximal propagation delay (equivalently, the minimal propagation speed) that does not destabilize the stationary state. Notice that this bound, though very easy to compute, overestimates the minimal speed. As mentioned above, the bounds in condition 1 are sufficient conditions for the stability of the stationary state ${\mathbf{V}}^{f}$. In order to evaluate the conservativeness of these bounds, we need to compare them to the stability predicted by the CVs. This is done in the next section.
4 Numerical application: neural fields on a ring
In order to evaluate the conservativeness of the bounds derived above we compute the CVs in a numerical example. This can be done in two ways:

Solve numerically the nonlinear equation satisfied by the CVs. This is possible when one has an explicit expression for the eigenvectors and periodic boundary conditions. It is the method used in [9].

Discretize the history space $\mathcal{C}$ in order to obtain a matrix approximation ${\mathbf{A}}_{N}$ of the linear operator A: the CVs are then approximated by the eigenvalues of ${\mathbf{A}}_{N}$. Following the scheme of [36], it can be shown that the eigenvalues of ${\mathbf{A}}_{N}$ converge to the CVs in $O(\frac{1}{{N}^{N}})$ for a suitable discretization of $\mathcal{C}$. One drawback of this method is the size of ${\mathbf{A}}_{N}$, which can be very large in the case of several neuron populations in a two-dimensional cortex. A recent improvement (see [38]), based on a clever factorization of ${\mathbf{A}}_{N}$, allows a much faster computation of the CVs: this is the scheme we have been using.
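For a scalar delayed equation the discretization just described can be sketched in a few lines; the test equation $\dot{u}(t)=-lu(t)+ju(t-\tau)$ and the Chebyshev collocation below are an illustrative, minimal variant of the scheme of [36], not the actual implementation.

```python
# Pseudospectral approximation of the generator A for the scalar DDE
# u'(t) = -l*u(t) + j*u(t - tau): collocate the history segment on
# Chebyshev nodes mapped to [-tau, 0], differentiate with the Chebyshev
# matrix, and replace the row at theta = 0 by the equation itself.  The
# eigenvalues of A_N then approximate the characteristic values.
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the nodes cos(j*pi/N), j = 0..N."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

def dde_spectrum(l, j, tau, N=20):
    """Eigenvalues of the (N+1) x (N+1) matrix A_N approximating the CVs."""
    D, _ = cheb(N)
    A = (2.0 / tau) * D      # d/dtheta on [-tau, 0]; node 0 maps to theta = 0
    A[0, :] = 0.0
    A[0, 0] = -l             # row 0 encodes u'(0) = -l*u(0) + j*u(-tau)
    A[0, -1] = j
    return np.linalg.eigvals(A)

lam = dde_spectrum(l=1.0, j=0.5, tau=1.0)
rightmost = lam[np.argmax(lam.real)]   # should satisfy lam = -1 + 0.5*exp(-lam)
```

The rightmost eigenvalue can be checked against the exact characteristic equation $\lambda =-1+0.5{e}^{-\lambda}$; its negative real part confirms stability for these illustrative parameters.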
The Matlab program used to compute the right-hand side of (1) calls C++ code that can be run on multiple processors (with the OpenMP library) to speed up computations. It uses a trapezoidal rule to compute the integral. Matlab's time stepper dde23 is used to integrate the equations.
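A minimal Python transcription of this right-hand-side evaluation (scalar field, trapezoidal quadrature; the connectivity, delay and sigmoid below are illustrative, and the history is supplied as a callable):

```python
# Evaluate the right-hand side of a scalar delayed neural field,
# dV/dt(x, t) = -l*V(x, t) + integral over Omega of J(x, y) S(V(y, t - tau(x, y))) dy,
# on a uniform grid with the trapezoidal rule, as in the text.  V_hist
# returns the (possibly delayed) value of V at grid index jj and time t.
import numpy as np

def rhs(t, V_now, x, J, tau, V_hist, S, l=1.0):
    n = x.size
    h = x[1] - x[0]
    w = np.full(n, h)
    w[0] = w[-1] = h / 2.0                       # trapezoidal weights
    out = np.empty(n)
    for i in range(n):
        delayed = np.array([V_hist(jj, t - tau(x[i], x[jj])) for jj in range(n)])
        out[i] = -l * V_now[i] + np.sum(w * J(x[i], x) * S(delayed))
    return out

x = np.linspace(0.0, 1.0, 51)
J = lambda xi, y: np.exp(-np.abs(xi - y))        # illustrative connectivity
tau = lambda xi, yj: 0.1 * abs(xi - yj)          # distance-proportional delay
S = np.tanh                                      # sigmoid with S(0) = 0
val = rhs(0.0, np.zeros(51), x, J, tau, lambda jj, t: 0.0, S)
```

In a full simulation, `V_hist` interpolates the stored solution segment; a DDE stepper such as dde23 plays that role.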
In order to make the computation of the eigenvectors very straightforward, we study a network on a ring, but notice that all the tools (analytical/numerical) presented here also apply to a generic cortex. We restrict our study to scalar neural fields, $\Omega \subset \mathbb{R}$, with one neuronal population, $p=1$. With this in mind the connectivity is chosen to be homogeneous, $J(x,y)=J(x-y)$, with J even. To respect the topology of the ring, we assume the same for the propagation delay function $\tau (x,y)$.
where the sigmoid ${S}_{0}$ satisfies ${S}_{0}(0)=0$.
Remember that (13) has a Lyapunov functional when $c=0$ and that all trajectories are bounded. The trajectories of the nondelayed form of (13) are heteroclinic orbits and no nonconstant periodic orbit is possible.
where $\hat{J}$ is the Fourier transform of J and ${s}_{1}\equiv {S}_{0}^{\prime}(0)$. This nonlinear scalar equation is solved with the Matlab toolbox TraceDDE (see [36]). Recall that the eigenvectors of A are given by the functions $\theta \to {e}^{\lambda \theta}cos(nx)$, $\theta \to {e}^{\lambda \theta}sin(nx)\in \mathcal{C}$, where λ is a solution of (14). A bifurcation point is a pair $(c,\sigma )$ for which equations (14) have a solution with zero real part. Bifurcations are important because they signal a change in stability: a set of parameters ensuring stability is enclosed (if bounded) by bifurcation curves. Notice that if ${\sigma}_{0}$ is a bifurcation point in the case $c=0$, it remains a bifurcation point for the delayed system for all c; hence, for all c, $\sigma ={\sigma}_{0}$ implies $0\in \Sigma (\mathbf{A})$. This is why there is a bifurcation line $\sigma ={\sigma}_{0}$ in the bifurcation diagrams shown later.
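Equations of this kind can also be solved with a simple Newton iteration in the complex plane. The characteristic function used below, $f(\lambda )=\lambda +1-{s}_{1}{\int}_{-\pi}^{\pi}J(z){e}^{-\lambda c|z|}cos(nz)\phantom{\rule{0.2em}{0ex}}dz$, is a plausible form for the ring model but, like the Gaussian J and the parameter values, it is an illustrative assumption rather than the exact equation (14).

```python
# Newton iteration in the complex plane for a delayed characteristic
# equation of the ring-model type.  The equation, the Gaussian
# connectivity and the gain s1 are illustrative assumptions.
import numpy as np

z = np.linspace(-np.pi, np.pi, 2001)
h = z[1] - z[0]
w = np.full(z.size, h)
w[0] = w[-1] = h / 2.0                 # trapezoidal weights
Jz = np.exp(-z**2)                     # illustrative even connectivity

def f(lam, n=1, c=1.0, s1=2.0):
    """Characteristic function lam + 1 - s1 * integral of J(z) e^(-lam c |z|) cos(nz)."""
    integral = np.sum(w * Jz * np.exp(-lam * c * np.abs(z)) * np.cos(n * z))
    return lam + 1.0 - s1 * integral

def newton(lam0, tol=1e-12, max_iter=100):
    lam = complex(lam0)
    for _ in range(max_iter):
        d = 1e-7                       # central-difference derivative (f is analytic)
        deriv = (f(lam + d) - f(lam - d)) / (2.0 * d)
        step = f(lam) / deriv
        lam -= step
        if abs(step) < tol:
            return lam
    raise RuntimeError("Newton iteration did not converge")

root = newton(0.0)   # a real CV; here Re(root) > 0, so this gain is destabilizing
```

Sweeping the starting point over a grid of the complex plane, and the parameters $(c,\sigma )$ over their range, is how bifurcation curves like those of Figure 2 can be traced.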
The first bound gives the minimal velocity $1/c$ below which the stationary state might be unstable; in this case, the state is in fact stable even for smaller speeds, as one can see from the CV boundary. Notice that in the parameter domain defined by the two conditions bound.1. and bound.2., the dynamics is very simple: it is characterized by a unique and asymptotically stable stationary state, ${V}^{f}=0$.
Notice that the graph of the CVs shown in the right-hand part of Figure 2 features some interesting points, for example, the Fold-Hopf point at the intersection of the Pitchfork line and the Hopf curve. It is also possible that the multiplicity of the 0 eigenvalue could change on the Pitchfork line (P) to yield a Bogdanov-Takens point.
These numerical simulations reveal that the Lyapunov function derived in [39] is likely to be incorrect. Indeed, if such a function existed, since its value decreases along trajectories, it would have to be constant on any periodic orbit, which is impossible. The third plot in Figure 3 strongly suggests that we have found an oscillatory trajectory produced by a Hopf bifurcation (which we did not prove mathematically): this oscillatory trajectory converges to a periodic orbit, which contradicts the existence of a Lyapunov functional such as the one proposed in [39].
Let us comment on the tightness of the delay-dependent bound: as shown in Proposition 3.13, this bound involves only the maximum delay value ${\tau}_{m}$ and the norm ${\parallel \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}$; hence the specific shape of the delay function, that is, $\tau (\mathbf{r},\overline{\mathbf{r}})=c{\parallel \mathbf{r}-\overline{\mathbf{r}}\parallel}_{2}$, is not completely taken into account in the bound. We can imagine many different delay functions with the same values of ${\tau}_{m}$ and ${\parallel \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}$ that produce possibly large changes in the dynamical portrait. For example, in the previous numerical application the singularity $\sigma ={\sigma}_{0}$, corresponding to the fact that $0\in {\Sigma}_{p}(\mathbf{A})$, is independent of the details of the shape of the delay function; however, for specific delay functions, the multiplicity of this zero eigenvalue could change as in the Bogdanov-Takens bifurcation, which involves changes in the dynamical portrait compared to the pitchfork bifurcation. Similarly, an additional purely imaginary eigenvalue could emerge (as for $c\approx 3.7$ in the numerical example), leading to a Fold-Hopf bifurcation. These instabilities depend on the expression of the delay function (and the connectivity function as well). These reasons explain why the bound in Proposition 3.13 is not very tight.
This suggests another way to attack the problem of the stability of fixed points: one could look for connectivity functions $\tilde{\mathbf{J}}$ with the following property: for all delay functions τ, the linearized equation (4) does not possess ‘unstable solutions’, that is, for all delay functions τ, $\Re {\Sigma}_{p}(\mathbf{A})<0$. In the literature (see [40, 41]), this is termed all-delay stability or delay-independent stability. These remain questions for future work.
5 Conclusion
We have developed a theoretical framework for the study of neural field equations with propagation delays. This has allowed us to prove the existence, uniqueness, and boundedness of the solutions to these equations under fairly general hypotheses.
We have then studied the stability of the stationary solutions of these equations. We have proved that the CVs are sufficient to characterize the linear stability of the stationary states. This was done using semigroup theory (see [27]).
By formulating the stability of the stationary solutions as a fixed point problem we have found delay-dependent sufficient conditions. These conditions involve all the parameters of the delayed neural field equations: the connectivity function, the nonlinear gain and the delay function. Albeit seemingly very conservative, they are useful in order to avoid the numerically intensive computation of the CVs.
From the numerical viewpoint we have used two algorithms [36, 38] to compute the eigenvalues of the linearized problem in order to evaluate the conservativeness of our conditions. A potential application is the study of the bifurcations of the delayed neural field equations.
By providing easy-to-compute sufficient conditions to quantify the impact of the delays on neural field equations, we hope that our work will improve the study of models of cortical areas in which the propagation delays have so far been somewhat neglected due to a partial lack of theory.
Appendix A: Operators and their spectra
We recall and gather in this appendix a number of definitions, results and hypotheses that are used in the body of the article, to make it more self-sufficient.
Definition A.1 An operator$T\in \mathcal{L}(E,F)$, E, F being Banach spaces, is closed if its graph is closed in the direct sum$E\oplus F$.
Definition A.3 A semigroup${(\mathbf{T}(t))}_{t\ge 0}$on a Banach space E is strongly continuous if$\forall x\in E$, $t\to T(t)x$is continuous from${\mathbb{R}}_{+}$to E.
Definition A.4 A semigroup${(\mathbf{T}(t))}_{t\ge 0}$on a Banach space E is norm continuous if$t\to T(t)$is continuous from${\mathbb{R}}_{+}$to$\mathcal{L}(E)$. It is said to be eventually norm continuous if it is norm continuous for$t>{t}_{0}\ge 0$.
Definition A.5 A closed operator$T\in \mathcal{L}(E)$of a Banach space E is Fredholm if$dim\mathcal{N}(T)$and$codim\mathcal{R}(T)$are finite and$\mathcal{R}(T)$is closed in E.
Definition A.6 A closed operator$T\in \mathcal{L}(E)$of a Banach space E is semi-Fredholm if$dim\mathcal{N}(T)$or$codim\mathcal{R}(T)$is finite and$\mathcal{R}(T)$is closed in E.
Definition A.7 If$T\in \mathcal{L}(E)$is a closed operator of a Banach space E, the essential spectrum${\Sigma}_{\mathit{ess}}(T)$is the set of λs in$\mathbb{C}$such that$\lambda \mathrm{Id}-T$is not semi-Fredholm, that is, either$\mathcal{R}(\lambda \mathrm{Id}-T)$is not closed or$\mathcal{R}(\lambda \mathrm{Id}-T)$is closed but$dim\mathcal{N}(\lambda \mathrm{Id}-T)=codim\mathcal{R}(\lambda \mathrm{Id}-T)=\infty $.
Appendix B: The Cauchy problem
B.1 Boundedness of solutions
We prove Lemma B.2 which is used in the proof of the boundedness of the solutions to the delayed neural field equations (1) or (3).
Lemma B.1 We have ${\mathbf{L}}_{1}\in \mathcal{L}(\mathcal{C},\mathcal{F})$ and .
Proof

We first check that ${\mathbf{L}}_{1}$ is well defined: if $\psi \in \mathcal{C}$ then ψ is measurable (it is Ω-measurable by definition and $[-{\tau}_{m},0]$-measurable by continuity) on $\Omega \times [-{\tau}_{m},0]$, so that the integral in the definition of ${\mathbf{L}}_{1}$ is meaningful. As τ is continuous, it follows that ${\psi}^{d}:(\mathbf{r},\overline{\mathbf{r}})\to \psi (\overline{\mathbf{r}},-\mathit{\tau}(\mathbf{r},\overline{\mathbf{r}}))$ is measurable on ${\Omega}^{2}$. Furthermore ${({\psi}^{d})}^{2}\in {\mathbf{L}}^{1}({\Omega}^{2},{\mathbb{R}}^{p\times p})$.

We now show that $\mathbf{J}\cdot {\psi}^{d}\in \mathcal{F}$. We have for $\psi \in \mathcal{C}$, ${\parallel {\mathbf{L}}_{1}\psi \parallel}_{\mathcal{F}}^{2}={\int}_{\Omega}d\mathbf{r}\phantom{\rule{0.2em}{0ex}}{\sum}_{i}{({\sum}_{j}{\int}_{\Omega}d\overline{\mathbf{r}}\phantom{\rule{0.2em}{0ex}}{\mathbf{J}}_{ij}(\mathbf{r},\overline{\mathbf{r}}){\psi}_{j}^{d}(\mathbf{r},\overline{\mathbf{r}}))}^{2}$. With Cauchy-Schwarz:$\begin{array}{r}|\sum _{j}{\int}_{\Omega}d\overline{\mathbf{r}}\phantom{\rule{0.2em}{0ex}}{\mathbf{J}}_{ij}(\mathbf{r},\overline{\mathbf{r}}){\psi}_{j}^{d}(\mathbf{r},\overline{\mathbf{r}})|\\ \phantom{\rule{1em}{0ex}}\le \sum _{j}{\int}_{\Omega}d\overline{\mathbf{r}}\phantom{\rule{0.2em}{0ex}}|{\mathbf{J}}_{ij}(\mathbf{r},\overline{\mathbf{r}})||{\psi}_{j}^{d}(\mathbf{r},\overline{\mathbf{r}})|\\ \phantom{\rule{1em}{0ex}}\le \sum _{j}\sqrt{{\int}_{\Omega}d\overline{\mathbf{r}}\phantom{\rule{0.2em}{0ex}}{\mathbf{J}}_{ij}{(\mathbf{r},\overline{\mathbf{r}})}^{2}}\sqrt{{\int}_{\Omega}d\overline{\mathbf{r}}\phantom{\rule{0.2em}{0ex}}{\psi}_{j}^{d}{(\mathbf{r},\overline{\mathbf{r}})}^{2}}\\ \phantom{\rule{1em}{0ex}}\le \sqrt{\sum _{j}{\int}_{\Omega}d\overline{\mathbf{r}}\phantom{\rule{0.2em}{0ex}}{\mathbf{J}}_{ij}{(\mathbf{r},\overline{\mathbf{r}})}^{2}}\sqrt{\sum _{j}{\int}_{\Omega}d\overline{\mathbf{r}}\phantom{\rule{0.2em}{0ex}}{\psi}_{j}^{d}{(\mathbf{r},\overline{\mathbf{r}})}^{2}}.\end{array}$(15)
and ${\mathbf{L}}_{1}$ is continuous.
□
Lemma B.2 We have ${\langle {\mathbf{L}}_{1}\mathbf{S}({\mathbf{V}}_{t}),\mathbf{V}(t)\rangle}_{\mathcal{F}}\le \sqrt{p|\Omega |}{\parallel \mathbf{J}\parallel}_{\mathcal{F}}{\parallel \mathbf{V}(t)\parallel}_{\mathcal{F}}$.
Proof By the Cauchy-Schwarz inequality and Lemma B.1: ${\langle {\mathbf{L}}_{1}\mathbf{S}({\mathbf{V}}_{t}),\mathbf{V}(t)\rangle}_{\mathcal{F}}\le {\parallel {\mathbf{L}}_{1}\mathbf{S}({\mathbf{V}}_{t})\parallel}_{\mathcal{F}}{\parallel \mathbf{V}(t)\parallel}_{\mathcal{F}}\le \sqrt{p|\Omega |}{\parallel \mathbf{J}\parallel}_{\mathcal{F}}{\parallel \mathbf{V}(t)\parallel}_{\mathcal{F}}$ because S is bounded by 1. □
B.2 Stability
In this section we prove Lemma B.3 which is central in establishing the first sufficient condition in Proposition 3.13.
and allows us to conclude. □
Declarations
Acknowledgements
We wish to thank Elias Jarlebring, who provided his program for computing the CVs.
This work was partially supported by the ERC grant 227747 (NERVI) and the EC IP project #015879 (FACETS).
References
 1. Wilson H, Cowan J: A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Biol. Cybern. 1973, 13(2):55–80.
 2. Amari SI: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern. 1977, 27(2):77–87.
 3. Curtu R, Ermentrout B: Pattern formation in a network of excitatory and inhibitory cells with adaptation. SIAM J. Appl. Dyn. Syst. 2004, 3:191.
 4. Kilpatrick Z, Bressloff P: Effects of synaptic depression and adaptation on spatiotemporal dynamics of an excitatory neuronal network. Physica D 2010, 239(9):547–560.
 5. Ben-Yishai R, Bar-Or R, Sompolinsky H: Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. USA 1995, 92(9):3844–3848.
 6. Bressloff P, Cowan J, Golubitsky M, Thomas P, Wiener M: Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Philos. Trans. R. Soc. Lond. B, Biol. Sci. 2001, 306(1407):299–330.
 7. Coombes S, Laing C: Delays in activity based neural networks. Philos. Trans. R. Soc. Lond. A 2009, 367:1117–1129.
 8. Roxin A, Brunel N, Hansel D: Role of delays in shaping spatiotemporal dynamics of neuronal activity in large networks. Phys. Rev. Lett. 2005, 94(23).
 9. Venkov N, Coombes S, Matthews P: Dynamic instabilities in scalar neural field equations with space-dependent delays. Physica D 2007, 232:1–15.
 10. Jirsa V, Kelso J: Spatiotemporal pattern formation in neural systems with heterogeneous connection topologies. Phys. Rev. E 2000, 62(6):8462–8465.
 11. Wu J: Symmetric functional differential equations and neural networks with memory. Trans. Am. Math. Soc. 1998, 350(12):4799–4838.
 12. Bélair J, Campbell S, Van Den Driessche P: Frustration, stability, and delay-induced oscillations in a neural network model. SIAM J. Appl. Math. 1996, 245–255.
 13. Bélair J, Campbell S: Stability and bifurcations of equilibria in a multiple-delayed differential equation. SIAM J. Appl. Math. 1994, 1402–1424.
 14. Campbell S, Ruan S, Wolkowicz G, Wu J: Stability and bifurcation of a simple neural network with multiple time delays. In Differential Equations with Application to Biology; 1999:65–79.
 15. Atay FM, Hutt A: Neural fields with distributed transmission speeds and long-range feedback delays. SIAM J. Appl. Dyn. Syst. 2006, 5(4):670–698.
 16. Budd J, Kovács K, Ferecskó A, Buzás P, Eysel U, Kisvárday Z: Neocortical axon arbors trade-off material and conduction delay conservation. PLoS Comput. Biol. 2010, 6(3):e1000711.
 17. Faugeras O, Grimbert F, Slotine JJ: Absolute stability and complete synchronization in a class of neural fields models. SIAM J. Appl. Math. 2008, 61:205–250.
 18. Faugeras O, Veltz R, Grimbert F: Persistent neural states: stationary localized activity patterns in nonlinear continuous n-population, q-dimensional neural networks. Neural Comput. 2009, 21:147–187.
 19. Veltz R, Faugeras O: Local/global analysis of the stationary solutions of some neural field equations. SIAM J. Appl. Dyn. Syst. 2010, 9(3):954–998. http://link.aip.org/link/?SJA/9/954/1
 20. Atay FM, Hutt A: Stability and bifurcations in neural fields with finite propagation speed and general connectivity. SIAM J. Appl. Math. 2005, 65(2):644–666.
 21. Hutt A: Local excitation-lateral inhibition interaction yields oscillatory instabilities in nonlocally interacting systems involving finite propagation delays. Phys. Lett. A 2008, 372:541–546.
 22. Hutt A, Atay F: Effects of distributed transmission speeds on propagating activity in neural populations. Phys. Rev. E 2006, 73(021906):1–5.
 23. Coombes S, Venkov N, Shiau L, Bojak I, Liley D, Laing C: Modeling electrocortical activity through local approximations of integral neural field equations. Phys. Rev. E 2007, 76(5):051901.
 24. Bressloff P, Kilpatrick Z: Nonlocal Ginzburg-Landau equation for cortical pattern formation. Phys. Rev. E 2008, 78(4):041916.
 25. Faye G, Faugeras O: Some theoretical and numerical results for delayed neural field equations. Physica D 2010, 239(9):561–578.
 26. Ermentrout G, Cowan J: Large scale spatially organized activity in neural nets. SIAM J. Appl. Math. 1980, 1–21.
 27. Engel K, Nagel R: One-Parameter Semigroups for Linear Evolution Equations, vol. 63. Springer; 2001.
 28. Arino O, Hbid M, Dads E: Delay Differential Equations and Applications. Springer; 2006.
 29. Hale J, Lunel S: Introduction to Functional Differential Equations. Springer-Verlag; 1993.
 30. Wu J: Theory and Applications of Partial Functional Differential Equations. Springer; 1996.
 31. Diekmann O: Delay Equations: Functional, Complex, and Nonlinear Analysis. Springer; 1995.
 32. Yosida K: Functional Analysis. Classics in Mathematics. Springer; 1995. Reprint of the sixth (1980) edition.
 33. Hutt A: Finite propagation speeds in spatially extended systems. In Complex Time-Delay Systems: Theory and Applications; 2009:151.
 34. Bátkai A, Piazzera S: Semigroups for Delay Equations. A K Peters; 2005.
 35. Kato T: Perturbation Theory for Linear Operators. Springer; 1995.
 36. Breda D, Maset S, Vermiglio R: TRACE-DDE: a tool for robust analysis and characteristic equations for delay differential equations. In Topics in Time Delay Systems; 2009:145–155.
 37. Burton T: Stability by Fixed Point Theory for Functional Differential Equations. Dover Publications, Mineola, NY; 2006.
 38. Jarlebring E, Meerbergen K, Michiels W: An Arnoldi like method for the delay eigenvalue problem; 2010.
 39. Enculescu M, Bestehorn M: Liapunov functional for a delayed integro-differential equation model of a neural field. Europhys. Lett. 2007, 77:68007.
 40. Chen J, Latchman H: Asymptotic stability independent of delays: simple necessary and sufficient conditions. In Proceedings of the American Control Conference; 1994.
 41. Chen J, Xu D, Shafai B: On sufficient conditions for stability independent of delay. IEEE Trans. Autom. Control 1995, 40(9):1675–1680.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.