When studying a dynamical system, a good starting point is to look for invariant sets. Theorem 2.2 provides such an invariant set, but it is a very large one, not sufficient to convey a good understanding of the system. Other invariant sets (included in the previous one) are the stationary points. Notice that the delayed and nondelayed equations share exactly the same stationary solutions, also called persistent states. We can therefore make good use of the harvest of available results about these persistent states, which we denote by {\mathbf{V}}^{f}. Note that in most papers dealing with persistent states, the authors compute one of them and are satisfied with the study of the local dynamics around this particular stationary solution. Very few authors (we are aware only of [19, 26]) address the problem of computing the whole set of persistent states. Despite these efforts, a complete grasp of the global dynamics is still lacking. To summarize, in order to understand the impact of the propagation delays on the solutions of the neural field equations, it is necessary to know all their stationary solutions and the dynamics in the region where these stationary solutions lie. Unfortunately such knowledge is currently not available. Hence we must be content with studying the local dynamics around each persistent state (computed, for example, with the tools of [19]) with and without propagation delays. This is already, we think, a significant step forward toward understanding delayed neural field equations.
From now on we denote by {\mathbf{V}}^{f} a persistent state of (3) and study its stability.
We can identify at least three ways to do this:

1.
to derive a Lyapunov functional,

2.
to use a fixed point approach,

3.
to determine the spectrum of the infinitesimal generator associated to the linearized equation.
Previous results concerning stability bounds in delayed neural mass equations are ‘absolute’ results that do not involve the delays: they provide a sufficient condition, independent of the delays, for the stability of the fixed point (see [15, 20–22]). The bound they find is similar to our second bound in Proposition 3.13. They ‘proved’ it by showing that if the condition is satisfied, the eigenvalues of the infinitesimal generator of the semigroup of the linearized equation have negative real parts. This is not sufficient: a more complete analysis of the spectrum (for example, of its essential part) is necessary, as shown below, in order to prove that the semigroup is exponentially bounded. In our case we prove this assertion for a bounded cortex (see Section 3.1). To our knowledge it is still unknown whether it holds in the case of an infinite cortex.
These authors also provide a delay-dependent sufficient condition guaranteeing that no oscillatory instabilities can appear, that is, a condition that forbids the existence of solutions of the form {e}^{i(\mathbf{k}\cdot \mathbf{r}+\omega t)}. However, this result does not give any information regarding the stability of the stationary solution.
We use the second method cited above, the fixed point method, to prove a more general result which takes the delay terms into account. We also use the third method, the spectral method, to prove the delay-independent bound from [15, 20–22]. We then evaluate the conservativeness of these two sufficient conditions. Note that the delay-independent bound has been correctly derived in [25] using the first method, the Lyapunov method. It might be of interest to explore its potential to derive a delay-dependent bound.
We write the linearized version of (3) as follows. We choose a persistent state {\mathbf{V}}^{f} and perform the change of variable \mathbf{U}=\mathbf{V}-{\mathbf{V}}^{f}. The linearized equation reads
\{\begin{array}{c}\frac{d}{dt}\mathbf{U}(t)=-{\mathbf{L}}_{0}\mathbf{U}(t)+{\tilde{\mathbf{L}}}_{1}{\mathbf{U}}_{t}\equiv \mathbf{L}{\mathbf{U}}_{t},\hfill \\ {\mathbf{U}}_{0}=\varphi \in \mathcal{C},\hfill \end{array}
(4)
where the linear operator {\tilde{\mathbf{L}}}_{1} is given by
\{\begin{array}{c}{\tilde{\mathbf{L}}}_{1}:\mathcal{C}\u27f6\mathcal{F},\hfill \\ \varphi \to {\int}_{\Omega}\mathbf{J}(\cdot ,\overline{\mathbf{r}})D\mathbf{S}({\mathbf{V}}^{f}(\overline{\mathbf{r}}))\varphi (\overline{\mathbf{r}},-\mathit{\tau}(\cdot ,\overline{\mathbf{r}}))\phantom{\rule{0.2em}{0ex}}d\overline{\mathbf{r}}.\hfill \end{array}
It is also convenient to define the following operator:
\{\begin{array}{c}\tilde{\mathbf{J}}\stackrel{\mathrm{def}}{\equiv}\mathbf{J}\cdot DS\left({\mathbf{V}}^{f}\right):\mathcal{F}\u27f6\mathcal{F},\hfill \\ \mathbf{U}\to {\int}_{\Omega}\mathbf{J}(\cdot ,\overline{\mathbf{r}})D\mathbf{S}({\mathbf{V}}^{f}(\overline{\mathbf{r}}))\mathbf{U}(\overline{\mathbf{r}})\phantom{\rule{0.2em}{0ex}}d\overline{\mathbf{r}}.\hfill \end{array}
3.1 Principle of linear stability analysis via characteristic values
We derive the stability of the persistent state {\mathbf{V}}^{f} (see [19]) for equation (1), or equivalently (3), using the spectral properties of the infinitesimal generator. We prove that if the eigenvalues of the infinitesimal generator of the right-hand side of (4) are in the left part of the complex plane, the stationary state \mathbf{U}=0 is asymptotically stable for equation (4). This result is difficult to prove because the spectrum (the main definitions for the spectrum of a linear operator are recalled in Appendix A) of the infinitesimal generator neither reduces to the point spectrum (the set of eigenvalues of finite multiplicity), nor is it contained in a cone of the complex plane \mathbb{C} (such an operator is said to be sectorial). The ‘principle of linear stability’ is the fact that the linear stability of \mathbf{U}=0 is inherited by the state {\mathbf{V}}^{f} for the nonlinear equations (1) or (3). This result is stated in Corollaries 3.7 and 3.8.
Following [27–31], we denote by {(\mathbf{T}(t))}_{t\ge 0} the strongly continuous semigroup of (4) on \mathcal{C} (see Definition A.3 in Appendix A) and by A its infinitesimal generator. By definition, if U is the solution of (4), we have {\mathbf{U}}_{t}=\mathbf{T}(t)\varphi. In order to prove linear stability, we need to find a condition on the spectrum \Sigma (\mathbf{A}) of A which ensures that \mathbf{T}(t)\to 0 as t\to \infty.
Such a ‘principle’ of linear stability was derived in [29, 30]. Their assumptions implied that \Sigma (\mathbf{A}) was a pure point spectrum (it contained only eigenvalues) with the effect of simplifying the study of the linear stability because, in this case, one can link estimates of the semigroup T to the spectrum of A. This is not the case here (see Proposition 3.4).
When the spectrum of the infinitesimal generator does not only contain eigenvalues, we can use the result in [27, Chapter 4, Theorem 3.10 and Corollary 3.12] for eventually norm continuous semigroups (see Definition A.4 in Appendix A), which links the growth bound of the semigroup to the spectrum of A. Thus, \mathbf{U}=0 is uniformly exponentially stable for (4) if and only if
sup\Re \Sigma (\mathbf{A})<0.
(5)
We prove in Lemma 3.6 (see below) that {(\mathbf{T}(t))}_{t\ge 0} is eventually norm continuous. Let us start by computing the spectrum of A.
3.1.1 Computation of the spectrum of A
In this section we use {\mathbf{L}}_{1} for {\tilde{\mathbf{L}}}_{1} for simplicity.
Definition 3.1 We define {\mathbf{L}}_{\lambda}\in \mathcal{L}(\mathcal{F}) for \lambda \in \mathbb{C} by:
{\mathbf{L}}_{\lambda}\mathbf{U}\equiv \mathbf{L}\left({e}^{\lambda \theta}\mathbf{U}\right)\equiv -{\mathbf{L}}_{0}\mathbf{U}+{\mathbf{L}}_{1}\left({e}^{\lambda \theta}\mathbf{U}\right)=-{\mathbf{L}}_{0}\mathbf{U}+\mathbf{J}(\lambda )\mathbf{U},\phantom{\rule{1em}{0ex}}\text{where }{e}^{\lambda \theta}\mathbf{U}\text{ denotes the function }\theta \to {e}^{\lambda \theta}\mathbf{U}\in \mathcal{C},
where \mathbf{J}(\lambda ) is the compact (it is a Hilbert-Schmidt operator, see [32, Chapter X.2]) operator
\mathbf{J}(\lambda ):\mathbf{U}\to {\int}_{\Omega}\phantom{\rule{0.25em}{0ex}}\mathbf{J}(\cdot ,\overline{\mathbf{r}})DS({\mathbf{V}}^{f}(\overline{\mathbf{r}})){e}^{-\lambda \mathit{\tau}(\cdot ,\overline{\mathbf{r}})}\mathbf{U}(\overline{\mathbf{r}})\phantom{\rule{0.2em}{0ex}}d\overline{\mathbf{r}}.
We now apply results from the theory of delay equations in Banach spaces (see [27, 28, 31]) which give the expression of the infinitesimal generator \mathbf{A}\varphi =\dot{\varphi} as well as its domain of definition
Dom(\mathbf{A})=\{\varphi \in \mathcal{C}\mid \dot{\varphi}\in \mathcal{C}\text{ and }\dot{\varphi}({0}^{-})=-{\mathbf{L}}_{0}\varphi (0)+{\mathbf{L}}_{1}\varphi \}.
The spectrum \Sigma (\mathbf{A}) consists of those \lambda \in \mathbb{C} such that the operator \Delta (\lambda )\in \mathcal{L}(\mathcal{F}) defined by \Delta (\lambda )=\lambda \mathrm{Id}+{\mathbf{L}}_{0}-\mathbf{J}(\lambda ) is noninvertible. We use the following definition:
Definition 3.2 (Characteristic values (CV))
The characteristic values of A are the λ's such that\Delta (\lambda )has a kernel which is not reduced to 0, that is, such that\Delta (\lambda )is not injective.
It is easy to see that the CV are the eigenvalues of A.
There are various ways to compute the spectrum of an operator in infinite dimensions. They are related to how the spectrum is partitioned (for example, continuous spectrum, point spectrum…). In the case of operators which are compact perturbations of the identity such as Fredholm operators, which is the case here, there is no continuous spectrum. Hence the most convenient way for us is to compute the point spectrum and the essential spectrum (see Appendix A). This is what we achieve next.
Remark 1 In finite dimension (that is, dim\mathcal{F}<\infty), the spectrum of A consists only of CV. We show that this is not the case here.
Notice that most papers dealing with delayed neural field equations only compute the CV and numerically assess the linear stability (see [9, 24, 33]).
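For a scalar one-population caricature, u'(t) = -l u(t) + j u(t - τ) (our illustration, not the model studied in this paper), such a numerical computation is elementary: the CVs solve Δ(λ) = λ + l - j e^{-λτ} = 0, and a complex Newton iteration recovers both regimes of the delay-independent bound |j| < l. All parameters below are illustrative.

```python
import numpy as np

# Scalar caricature (illustrative, not the paper's model):
#   u'(t) = -l*u(t) + j*u(t - tau)
# has the characteristic equation  Delta(lam) = lam + l - j*exp(-lam*tau) = 0.
def newton_cv(l, j, tau, seed=0.0 + 0.0j, steps=50):
    """Newton iteration on the characteristic function Delta."""
    lam = complex(seed)
    for _ in range(steps):
        f = lam + l - j * np.exp(-lam * tau)
        df = 1.0 + j * tau * np.exp(-lam * tau)
        lam -= f / df
    return lam

# |j| < l: the delay-independent bound predicts stability for every tau.
lam_stable = newton_cv(l=1.0, j=0.5, tau=2.0)
# |j| > l: a CV lies in the right half-plane.
lam_unstable = newton_cv(l=1.0, j=2.0, tau=2.0)
print(lam_stable.real, lam_unstable.real)
```

For j = 0.5 the iteration converges to a CV with negative real part (about -0.22), while for j = 2 a CV with positive real part appears, in agreement with the delay-independent bound specialized to one population.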
We now show that we can link the spectral properties of A to the spectral properties of {\mathbf{L}}_{\lambda}. This is important since the latter operator is easier to handle because it acts on a Hilbert space. We start with the following lemma (see [34] for similar results in a different setting).
Lemma 3.3 \lambda \in {\Sigma}_{\mathit{ess}}(\mathbf{A})\iff \lambda \in {\Sigma}_{\mathit{ess}}({\mathbf{L}}_{\lambda}).
Proof Let us define the following operator. If \lambda \in \mathbb{C}, we define {\mathcal{T}}_{\lambda}\in \mathcal{L}(\mathcal{C},\mathcal{F}) by {\mathcal{T}}_{\lambda}(\varphi )=\varphi (0)+\mathbf{L}({\int}_{\cdot}^{0}{e}^{\lambda (\cdot -s)}\varphi (s)\phantom{\rule{0.2em}{0ex}}ds), \varphi \in \mathcal{C}. From [28, Lemma 34], {\mathcal{T}}_{\lambda} is surjective, and it is easy to check that \varphi \in \mathcal{R}(\lambda \mathrm{Id}-\mathbf{A}) iff {\mathcal{T}}_{\lambda}(\varphi )\in \mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda}), see [28, Lemma 35]. Moreover \mathcal{R}(\lambda \mathrm{Id}-\mathbf{A}) is closed in \mathcal{C} iff \mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda}) is closed in \mathcal{F}, see [28, Lemma 36].
Let us now prove the lemma. We already know that \mathcal{R}(\lambda \mathrm{Id}-\mathbf{A}) is closed in \mathcal{C} iff \mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda}) is closed in \mathcal{F}. Also, we have \mathcal{N}(\lambda \mathrm{Id}-\mathbf{A})=\{\theta \to {e}^{\theta \lambda}\mathbf{U},\mathbf{U}\in \mathcal{N}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})\}, hence dim\mathcal{N}(\lambda \mathrm{Id}-\mathbf{A})<\infty iff dim\mathcal{N}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})<\infty. It remains to check that codim\mathcal{R}(\lambda \mathrm{Id}-\mathbf{A})<\infty iff codim\mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})<\infty.
Suppose that codim\mathcal{R}(\lambda \mathrm{Id}-\mathbf{A})<\infty. There exist {\varphi}_{1},\dots ,{\varphi}_{N}\in \mathcal{C} such that \mathcal{C}=Span({\varphi}_{i})+\mathcal{R}(\lambda \mathrm{Id}-\mathbf{A}). Consider {\mathbf{U}}_{i}\equiv {\mathcal{T}}_{\lambda}({\varphi}_{i})\in \mathcal{F}. Because {\mathcal{T}}_{\lambda} is surjective, for all \mathbf{U}\in \mathcal{F} there exists \psi \in \mathcal{C} satisfying \mathbf{U}={\mathcal{T}}_{\lambda}(\psi ). We write \psi ={\sum}_{i=1}^{N}{x}_{i}{\varphi}_{i}+f, f\in \mathcal{R}(\lambda \mathrm{Id}-\mathbf{A}). Then \mathbf{U}={\sum}_{i=1}^{N}{x}_{i}{\mathbf{U}}_{i}+{\mathcal{T}}_{\lambda}(f) where {\mathcal{T}}_{\lambda}(f)\in \mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda}), that is, codim\mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})<\infty.
Suppose that codim\mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda})<\infty. There exist {\mathbf{U}}_{1},\dots ,{\mathbf{U}}_{N}\in \mathcal{F} such that \mathcal{F}=Span({\mathbf{U}}_{i})+\mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda}). As {\mathcal{T}}_{\lambda} is surjective, for all i=1,\dots ,N there exists {\varphi}_{i}\in \mathcal{C} such that {\mathbf{U}}_{i}={\mathcal{T}}_{\lambda}({\varphi}_{i}). Now consider \psi \in \mathcal{C}. {\mathcal{T}}_{\lambda}(\psi ) can be written {\mathcal{T}}_{\lambda}(\psi )={\sum}_{i=1}^{N}{x}_{i}{\mathbf{U}}_{i}+\tilde{\mathbf{U}} where \tilde{\mathbf{U}}\in \mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda}). But \psi -{\sum}_{i=1}^{N}{x}_{i}{\varphi}_{i}\in \mathcal{R}(\lambda \mathrm{Id}-\mathbf{A}) because {\mathcal{T}}_{\lambda}(\psi -{\sum}_{i=1}^{N}{x}_{i}{\varphi}_{i})=\tilde{\mathbf{U}}\in \mathcal{R}(\lambda \mathrm{Id}-{\mathbf{L}}_{\lambda}). It follows that codim\mathcal{R}(\lambda \mathrm{Id}-\mathbf{A})<\infty. □
Lemma 3.3 is the key to obtain \Sigma (\mathbf{A}). Note that it is true regardless of the form of L and could be applied to other types of delays in neural field equations. We now prove the important following proposition.
Proposition 3.4 A satisfies the following properties:

1.
{\Sigma}_{\mathit{ess}}(\mathbf{A})=\Sigma (-{\mathbf{L}}_{0})
.

2.
\Sigma (\mathbf{A}) is at most countable.

3.
\Sigma (\mathbf{A})=\Sigma (-{\mathbf{L}}_{0})\cup CV
.

4.
For \lambda \in \Sigma (\mathbf{A})\setminus \Sigma (-{\mathbf{L}}_{0}), the generalized eigenspace {\bigcup}_{k}\mathcal{N}({(\lambda \mathrm{Id}-\mathbf{A})}^{k}) is finite dimensional and \exists k\in \mathbb{N}, \mathcal{C}=\mathcal{N}({(\lambda \mathrm{Id}-\mathbf{A})}^{k})\oplus \mathcal{R}({(\lambda \mathrm{Id}-\mathbf{A})}^{k}).
Proof

1.
\lambda \in {\Sigma}_{\mathit{ess}}(\mathbf{A})\iff \lambda \in {\Sigma}_{\mathit{ess}}({\mathbf{L}}_{\lambda})={\Sigma}_{\mathit{ess}}(-{\mathbf{L}}_{0}+\mathbf{J}(\lambda ))
. We apply [35, Theorem IV.5.26]: the essential spectrum does not change under a compact perturbation. As \mathbf{J}(\lambda )\in \mathcal{L}(\mathcal{F}) is compact, we find {\Sigma}_{\mathit{ess}}(-{\mathbf{L}}_{0}+\mathbf{J}(\lambda ))={\Sigma}_{\mathit{ess}}(-{\mathbf{L}}_{0}).
Let us show that {\Sigma}_{\mathit{ess}}(-{\mathbf{L}}_{0})=\Sigma (-{\mathbf{L}}_{0}). The inclusion ‘⊂’ is trivial. Now if \lambda \in \Sigma (-{\mathbf{L}}_{0}), for example, \lambda =-{l}_{1}, then \lambda \mathrm{Id}+{\mathbf{L}}_{0}=diag(0,{l}_{2}-{l}_{1},\dots ).
Then \mathcal{R}(\lambda \mathrm{Id}+{\mathbf{L}}_{0}) is closed but {L}^{2}(\Omega ,\mathbb{R})\times \{0\}\times \cdots \times \{0\}\subset \mathcal{N}(\lambda \mathrm{Id}+{\mathbf{L}}_{0}). Hence dim\mathcal{N}(\lambda \mathrm{Id}+{\mathbf{L}}_{0})=\infty. Also \mathcal{R}(\lambda \mathrm{Id}+{\mathbf{L}}_{0})=\{0\}\times {L}^{2}(\Omega ,{\mathbb{R}}^{p-1}), hence codim\mathcal{R}(\lambda \mathrm{Id}+{\mathbf{L}}_{0})=\infty. Hence, according to Definition A.7, \lambda \in {\Sigma}_{\mathit{ess}}(-{\mathbf{L}}_{0}).

2.
We apply [35, Theorem IV.5.33], stating (in its first part) that if {\Sigma}_{\mathit{ess}}(\mathbf{A}) is at most countable, so is \Sigma (\mathbf{A}).

3.
We apply again [35, Theorem IV.5.33], stating that if {\Sigma}_{\mathit{ess}}(\mathbf{A}) is at most countable, any point in \Sigma (\mathbf{A})\setminus {\Sigma}_{\mathit{ess}}(\mathbf{A}) is an isolated eigenvalue with finite multiplicity.

4.
Because {\Sigma}_{\mathit{ess}}(\mathbf{A})\subset {\Sigma}_{\mathit{ess},\mathit{Arino}}(\mathbf{A}), we can apply [28, Theorem 2], which states precisely this property. □
As an example, Figure 1 shows the first 200 eigenvalues computed for a very simple one-dimensional model. We notice that they accumulate at \lambda =-1, which is the essential spectrum. These eigenvalues have been computed using TraceDDE [36], a very efficient tool for computing the CVs.
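This accumulation can be mimicked in a few lines (a toy computation of ours, not taken from the paper): a discretized smooth connectivity kernel behaves numerically like a compact perturbation of -l·Id, so all but finitely many eigenvalues of the perturbed matrix cluster at -l. The grid size, kernel width and l = 1 are illustrative choices.

```python
import numpy as np

# Toy illustration of Proposition 3.4: a discretized smooth (hence
# numerically compact) kernel perturbs -l*Id, and the eigenvalues of the
# sum accumulate at the "essential" value -l.
n, l = 200, 1.0
x = np.linspace(0.0, 1.0, n)
h = 1.0 / n
# Discretized Hilbert-Schmidt operator with a Gaussian kernel.
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02) * h
eigs = np.linalg.eigvals(-l * np.eye(n) + K)
# All but a handful of eigenvalues lie within 1e-6 of -l.
near = int(np.sum(np.abs(eigs + l) < 1e-6))
print(n - near, "eigenvalues away from -l")
```

The few eigenvalues that escape the cluster play the role of the characteristic values; the rest shadow the essential spectrum.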
Last but not least, we can prove that almost all CVs, that is, all except possibly a finite number of them, are located in the left part of the complex plane. This indicates that the unstable manifold is always finite dimensional for the models we are considering here.
Corollary 3.5 \mathit{Card}(\Sigma (\mathbf{A})\cap \{\lambda \in \mathbb{C},\Re \lambda >-l\})<\infty, where l={min}_{i}{l}_{i}.
Proof If \lambda =\rho +i\omega \in \Sigma (\mathbf{A}) and \rho >-l, then λ is a CV, that is, \mathcal{N}(\mathrm{Id}-{(\lambda \mathrm{Id}+{\mathbf{L}}_{0})}^{-1}\mathbf{J}(\lambda ))\ne \{0\}, stating that 1\in {\Sigma}_{P}({(\lambda \mathrm{Id}+{\mathbf{L}}_{0})}^{-1}\mathbf{J}(\lambda )) ({\Sigma}_{P} denotes the point spectrum).
But {\parallel {(\lambda \mathrm{Id}+{\mathbf{L}}_{0})}^{-1}\mathbf{J}(\lambda )\parallel}_{\mathcal{L}(\mathcal{F})}\to 0 as |\lambda |\to \infty in \{\Re \lambda >-l\}, since {\parallel \mathbf{J}(\lambda )\parallel}_{\mathcal{L}(\mathcal{F})} is bounded on this set.
Hence, for |\lambda | large enough, 1\notin {\Sigma}_{P}({(\lambda \mathrm{Id}+{\mathbf{L}}_{0})}^{-1}\mathbf{J}(\lambda )), which holds by the spectral radius inequality. This states that the CVs λ satisfying \Re \lambda >-l are located in a bounded set of the right part of \mathbb{C}; given that the CVs are isolated, there is a finite number of them. □
3.1.2 Stability results from the characteristic values
We start with a lemma stating regularity for {(\mathbf{T}(t))}_{t\ge 0}:
Lemma 3.6 The semigroup {(\mathbf{T}(t))}_{t\ge 0} of (4) is norm continuous on \mathcal{C} for t>{\tau}_{m}.
Proof We first notice that -{\mathbf{L}}_{0} generates a norm continuous semigroup (in fact a group) \mathbf{S}(t)={e}^{-t{\mathbf{L}}_{0}} on \mathcal{F} and that {\tilde{\mathbf{L}}}_{1} is continuous from \mathcal{C} to \mathcal{F}. The lemma follows directly from [27, Theorem VI.6.6]. □
Using the spectrum computed in Proposition 3.4, the previous lemma and the formula (5), we can state the asymptotic stability of the linear equation (4). Notice that because of Corollary 3.5, the supremum in (5) is in fact a max.
Corollary 3.7 (Linear stability)
Zero is asymptotically stable for (4) if and only if max\Re {\Sigma}_{p}(\mathbf{A})<0.
We conclude by showing that the computation of the characteristic values of A is enough to state the stability of the stationary solution {\mathbf{V}}^{f}.
Corollary 3.8 If max\Re {\Sigma}_{p}(\mathbf{A})<0, then the persistent solution {\mathbf{V}}^{f} of (3) is asymptotically stable.
Proof Using \mathbf{U}=\mathbf{V}-{\mathbf{V}}^{f}, we write (3) as \dot{\mathbf{U}}(t)=\mathbf{L}{\mathbf{U}}_{t}+G({\mathbf{U}}_{t}). The function G is {C}^{2} and satisfies G(0)=0, DG(0)=0 and {\parallel G({\mathbf{U}}_{t})\parallel}_{\mathcal{C}}=O({\parallel {\mathbf{U}}_{t}\parallel}_{\mathcal{C}}^{2}). We next apply a variation-of-constants formula. In the case of delayed equations, this formula is difficult to handle because the semigroup T should act on noncontinuous functions, as shown by the formula {\mathbf{U}}_{t}=\mathbf{T}(t)\varphi +{\int}_{0}^{t}\mathbf{T}(t-s)[{X}_{0}G({\mathbf{U}}_{s})]\phantom{\rule{0.2em}{0ex}}ds, where {X}_{0}(\theta )=0 if \theta \in [-{\tau}_{m},0) and {X}_{0}(0)=1. Note that the function \theta \to {X}_{0}(\theta )G({\mathbf{U}}_{s}) is not continuous at \theta =0.
It is however possible (note that a regularity condition has to be verified, but this is done easily in our case) to extend (see [34]) the semigroup \mathbf{T}(t) to the space \mathcal{F}\times {\mathrm{L}}^{2}([-{\tau}_{m},0],\mathcal{F}). We denote by \tilde{\mathbf{T}}(t) this extension, which has the same spectrum as \mathbf{T}(t). Indeed, we can consider integral solutions of (4) with initial condition {\mathbf{U}}_{0} in {\mathrm{L}}^{2}([-{\tau}_{m},0],\mathcal{F}). However, as {\mathbf{L}}_{0}{\mathbf{U}}_{0}(0) has no meaning because \varphi \to \varphi (0) is not continuous in {\mathrm{L}}^{2}([-{\tau}_{m},0],\mathcal{F}), the linear problem (4) is not well-posed in this space. This is why we have to extend the state space in order to make the linear operator in (4) continuous. Hence the correct state space is \mathcal{F}\times {\mathrm{L}}^{2}([-{\tau}_{m},0],\mathcal{F}) and any function \varphi \in \mathcal{C} is represented by the vector (\varphi (0),\varphi ). The variation-of-constants formula becomes:
{\mathbf{U}}_{t}=\mathbf{T}(t)\varphi +{\int}_{0}^{t}{\pi}_{2}(\tilde{\mathbf{T}}(t-s)(G({\mathbf{U}}_{s}),0))\phantom{\rule{0.2em}{0ex}}ds,
where {\pi}_{2} is the projector on the second component.
Now we choose \omega =-max\Re {\Sigma}_{p}(\mathbf{A})/2>0 and the spectral mapping theorem implies that there exists M>0 such that \parallel \mathbf{T}(t)\parallel \le M{e}^{-\omega t} and \parallel \tilde{\mathbf{T}}(t)\parallel \le M{e}^{-\omega t}. It follows that {\parallel {\mathbf{U}}_{t}\parallel}_{\mathcal{C}}\le M{e}^{-\omega t}({\parallel {\mathbf{U}}_{0}\parallel}_{\mathcal{C}}+{\int}_{0}^{t}{e}^{\omega s}{\parallel G({\mathbf{U}}_{s})\parallel}_{\mathcal{F}}\phantom{\rule{0.2em}{0ex}}ds) and from Theorem 2.2, {\parallel G({\mathbf{U}}_{t})\parallel}_{\mathcal{C}}=O(1), which yields {\parallel {\mathbf{U}}_{t}\parallel}_{\mathcal{C}}=O({e}^{-\omega t}) and concludes the proof. □
Finally, we can use the CVs to derive a sufficient stability result.
Proposition 3.9 If {\parallel \mathbf{J}\cdot DS({\mathbf{V}}^{f})\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbf{R}}^{p\times p})}<{min}_{i}{l}_{i}, then {\mathbf{V}}^{f} is asymptotically stable for (3).
Proof Suppose that a CV λ of positive real part exists; this gives a nonzero vector in the kernel of \Delta (\lambda ). Straightforward estimates then imply that {min}_{i}{l}_{i}\le {\parallel \mathbf{J}\cdot DS({\mathbf{V}}^{f})\parallel}_{\mathcal{F}}, a contradiction. □
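The bound of Proposition 3.9 is easy to evaluate numerically. The sketch below treats a hypothetical one-population example on Ω = [0,1]; the kernel, the sigmoid and the persistent state V^f = 0 are our illustrative choices, not taken from the paper. The L²(Ω²) norm is approximated by a Riemann sum and compared to min_i l_i.

```python
import numpy as np

# Hypothetical one-population check of the sufficient condition
#   ||J . DS(V^f)||_{L^2(Omega^2)} < min_i l_i   (Proposition 3.9).
l = 1.0
S = lambda v: 1.0 / (1.0 + np.exp(-v))        # sigmoid rate function
DS = lambda v: S(v) * (1.0 - S(v))            # its derivative, DS(0) = 1/4
n = 400
r = np.linspace(0.0, 1.0, n)
h = r[1] - r[0]
J = 0.9 * np.exp(-np.abs(r[:, None] - r[None, :]))  # connectivity kernel
Vf = np.zeros(n)                               # trivial persistent state
G = J * DS(Vf)[None, :]                        # J(r, rbar) * DS(V^f(rbar))
norm = np.sqrt(np.sum(G ** 2) * h * h)         # L^2(Omega^2) Riemann sum
print(norm, norm < l)
```

For these illustrative values the norm is well below l, so the persistent state V^f = 0 would be asymptotically stable for any delay function.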
3.1.3 Generalization of the model
In the description of our model, we have pointed out a possible generalization. It concerns the linear response of the chemical synapses, that is, the left-hand side (\frac{d}{dt}+{l}_{i}) of (1). It can be replaced by a polynomial in \frac{d}{dt}, namely {P}_{i}(\frac{d}{dt}), where the zeros of the polynomials {P}_{i} have negative real parts. Indeed, in this case, when J is small, the network is stable. We obtain a diagonal matrix \mathbf{P}(\frac{d}{dt}) such that \mathbf{P}(0)={\mathbf{L}}_{0} and change the initial condition (as in the theory of ODEs), while the history space becomes {\mathcal{C}}^{{d}_{s}}([-{\tau}_{m},0],\mathcal{F}) where {d}_{s}+1={max}_{i}deg{P}_{i}. Having all this in mind, equation (1) writes
\mathbf{P}(\frac{d}{dt})\mathbf{V}(t)={\mathbf{L}}_{1}\mathbf{S}({\mathbf{V}}_{t})+{\mathbf{I}}^{\mathit{ext}}(t).
(6)
Introducing the classical variable \mathcal{V}(t)\equiv [\mathbf{V}(t),{\mathbf{V}}^{\prime}(t),\dots ,{\mathbf{V}}^{({d}_{s})}(t)], we rewrite (6) as
\dot{\mathcal{V}}(t)=-{\mathcal{L}}_{0}\mathcal{V}(t)+{\mathcal{L}}_{1}\mathcal{S}({\mathcal{V}}_{t})+{\mathcal{I}}^{\mathit{ext}},
(7)
where {\mathcal{L}}_{0} is the companion matrix associated with P (we put a minus sign in (7) in order to have a formulation very close to (1)) and {({\mathcal{L}}_{1})}_{k,l=1,\dots ,{d}_{s}}={({\delta}_{k={d}_{s},l=1}{\mathbf{L}}_{1})}_{k,l=1,\dots ,{d}_{s}}, {\mathcal{I}}^{\mathit{ext}}=[0,\dots ,{\mathbf{I}}^{\mathit{ext}}], \mathcal{S}(\mathcal{V})=[\mathbf{S}(\mathbf{V}(t)),\dots ,\mathbf{S}({\mathbf{V}}^{({d}_{s})})]. It appears that equation (7) has the same structure as (1): {\mathcal{L}}_{0} and {\mathcal{L}}_{1} are bounded linear operators; we can conclude that there is a unique solution to (6). Linearizing around a persistent state yields a strongly continuous semigroup \mathcal{T}(t) which is eventually norm continuous. Hence the stability is given by the sign of max\Re \Sigma (\mathcal{A}), where \mathcal{A} is the infinitesimal generator of \mathcal{T}(t). It is then routine to show that
\lambda \in \Sigma (\mathcal{A})\iff \Delta (\lambda )\equiv \mathbf{P}(\lambda )-\mathbf{J}(\lambda )\phantom{\rule{0.25em}{0ex}}\text{noninvertible}.
This indicates that the essential spectrum {\Sigma}_{\mathit{ess}}(\mathcal{A}) of \mathcal{A} is equal to {\bigcup}_{i}Root({P}_{i}), which is located in the left half of the complex plane. Thus the point spectrum is enough to characterize the linear stability:
Proposition 3.10 If max\Re {\Sigma}_{p}(\mathcal{A})<0, the persistent solution {\mathbf{V}}^{f} of (6) is asymptotically stable.
Using the same proof as in [20], one can prove that max\Re \Sigma (\mathcal{A})<0 provided that {\parallel \mathbf{J}\cdot DS({\mathbf{V}}^{f})\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbf{R}}^{p\times p})}<{min}_{k,\omega \in \mathbb{R}}|{P}_{k}(i\omega )|.
Proposition 3.11 If {\parallel \mathbf{J}\cdot DS({\mathbf{V}}^{f})\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbf{R}}^{p\times p})}<{min}_{k,\omega \in \mathbb{R}}|{P}_{k}(i\omega )|, then {\mathbf{V}}^{f} is asymptotically stable.
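For a concrete polynomial, the right-hand side of this bound is easily evaluated; below we take P(λ) = (λ + 1)(λ + 2), an illustrative second-order synaptic response of our choosing, and minimize |P(iω)| over a frequency grid.

```python
import numpy as np

# Illustrative evaluation of the bound in Proposition 3.11 for the
# (hypothetical) synaptic response P(lam) = (lam + 1)(lam + 2):
# compute min over omega of |P(i*omega)| on a fine symmetric grid.
omega = np.linspace(-50.0, 50.0, 400_001)   # grid contains omega = 0
P = (1j * omega + 1.0) * (1j * omega + 2.0)
bound = np.abs(P).min()
print(bound)   # here the minimum is |P(0)| = 2
```

For this P the minimum is attained at ω = 0 and equals the product of the (moduli of the) roots, so the sufficient condition reduces to the norm being smaller than 2.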
3.2 Principle of linear stability analysis via fixed point theory
The idea behind this method (see [37]) is to write (4) as an integral equation. This integral equation is then interpreted as a fixed point problem. We already know that this problem has a unique solution in {\mathcal{C}}^{0}. However, looking at the definition of (Lyapunov) stability, we can express the stability as the existence of a solution of the fixed point problem in a smaller space \mathcal{S}\subset {\mathcal{C}}^{0}: the existence of a solution in \mathcal{S} gives the unique solution in {\mathcal{C}}^{0}. Hence, the method is to provide conditions for the fixed point problem to have a solution in \mathcal{S}; in the two cases presented below, we use the Picard fixed point theorem to obtain these conditions. Usually this method gives conditions on the averaged quantities arising in (4), whereas a Lyapunov method would give conditions on the sign of the same quantities. Neither method is to be preferred; rather, both should be applied to obtain the best bounds.
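To make the mechanics of the fixed point interpretation concrete, here is a small numerical sketch of ours (illustrative parameters, scalar caricature, not the operators of the paper): for u'(t) = -l u(t) + j u(t - τ) with constant history φ ≡ 1, the Picard iterates of the variation-of-constants map converge to the solution on a finite horizon.

```python
import numpy as np

# Picard iteration of the variation-of-constants map
#   (P u)(t) = e^{-l t} phi(0) + int_0^t e^{-l(t-s)} j u(s - tau) ds
# for the scalar caricature u' = -l*u + j*u(t - tau), history phi = 1.
l, j, tau, T, n = 1.0, 0.5, 1.0, 10.0, 2001
t = np.linspace(0.0, T, n)
h = t[1] - t[0]

def delayed(u):
    """u(t - tau), reading the history phi = 1 for t - tau < 0."""
    return np.interp(t - tau, t, u, left=1.0)

u = np.ones(n)                      # initial guess: the constant history
for _ in range(60):
    integrand = np.exp(l * t) * delayed(u)      # e^{l s} u(s - tau)
    integral = np.concatenate(
        ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2.0 * h)))
    u_new = np.exp(-l * t) * (1.0 + j * integral)
    diff = np.max(np.abs(u_new - u))            # sup-distance between iterates
    u = u_new
print(diff, u[-1])
```

Because the delayed term only looks back one τ, each iterate is exact on one more interval of length τ (the method of steps), so on [0, 10] the iteration stabilizes after about ten sweeps; since |j| < l, the computed solution decays toward zero.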
In order to be able to derive our bounds we make the further assumption that there exists a \beta >0 such that:
{\parallel {\mathit{\tau}}^{-\beta}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}<\infty .
Note that the notation {\mathit{\tau}}^{-\beta} represents the matrix of elements 1/{\tau}_{ij}^{\beta}.
Remark 2 For example, in the 2D one-population case, for \tau (\mathbf{r},\overline{\mathbf{r}})=c\parallel \mathbf{r}-\overline{\mathbf{r}}\parallel, the assumption holds for all 0\le \beta <1.
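This can be verified by a quick computation: in polar coordinates, square-integrability of τ^{-β} reduces to the radial integral ∫_0^R r^{-2β} · r dr = R^{2-2β}/(2-2β), which is finite exactly when β < 1. A numerical sanity check with the illustrative values β = 3/4, R = 1:

```python
import numpy as np

# Radial integral behind Remark 2:  int_0^R r^{1 - 2*beta} dr,
# finite iff beta < 1; compare a Riemann sum to the closed form.
beta, R = 0.75, 1.0
r = np.linspace(1e-6, R, 2_000_000)       # avoid the singularity at 0
numeric = np.sum(r ** (1.0 - 2.0 * beta)) * (r[1] - r[0])
closed = R ** (2.0 - 2.0 * beta) / (2.0 - 2.0 * beta)
print(numeric, closed)
```

For β ≥ 1 the exponent 1 - 2β drops to -1 or below and the integral diverges at the origin, which is why the assumption fails there.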
We rewrite (4) in two different integral forms, to which we apply the fixed point method. The first integral form is obtained by a straightforward use of the variation-of-parameters formula. It reads
\{\begin{array}{ll}({\mathbf{P}}_{1}\mathbf{U})(t)=\varphi (t),& t\in [-{\tau}_{m},0],\\ ({\mathbf{P}}_{1}\mathbf{U})(t)={e}^{-{\mathbf{L}}_{0}t}\varphi (0)+{\int}_{0}^{t}{e}^{-{\mathbf{L}}_{0}(t-s)}({\tilde{\mathbf{L}}}_{1}{\mathbf{U}}_{s})\phantom{\rule{0.2em}{0ex}}ds,& t\ge 0.\end{array}
(8)
The second integral form is less obvious. Let us define
\mathbf{Z}(\mathbf{r},t)={\int}_{\Omega}d\overline{\mathbf{r}}\phantom{\rule{0.2em}{0ex}}\tilde{\mathbf{J}}(\mathbf{r},\overline{\mathbf{r}}){\int}_{t-\mathit{\tau}(\mathbf{r},\overline{\mathbf{r}})}^{t}ds\phantom{\rule{0.2em}{0ex}}\mathbf{U}(\overline{\mathbf{r}},s).
Note the slight abuse of notation, namely {(\tilde{\mathbf{J}}(\mathbf{r},\overline{\mathbf{r}}){\int}_{t-\mathit{\tau}(\mathbf{r},\overline{\mathbf{r}})}^{t}ds\phantom{\rule{0.2em}{0ex}}\mathbf{U}(\overline{\mathbf{r}},s))}_{i}={\sum}_{j}{\tilde{\mathbf{J}}}_{ij}(\mathbf{r},\overline{\mathbf{r}}){\int}_{t-{\mathit{\tau}}_{ij}(\mathbf{r},\overline{\mathbf{r}})}^{t}ds\phantom{\rule{0.2em}{0ex}}{\mathbf{U}}_{j}(\overline{\mathbf{r}},s).
Lemma B.3 in Appendix B.2 yields the upper bound {\parallel \mathbf{Z}(t)\parallel}_{\mathcal{F}}\le {{\tau}_{m}}^{\frac{3}{2}+\beta}{\parallel \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}{sup}_{s\in [t-{\tau}_{m},t]}{\parallel \mathbf{U}(s)\parallel}_{\mathcal{F}}. This shows that \forall t, \mathbf{Z}(t)\in \mathcal{F}.
Hence we propose the second integral form:
\{\begin{array}{ll}({\mathbf{P}}_{2}\mathbf{U})(t)=\varphi (t),& t\in [-{\tau}_{m},0],\\ ({\mathbf{P}}_{2}\mathbf{U})(t)={e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})t}(\varphi (0)+\mathbf{Z}(0))-\mathbf{Z}(t)-{\int}_{0}^{t}(\tilde{\mathbf{J}}-{\mathbf{L}}_{0}){e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})(t-s)}\mathbf{Z}(s)\phantom{\rule{0.2em}{0ex}}ds,& t\ge 0.\end{array}
(9)
We have the following lemma.
Lemma 3.12 The formulation (9) is equivalent to (4).
Proof The idea is to write the linearized equation as:
\{\begin{array}{l}\frac{d}{dt}\mathbf{U}=(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})\mathbf{U}-\frac{d}{dt}\mathbf{Z}(t),\\ {\mathbf{U}}_{0}=\varphi .\end{array}
(10)
By the variationofparameters formula we have:
\mathbf{U}(t)={e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})t}\mathbf{U}(0)-{\int}_{0}^{t}{e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})(t-s)}\frac{d}{ds}\mathbf{Z}(s)\phantom{\rule{0.2em}{0ex}}ds.
We then use an integration by parts:
{\int}_{0}^{t}{e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})(t-s)}\frac{d}{ds}\mathbf{Z}(s)\phantom{\rule{0.2em}{0ex}}ds=\mathbf{Z}(t)-{e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})t}\mathbf{Z}(0)+{\int}_{0}^{t}(\tilde{\mathbf{J}}-{\mathbf{L}}_{0}){e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})(t-s)}\mathbf{Z}(s)\phantom{\rule{0.2em}{0ex}}ds
which allows us to conclude. □
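The integration by parts above can be checked numerically in a scalar caricature, with B standing in for J̃ - L_0 and a smooth test function Z (both our illustrative choices, not the operators of the paper):

```python
import numpy as np

# Scalar numerical check of the integration by parts in Lemma 3.12:
#   int_0^t e^{B(t-s)} Z'(s) ds
#     = Z(t) - e^{B t} Z(0) + int_0^t B e^{B(t-s)} Z(s) ds.
B, t = -0.5, 2.0
s = np.linspace(0.0, t, 200_001)
Z = np.sin(s) + 1.0        # smooth test function, Z(0) = 1
dZ = np.cos(s)             # its derivative
w = np.exp(B * (t - s))    # the semigroup weight e^{B(t-s)}
h = s[1] - s[0]
lhs = np.sum(w * dZ) * h
rhs = Z[-1] - np.exp(B * t) * Z[0] + np.sum(B * w * Z) * h
print(lhs, rhs)
```

The two quadratures agree up to discretization error, which is the scalar shadow of the operator identity used in the proof.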
Using the two integral formulations of (4) we obtain sufficient conditions of stability, as stated in the following proposition:
Proposition 3.13 If one of the following two conditions is satisfied:

1.
max\Re \Sigma (\tilde{\mathbf{J}}-{\mathbf{L}}_{0})<0
and there exist \alpha <1, \beta >0 such that
{{\tau}_{m}}^{\frac{3}{2}+\beta}{\parallel \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}(1+{\int}_{0}^{\infty}{\parallel (\tilde{\mathbf{J}}-{\mathbf{L}}_{0}){e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})u}\parallel}_{\mathcal{L}(\mathcal{F})}\phantom{\rule{0.2em}{0ex}}du)\le \alpha ,
where \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}} represents the matrix of elements \frac{{\tilde{J}}_{ij}}{{\tau}_{ij}^{\beta}},

2.
{\parallel \tilde{\mathbf{J}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}<{min}_{i}{l}_{i}
,
then {\mathbf{V}}^{f} is asymptotically stable for (3).
Proof We start with the first condition.
The problem (4) is equivalent to solving the fixed point equation \mathbf{U}={\mathbf{P}}_{2}\mathbf{U} for an initial condition \varphi \in {\mathcal{C}}^{0}. Let us define \mathcal{B}={C}^{0}([-{\tau}_{m},\infty ),\mathcal{F}) with the supremum norm written {\parallel \cdot \parallel}_{\infty ,\mathcal{F}}, as well as
{\mathcal{S}}_{\varphi}=\{\psi \in \mathcal{B},\psi {|}_{[-{\tau}_{m},0]}=\varphi \text{ and }\psi \to 0\text{ as }t\to \infty \}.
We define {\mathbf{P}}_{2} on {\mathcal{S}}_{\varphi}.
For all \psi \in {\mathcal{S}}_{\varphi} we have {\mathbf{P}}_{2}\psi \in \mathcal{B} and ({\mathbf{P}}_{2}\psi )(0)=\varphi (0). We want to show that {\mathbf{P}}_{2}{\mathcal{S}}_{\varphi}\subset {\mathcal{S}}_{\varphi}. We prove two properties.
1.
{\mathbf{P}}_{2}\psi
tends to zero at infinity.
Choose \psi \in {\mathcal{S}}_{\varphi}.
Using Corollary B.3, we have \mathbf{Z}(t)\to 0 as t\to \infty.
Let 0<T<t, we also have
\begin{array}{r}{\parallel {\int}_{0}^{t}(\tilde{\mathbf{J}}-{\mathbf{L}}_{0}){e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})(t-s)}\mathbf{Z}(s)\phantom{\rule{0.2em}{0ex}}ds\parallel}_{\mathcal{F}}\\ \phantom{\rule{1em}{0ex}}\le {\parallel {\int}_{0}^{T}(\tilde{\mathbf{J}}-{\mathbf{L}}_{0}){e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})(t-s)}\mathbf{Z}(s)\phantom{\rule{0.2em}{0ex}}ds\parallel}_{\mathcal{F}}+{\parallel {\int}_{T}^{t}(\tilde{\mathbf{J}}-{\mathbf{L}}_{0}){e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})(t-s)}\mathbf{Z}(s)\phantom{\rule{0.2em}{0ex}}ds\parallel}_{\mathcal{F}}.\end{array}
For the first term we write:
{\parallel {\int}_{0}^{T}(\tilde{\mathbf{J}}-{\mathbf{L}}_{0}){e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})(t-s)}\mathbf{Z}(s)\phantom{\rule{0.2em}{0ex}}ds\parallel}_{\mathcal{F}}\le {sup}_{s\in [0,T]}{\parallel \mathbf{Z}(s)\parallel}_{\mathcal{F}}{\int}_{t-T}^{t}{\parallel (\tilde{\mathbf{J}}-{\mathbf{L}}_{0}){e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})u}\parallel}_{\mathcal{L}(\mathcal{F})}\phantom{\rule{0.2em}{0ex}}du\underset{t\to \infty}{\longrightarrow}0.
Similarly, for the second term we write
{\parallel {\int}_{T}^{t}(\tilde{\mathbf{J}}-{\mathbf{L}}_{0}){e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})(t-s)}\mathbf{Z}(s)\phantom{\rule{0.2em}{0ex}}ds\parallel}_{\mathcal{F}}\le {{\tau}_{m}}^{\frac{3}{2}+\beta}{\parallel \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}{sup}_{s\in [T-{\tau}_{m},\infty )}{\parallel \psi (s)\parallel}_{\mathcal{F}}{\int}_{0}^{\infty}{\parallel (\tilde{\mathbf{J}}-{\mathbf{L}}_{0}){e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})u}\parallel}_{\mathcal{L}(\mathcal{F})}\phantom{\rule{0.2em}{0ex}}du\le \alpha {sup}_{s\in [T-{\tau}_{m},\infty )}{\parallel \psi (s)\parallel}_{\mathcal{F}}.
Now for a given \epsilon >0 we choose T large enough so that \alpha {sup}_{s\in [T-{\tau}_{m},\infty )}{\parallel \psi (s)\parallel}_{\mathcal{F}}<\epsilon /2. For such a T we choose {t}^{\ast} large enough so that {\parallel {\int}_{0}^{T}(\tilde{\mathbf{J}}-{\mathbf{L}}_{0}){e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})(t-s)}\mathbf{Z}(s)\phantom{\rule{0.2em}{0ex}}ds\parallel}_{\mathcal{F}}<\epsilon /2 for t>{t}^{\ast}. Putting all this together, for all t>{t}^{\ast}:
{\parallel {\int}_{0}^{t}(\tilde{\mathbf{J}}{\mathbf{L}}_{0}){e}^{(\tilde{\mathbf{J}}{\mathbf{L}}_{0})(ts)}\mathbf{Z}(s)\phantom{\rule{0.2em}{0ex}}ds\parallel}_{\mathcal{F}}\le \epsilon .
From (9), it follows that {\mathbf{P}}_{2}\psi \to 0 when t\to \infty.
Since {\mathbf{P}}_{2}\psi is continuous and has a limit when t\to \infty it is bounded and therefore {\mathbf{P}}_{2}:{\mathcal{S}}_{\varphi}\to {\mathcal{S}}_{\varphi}.
2. {\mathbf{P}}_{2} is contracting on {\mathcal{S}}_{\varphi}.
Using (9) for all {\psi}_{1},{\psi}_{2}\in {\mathcal{S}}_{\varphi} we have
We conclude from Picard's fixed point theorem that the operator {\mathbf{P}}_{2} has a unique fixed point in {\mathcal{S}}_{\varphi}.
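As an aside, the mechanism behind this conclusion can be illustrated numerically. The following sketch (ours, not part of the proof) iterates a toy contraction on \mathbb{R} with Lipschitz constant 1/2 — a stand-in for the operator {\mathbf{P}}_{2} on \mathcal{B} — and converges to its unique fixed point, exactly as Picard's theorem guarantees:

```python
def picard_iterate(P, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = P(x_n); converges to the unique fixed point
    when P is a contraction (Picard / Banach fixed point theorem)."""
    x = x0
    for _ in range(max_iter):
        x_new = P(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# Toy contraction on R with Lipschitz constant 1/2; its unique fixed point is x* = 2.
fp = picard_iterate(lambda x: 0.5 * x + 1.0, x0=0.0)
print(fp)  # ≈ 2.0
```

The successive distances to the fixed point shrink geometrically with ratio 1/2, which is why the iteration, like the operator {\mathbf{P}}_{2} above, admits exactly one fixed point.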
It remains to link this fixed point to the definition of stability, and first to show that
\forall \epsilon >0\ \exists \delta >0\text{ such that }{\parallel \varphi \parallel}_{\mathcal{C}}<\delta \text{ implies }{\parallel \mathbf{U}(t,\varphi )\parallel}_{\mathcal{C}}<\epsilon ,\phantom{\rule{0.25em}{0ex}}t\ge 0,
where \mathbf{U}(t,\varphi ) is the solution of (4).
Let us choose \epsilon >0 and M\ge 1 such that . M exists because, by hypothesis, max\Re \Sigma (\tilde{\mathbf{J}}-{\mathbf{L}}_{0})<0. We then choose δ satisfying
M(1+{\tau}_{m}^{\frac{3}{2}+\beta}{\parallel \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})})\delta <\epsilon (1-\alpha ),
(11)
and \varphi \in \mathcal{C} such that {\parallel \varphi \parallel}_{\mathcal{C}}\le \delta. Next define
{\mathcal{S}}_{\varphi ,\epsilon}=\{\psi \in \mathcal{B}:{\parallel \psi \parallel}_{\infty ,\mathcal{F}}\le \epsilon ,\psi |_{[-{\tau}_{m},0]}=\varphi \text{ and } \psi \to 0 \text{ as } t\to \infty \}\subset {\mathcal{S}}_{\varphi}.
We already know that {\mathbf{P}}_{2} is a contraction on {\mathcal{S}}_{\varphi ,\epsilon} (which is a complete space). The last thing to check is {\mathbf{P}}_{2}{\mathcal{S}}_{\varphi ,\epsilon}\subset {\mathcal{S}}_{\varphi ,\epsilon}, that is \forall \psi \in {\mathcal{S}}_{\varphi ,\epsilon}, {\parallel {\mathbf{P}}_{2}\psi \parallel}_{\infty ,\mathcal{F}}<\epsilon. Using Lemma B.3 in Appendix B.2:
Thus {\mathbf{P}}_{2} has a unique fixed point {\mathbf{U}}^{\varphi ,\epsilon} in {\mathcal{S}}_{\varphi ,\epsilon} for all \varphi ,\epsilon, and this fixed point is the solution of the linear delayed differential equation. That is,
\begin{array}{r}\forall \epsilon ,\exists \delta <\epsilon \text{ (from (11))},\forall \varphi \in \mathcal{C},\parallel \varphi \parallel \le \delta \\ \phantom{\rule{1em}{0ex}}\Rightarrow \phantom{\rule{1em}{0ex}}\forall t>0,{\parallel {\mathbf{U}}_{t}^{\varphi ,\epsilon}\parallel}_{\mathcal{C}}\le \epsilon \text{ and }{\mathbf{U}}^{\varphi ,\epsilon}(t)\to 0\text{ in }\mathcal{F}.\end{array}
As {\mathbf{U}}^{\varphi ,\epsilon}(t)\to 0 in \mathcal{F} implies {\mathbf{U}}_{t}^{\varphi ,\epsilon}\to 0 in \mathcal{C}, we have proved the asymptotic stability for the linearized equation.
The proof of the second property is straightforward. If 0 is asymptotically stable for (4), all the CVs have negative real parts, and Corollary 3.8 indicates that {\mathbf{V}}^{f} is asymptotically stable for (3).
The second condition says that {\mathbf{P}}_{1}\psi ={e}^{-{\mathbf{L}}_{0}t}\varphi (0)+{\int}_{0}^{t}{e}^{-{\mathbf{L}}_{0}(t-s)}({\tilde{\mathbf{L}}}_{1}\psi )(s)\,ds is a contraction because .
The asymptotic stability follows using the same arguments as in the case of {\mathbf{P}}_{2}. □
We next simplify the first condition of the previous proposition to make it more amenable to numerics.
Corollary 3.14 Suppose that{\parallel {e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})t}\parallel}\le {M}_{\epsilon}{e}^{-\epsilon t}\ \forall t\ge 0, for some{M}_{\epsilon}\ge 1, \epsilon >0.
If there exist\alpha <1, \beta >0such that{\tau}_{m}^{\frac{3}{2}+\beta}{\parallel \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}(1+\frac{{M}_{\epsilon}}{\epsilon}{\parallel \tilde{\mathbf{J}}-{\mathbf{L}}_{0}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})})\le \alpha, then{\mathbf{V}}^{f}is asymptotically stable.
Proof This corollary follows immediately from the following upper bound of the integral . Then if there exist \alpha <1, \beta >0 such that {\tau}_{m}^{\frac{3}{2}+\beta}{\parallel \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}(1+\frac{{M}_{\epsilon}}{\epsilon}{\parallel \tilde{\mathbf{J}}-{\mathbf{L}}_{0}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})})\le \alpha, condition 1 in Proposition 3.13 is satisfied, from which the asymptotic stability of {\mathbf{V}}^{f} follows. □
Notice that \epsilon >0 is equivalent to max\Re \Sigma (-{\mathbf{L}}_{0}+\tilde{\mathbf{J}})<0. The previous corollary is useful in at least the following cases:

If \tilde{\mathbf{J}}-{\mathbf{L}}_{0} is diagonalizable, with associated eigenvalues {\lambda}_{n}\in \mathbb{C} and eigenvectors {e}_{n}\in \mathcal{F}, then {e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})t}={\sum}_{n}{e}^{{\lambda}_{n}t}{e}_{n}\otimes {e}_{n} and .

If {\mathbf{L}}_{0}={l}_{0}\mathrm{Id} and the range of \tilde{\mathbf{J}} is finite dimensional: \tilde{\mathbf{J}}(\mathbf{r},{\mathbf{r}}^{\prime})={\sum}_{k,l=1}^{N}{J}_{kl}{e}_{k}(\mathbf{r})\otimes {e}_{l}({\mathbf{r}}^{\prime}), where {({e}_{k})}_{k\in \mathbb{N}} is an orthonormal basis of \mathcal{F}, then {e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})t}={e}^{-{l}_{0}t}{e}^{\tilde{\mathbf{J}}t} and . Let us write J={({J}_{kl})}_{k,l=1,\dots ,N} for the matrix associated to \tilde{\mathbf{J}} (see above). Then {e}^{\tilde{\mathbf{J}}t} is also a compact operator with finite range and . Finally, this gives .

If \tilde{\mathbf{J}}-{\mathbf{L}}_{0} is self-adjoint, then it is diagonalizable and we can choose \epsilon =-max\Re \Sigma (-{\mathbf{L}}_{0}+\tilde{\mathbf{J}}), {M}_{\epsilon}=1.
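The self-adjoint case above is easy to check numerically. In the following sketch (ours; the random symmetric matrix is a hypothetical stand-in for a discretization of \tilde{\mathbf{J}}-{\mathbf{L}}_{0}), the spectral theorem gives {e}^{At}=Q\,\mathrm{diag}({e}^{{\lambda}_{k}t})\,{Q}^{T}, so the operator norm of the semigroup decays exactly like {e}^{-\epsilon t} with \epsilon =-max\Re \Sigma and {M}_{\epsilon}=1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Toy symmetric matrix standing in for a discretization of A = J~ - L0;
# shift the spectrum so that max Re Sigma(A) = -1 < 0 (stability hypothesis).
S = rng.standard_normal((n, n))
A = 0.5 * (S + S.T)
A -= (np.linalg.eigvalsh(A).max() + 1.0) * np.eye(n)

lam, Q = np.linalg.eigh(A)          # spectral decomposition of the self-adjoint A
eps = -lam.max()                    # self-adjoint case: eps = -max Re Sigma, M_eps = 1
for t in (0.1, 1.0, 5.0):
    expAt = Q @ np.diag(np.exp(lam * t)) @ Q.T   # e^{At} via the spectral theorem
    decay = np.linalg.norm(expAt, 2)             # operator 2-norm of e^{At}
    assert decay <= np.exp(-eps * t) + 1e-10     # ||e^{At}|| <= M_eps e^{-eps t}
print(round(eps, 6))  # 1.0 by construction
```

The assertion holds with {M}_{\epsilon}=1, matching the claim that no constant larger than 1 is needed in the self-adjoint case.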
Remark 3 If we suppose that we have higher order time derivatives as in Section 3.1.3, we can write the linearized equation as
\dot{\mathcal{U}}(t)=-{\mathcal{L}}_{0}\mathcal{U}(t)+{\tilde{\mathcal{L}}}_{1}{\mathcal{U}}_{t}.
(12)
Suppose that{\mathcal{L}}_{0}is diagonalizable, then where{\parallel \mathcal{U}\parallel}_{{(\mathcal{F})}^{{d}_{s}}}\equiv {\sum}_{k=1}^{{d}_{s}}{\parallel {\mathcal{U}}_{k}\parallel}_{\mathcal{F}}and min\Re \Sigma ({\mathcal{L}}_{0})=-{max}_{k}\Re \mathrm{Root}({P}_{k}). Also notice that\tilde{\mathcal{J}}={\tilde{\mathcal{L}}}_{1}|_{\mathcal{F}}, . Then, using the same functionals as in the proof of Proposition 3.13, we can find two bounds for the stability of a stationary state{\mathbf{V}}^{f}:

Suppose that max\Re \Sigma (\tilde{\mathcal{J}}-{\mathcal{L}}_{0})<0, that is, {\mathbf{V}}^{f} is stable for the nondelayed equation, where {(\tilde{\mathcal{J}})}_{k,l=1,\dots ,{d}_{s}}={({\delta}_{k={d}_{s},l=1}\tilde{\mathbf{J}})}_{k,l=1,\dots ,{d}_{s}}. If there exist \alpha <1, \beta >0 such that .

{\parallel \tilde{\mathbf{J}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}<-{max}_{k}\Re \mathrm{Root}({P}_{k}).
To conclude, we have found an easy-to-compute formula for the stability of the persistent state {\mathbf{V}}^{f}. It can indeed be cumbersome to compute the CVs of neural field equations for different parameters in order to find the region of stability, whereas the evaluation of the conditions in Corollary 3.14 is numerically very easy.
The conditions in Proposition 3.13 and Corollary 3.14 define a set of parameters for which {\mathbf{V}}^{f} is stable. Notice that these conditions are only sufficient: if they are violated, {\mathbf{V}}^{f} may still remain stable. In order to find out whether the persistent state is destabilized we have to look at the characteristic values. Condition 1 in Proposition 3.13 indicates that if {\mathbf{V}}^{f} is a stable point for the nondelayed equation (see [18]), it is also stable for the delayed equation. Thus, according to this condition, it is not possible to destabilize a stable persistent state by the introduction of small delays, which is indeed meaningful from the biological viewpoint. Moreover, this condition gives an indication of the amount of delay one can introduce without changing the stability.
Condition 2 is not very useful as it is independent of the delays: no matter what they are, the stable point {\mathbf{V}}^{f} will remain stable. Also, if this condition is satisfied, there is a unique stationary solution (see [18]) and the dynamics are trivial, that is, they converge to the unique stationary point.
3.3 Summary of the different bounds and conclusion
The next proposition summarizes the results we have obtained in Proposition 3.13 and Corollary 3.14 for the stability of a stationary solution.
Proposition 3.15 If one of the following conditions is satisfied:

1.
There exist \epsilon >0 and {M}_{\epsilon}\ge 1 such that {\parallel {e}^{(\tilde{\mathbf{J}}-{\mathbf{L}}_{0})t}\parallel}\le {M}_{\epsilon}{e}^{-\epsilon t} for all t\ge 0, and \alpha <1, \beta >0 such that {\tau}_{m}^{\frac{3}{2}+\beta}{\parallel \frac{\tilde{\mathbf{J}}}{{\mathit{\tau}}^{\beta}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}(1+\frac{{M}_{\epsilon}}{\epsilon}{\parallel \tilde{\mathbf{J}}-{\mathbf{L}}_{0}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})})\le \alpha,

2.
{\parallel \tilde{\mathbf{J}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2},{\mathbb{R}}^{p\times p})}<{min}_{i}{l}_{i}
then{\mathbf{V}}^{f}is asymptotically stable for (3).
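Condition 2 can be evaluated directly on a discretized connectivity. The sketch below (ours; the 1-D domain \Omega =[0,1], the Gaussian kernel, and the value of min{}_{i}{l}_{i} are hypothetical choices for illustration) approximates {\parallel \tilde{\mathbf{J}}\parallel}_{{\mathbf{L}}^{2}({\Omega}^{2})} by a Riemann sum and compares it to min{}_{i}{l}_{i}:

```python
import numpy as np

def l2_kernel_norm(K, dx):
    """L2(Omega^2) norm of a kernel sampled on a uniform grid (Riemann-sum approximation)."""
    return np.sqrt(np.sum(K ** 2) * dx * dx)

# Hypothetical 1-D example: Omega = [0, 1], scalar case p = 1.
n = 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
J = 0.3 * np.exp(-((X - Y) ** 2) / 0.02)   # toy connectivity kernel J~(r, r')
l_min = 1.0                                # stands for min_i l_i

# Condition 2 of Proposition 3.15: ||J~||_{L2(Omega^2)} < min_i l_i
norm_J = l2_kernel_norm(J, dx)
stable = norm_J < l_min
print(norm_J, stable)
```

For this particular kernel the norm is well below min{}_{i}{l}_{i}, so condition 2 certifies asymptotic stability regardless of the delays; condition 1 can be checked in the same style once {M}_{\epsilon} and \epsilon are available.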
The only general results known so far for the stability of the stationary solutions are those of Atay and Hutt (see, for example, [20]): using the CVs, in the same way as we did in the previous section, they derived a bound similar to condition 2 in Proposition 3.15, involving the {\mathrm{L}}^{1}-norm of the connectivity function J, but no proof of stability was given. Thus our contribution with respect to condition 2 is that, once it is satisfied, the stationary solution is asymptotically stable: up until now this was numerically inferred on the basis of the CVs. We have proved it in two ways, first by using the CVs, and second by using the fixed point method, which has the advantage of making the proof essentially trivial.
Condition 1 is of interest because it allows one to find the minimal propagation speed (equivalently, the maximal propagation delay) that does not destabilize {\mathbf{V}}^{f}. Notice that this bound, though very easy to compute, overestimates the minimal speed. As mentioned above, the bounds in condition 1 are sufficient conditions for the stability of the stationary state {\mathbf{V}}^{f}. In order to evaluate the conservativeness of these bounds, we need to compare them to the stability predicted by the CVs. This is done in the next section.