Analysis of nonlinear noisy integrate & fire neuron models: blowup and steady states
 María J Cáceres^{1},
 José A Carrillo^{2} and
 Benoît Perthame^{3, 4}
DOI: 10.1186/2190-8567-1-7
© Cáceres et al.; licensee Springer 2011
Received: 29 October 2010
Accepted: 18 July 2011
Published: 18 July 2011
Abstract
Nonlinear Noisy Leaky Integrate and Fire (NNLIF) models for neuron networks can be written as Fokker-Planck-Kolmogorov equations on the probability density of neurons, the main parameters in the model being the connectivity of the network and the noise. We analyse several aspects of the NNLIF model: the number of steady states, a priori estimates, blowup issues and convergence toward equilibrium in the linear case. In particular, for excitatory networks, blowup always occurs for initial data concentrated close to the firing potential. These results show how critical the balance between noise and excitatory/inhibitory interactions is with respect to the connectivity parameter.
AMS Subject Classification: 35K60, 82C31, 92B20.
Keywords
Leaky integrate and fire models; noise; blowup; relaxation to steady state; neural networks

1 Introduction
where δ is the Dirac Delta at 0. Here, ${J}_{E}$ and ${J}_{I}$ are the strengths of the synapses, ${C}_{E}$ and ${C}_{I}$ are the total numbers of presynaptic neurons and ${t}_{Ej}^{i}$ and ${t}_{Ij}^{i}$ are the times of the ${j}^{\mathrm{th}}$ spike coming from the ${i}^{\mathrm{th}}$ presynaptic neuron, for excitatory and inhibitory neurons respectively. The stochastic character is embedded in the distribution of the spike times of neurons. Actually, each neuron is assumed to spike according to a stationary Poisson process with constant probability of emitting a spike per unit time ν. Moreover, all these processes are assumed to be independent between neurons. With these assumptions the average value of the current and its variance are given by ${\mu}_{C}=b\nu $ with $b={C}_{E}{J}_{E}-{C}_{I}{J}_{I}$ and ${\sigma}_{C}^{2}=({C}_{E}{J}_{E}^{2}+{C}_{I}{J}_{I}^{2})\nu $. We will say that the network is average-excitatory (average-inhibitory resp.) if $b>0$ ($b<0$ resp.).
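The two moment formulas above can be checked by direct Monte Carlo simulation. The sketch below samples independent Poisson spike counts for each presynaptic neuron; all numerical values ($C_E$, $C_I$, $J_E$, $J_I$, ν) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Monte Carlo check of the diffusion-approximation moments: every presynaptic
# neuron fires as an independent stationary Poisson process with rate nu,
# contributing J_E (excitatory) or -J_I (inhibitory) per spike.
rng = np.random.default_rng(0)
CE, CI, JE, JI, nu, T = 100, 25, 0.5, 1.0, 2.0, 1.0

b = CE * JE - CI * JI                          # b = C_E J_E - C_I J_I
mu_C = b * nu                                  # predicted mean input per unit time
sigma2_C = (CE * JE ** 2 + CI * JI ** 2) * nu  # predicted variance per unit time

trials = 20000
# total synaptic input received over [0, T] in each independent trial
exc = rng.poisson(nu * T, size=(trials, CE)).sum(axis=1) * JE
inh = rng.poisson(nu * T, size=(trials, CI)).sum(axis=1) * JI
charge = exc - inh

emp_mean = charge.mean() / T                   # should approach mu_C
emp_var = charge.var() / T                     # should approach sigma_C^2
```

The agreement of `emp_mean` with $\mu_C$ and of `emp_var` with $\sigma_C^2$ reflects the independence of the presynaptic Poisson processes.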
where ${B}_{t}$ is the standard Brownian motion, that is, ${B}_{t}$ are independent Gaussian processes of zero mean and unit standard deviation. We refer to the work [5] for a nice review and discussion of the diffusion approximation which becomes exact in the infinitely large network limit, if the synaptic efficacies ${J}_{E}$ and ${J}_{I}$ are scaled appropriately with the network sizes ${C}_{E}$ and ${C}_{I}$.
for $V\le {V}_{F}$ with the jump process: $V({t}_{0}^{+})={V}_{R}$ whenever at ${t}_{0}$ the voltage achieves the threshold value $V({t}_{0}^{-})={V}_{F}$; with ${V}_{L}<{V}_{R}<{V}_{F}$. Finally, we have to specify the probability of firing per unit time of the Poissonian spike train ν. This is the so-called firing rate and it should be self-consistently computed from the fully coupled network together with some external stimuli. Therefore, the firing rate is computed as $\nu ={\nu}_{\mathit{ext}}+N(t)$, see [5] for instance, where $N(t)$ is the mean firing rate of the network. The value of $N(t)$ is then computed as the flux of neurons across the threshold or firing voltage ${V}_{F}$. We finally refer to [10] for a nice brief introduction to this subject.
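A minimal Euler-Maruyama sketch of the diffusion approximation with the jump process can make this concrete. Here the input statistics are frozen, so ν is not self-consistently coupled through $N(t)$ as in the full model; $\mu_C$, $\sigma_C$ and all numerical values are illustrative choices.

```python
import numpy as np

# Euler-Maruyama simulation of dV = (-V + mu_C) dt + sigma_C dB_t on V <= V_F,
# with the jump process: V is reset to V_R whenever it reaches V_F.
rng = np.random.default_rng(1)
VR, VF = 1.0, 2.0
mu_C, sigma_C = 1.5, 1.0
dt, steps, n = 1e-3, 20000, 1000   # time step, number of steps, trajectories

V = np.zeros(n)                    # all neurons start at V = 0
spikes = 0
for _ in range(steps):
    V += (-V + mu_C) * dt + sigma_C * np.sqrt(dt) * rng.standard_normal(n)
    fired = V >= VF                # threshold crossing: the neuron spikes...
    spikes += int(fired.sum())
    V[fired] = VR                  # ...and is reset to V_R

# empirical mean firing rate per neuron and per unit time
N_emp = spikes / (n * steps * dt)
```

The empirical rate `N_emp` is the particle-level analogue of the flux of neurons across $V_F$ that defines $N(t)$ in the density description.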
Here, the right-hand side is nonnegative since $p\ge 0$ over the interval $(-\infty ,{V}_{F}]$ and thus, $\frac{\partial p}{\partial v}({V}_{F},t)\le 0$. In particular this imposes a limitation on the growth of the function $N\mapsto a(N)$ such that (1.6) has a unique solution N. Let us mention that a rigorous passage from the stochastic differential equation with jump processes (1.3) to the nonlinear equation (1.4)-(1.6) is a very interesting issue but outside the scope of this paper, see related results in which nonlinearities are nonlocal functionals in [12, 13].
This is completely transparent in our analysis, which relies on a weak form that applies to both settings.
where $b>0$ for excitatory-average networks and $b<0$ for inhibitory-average networks, ${a}_{0}>0$ and ${a}_{1}\ge 0$. Some results in this work can be obtained for more general drift and diffusion coefficients; the precise assumptions will be specified in each result. Periodic solutions have been numerically reported and analysed in the case of the Fokker-Planck equation for uncoupled neurons in [14, 15]. These works also study the stationary solutions for fully coupled networks, obtaining and solving numerically the implicit relation that the firing rate N has to satisfy, see Section 3 for more details.
There are several other routes towards the modeling of spiking neurons that are related to ours and that have been used in neuroscience, see [16]. Among them are the deterministic I&F models with adaptation, which are known to fit experimental data well [17]. General models of this type were unified and studied in terms of neuronal behaviors in [18]. In this case it is known that in the quadratic (or merely superlinear) case, the model can lead to blowup [19] in the absence of a fixed threshold. We point out that the nature of this blowup is completely different from the one discussed in this paper. One can also introduce gating variables in neuron networks, which leads to a kinetic equation, see [20] and the references therein. Another approach encodes the information in the distribution of times elapsed between discharges [21, 22]; this leads to nonlinear models that naturally exhibit periodic activity and in which blowup cannot happen. Nonlinear I&F models are able to produce different patterns of activity and excitability types while linear models do not.
In this work we will analyse certain properties of the solutions to (1.4)-(1.5) with the nonlinear term due to the coupling through the mean firing rate given by (1.6). The next section is devoted to finite time blowup of weak solutions for (1.4)-(1.6). In short, we show that for any value of $b>0$ we can find suitable initial data, concentrated enough at the firing potential, such that the defined weak solutions do not exist for all times. We remark that, in the same sense as Brunel [4], we use the term asynchronous for network states for which the firing rate tends asymptotically to a constant in time, while we call synchronous those for which this does not happen. Therefore, a possible interpretation of the blowup is that synchronization occurs in the model, since the firing rate diverges at a finite time, possibly creating a strong partial synchronization, that is, a part of the network firing at the same time. One could, however, also consider the blowup as an artifact of these solutions, since neurons firing arbitrarily fast is not biologically plausible. As long as the solution exists in the sense specified in Section 2, we can get a priori estimates on the ${L}_{\mathit{loc}}^{1}$-norm of the firing rate. Section 3 deals with the stationary states of (1.4)-(1.6). We show that there is a unique stationary state for $b\le 0$ and constant $a$, but for $b>0$ different cases may happen: one, two or no stationary states depending on how large b is. In Section 4, we discuss the linear problem $b=0$ with constant $a$, for which the general relative entropy principle applies, implying exponential convergence towards equilibrium. Finally, by means of numerical simulations, in Section 5 we illustrate the results of the previous sections about blowup and steady states.
Moreover, this numerical analysis allows us to conjecture about nonlinear stability properties of the stationary states: when there is only one steady state, it is asymptotically stable, and when there are two different stationary solutions, the results show that the one with the lower firing rate is locally asymptotically stable while the one with the higher stationary firing value is either unstable or has a very small region of attraction. Our results and simulations describe situations which can be identified with neuronal phenomena such as synchronization/asynchronization of a network and bistable networks. Bi- and multi-stable networks have been used, for instance, in models of visual perception and decision making [23–25]. Our analysis in Sections 2, 3 and 5 implies that this simple model encodes complicated dynamics, in the sense that, only in terms of the connectivity parameter b, very different situations can be described: blowup, no steady state, a unique steady state, and several stationary states.
2 Finite time blowup and a priori estimates for weak solutions
Since we study a nonlinear version of the forward Kolmogorov or Fokker-Planck equation (1.4), we start with the notion of solution:
Here, the notation ${L}^{p}(\Omega )$, $1\le p<\infty $, refers to the space of functions such that ${|f|}^{p}$ is integrable in Ω, while ${L}^{\infty}$ corresponds to the space of bounded functions in Ω. The set of infinitely differentiable functions in Ω is denoted by ${C}^{\infty}(\Omega )$; these are used as test functions in the notion of weak solution. The nonnegativity assumptions are reasonable. Indeed, for a given $N(t)$, if we were to replace N by ${N}^{+}$ in the right-hand side of (1.4), we would obtain a linear equation whose solution is nonnegative, and (1.6) gives $N\ge 0$. Whether this fixed-point argument works is a more involved issue, since we prove that global solutions do not always exist; making this precise requires suitable functional spaces, and this motivates the a priori estimates that we derive at the end of this section.
The first result we show is that global-in-time weak solutions of (1.4)-(1.6) do not exist for all initial data in the case of an average-excitatory network. This result holds under less stringent hypotheses on the coefficients than (1.7), with an analogous notion of weak solution as in Definition 2.1.
Theorem 2.2 (Blowup)
is close enough to ${e}^{\mu {V}_{F}}$, then there are no global-in-time weak solutions to (1.4)-(1.6).
leading to a contradiction.
Choosing $\mu >\max(\frac{1}{b},\frac{{V}_{F}}{{a}_{m}})$, these conditions are obviously fulfilled. □
As usual for this type of blowup result, similar in spirit to the classical Keller-Segel model for chemotaxis [26, 27], the proof only ensures that solutions for those initial data do not exist beyond a finite maximal time of existence. It does not characterize the nature of the first singularity which occurs. It implies that either the decay at infinity fails, meaning that the time evolution of the probability densities ceases to be tight (although this seems improbable), or the function $N(t)$ becomes a singular measure in finite time instead of being an ${L}_{\mathit{loc}}^{1}({\mathbb{R}}^{+})$ function. Actually, in the numerical computations shown in Section 5, we observe a blowup in the value of the mean firing rate in finite time. Continuing the solution beyond that time would require a modification of the notion of solution introduced in Definition 2.1. This would be useful, since the firing rate does not become constant in time and consequently a possible interpretation is that synchronization occurs, a phenomenon that is of interest to neuroscientists; see also the comments in the introduction and the conclusions.
Although in this paper the nature of the blowup is not mathematically identified, we devote the rest of the section to proving some a priori estimates which shed some light in this direction. To be more precise, our estimates indicate that this blowup should not come from a loss of mass at $v\approx -\infty $, or from a lack of fast decay rate, because the second moment in v is controlled uniformly in blowup situations. We obtain these a priori bounds with the help of appropriate choices of the test function ϕ in (2.1). Some of these choices are not allowed due to the growth at −∞ of the test functions. We will say that a weak solution is fast-decaying at −∞ if it is a weak solution in the sense of Definition 2.1 and the weak formulation in (2.2) holds for all test functions growing at most algebraically in v.
Lemma 2.3 (A priori estimates)
 (i)If $b\ge {V}_{F}-{V}_{R}$, then $\begin{array}{rcl}{\int}_{-\infty}^{{V}_{F}}({V}_{F}-v)p(v,t)\phantom{\rule{0.2em}{0ex}}dv& \le & \max({V}_{F},{\int}_{-\infty}^{{V}_{F}}({V}_{F}-v){p}^{0}(v)\phantom{\rule{0.2em}{0ex}}dv),\\ (b-{V}_{F}+{V}_{R}){\int}_{0}^{T}N(t)\phantom{\rule{0.2em}{0ex}}dt& \le & {V}_{F}T+{\int}_{-\infty}^{{V}_{F}}({V}_{F}-v){p}^{0}(v)\phantom{\rule{0.2em}{0ex}}dv.\end{array}$
 (ii)If $b<{V}_{F}-{V}_{R}$, then ${\int}_{-\infty}^{{V}_{F}}({V}_{F}-v)p(v,t)\phantom{\rule{0.2em}{0ex}}dv\ge \min({V}_{F},{\int}_{-\infty}^{{V}_{F}}({V}_{F}-v){p}^{0}(v)\phantom{\rule{0.2em}{0ex}}dv).$
To prove (i), we notice that with our condition on b, the term in $N(t)$ is nonpositive and the first result follows from Gronwall’s inequality. The second result just follows after integration in time.
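For completeness, here is a sketch of the moment computation behind (i), assuming the generic drift $-v+bN$ of (1.7); the diffusion term drops since the test function is affine. Taking $\varphi(v)=V_F-v$ in the weak formulation and writing $M(t)=\int_{-\infty}^{V_F}(V_F-v)p(v,t)\,dv$, the unit mass of p gives $\int_{-\infty}^{V_F} v\,p\,dv=V_F-M(t)$, hence

```latex
\frac{dM}{dt}
  = \int_{-\infty}^{V_F}\bigl(v-bN(t)\bigr)\,p(v,t)\,dv + N(t)\,(V_F-V_R)
  = V_F - M(t) - N(t)\,\bigl(b-V_F+V_R\bigr).
```

For $b\ge V_F-V_R$ the last term is nonpositive, so $\frac{dM}{dt}\le V_F-M$ and Gronwall's inequality gives the first bound; integrating the identity over $[0,T]$ and dropping the nonnegative quantities $M(T)$ and $\int_0^T M\,dt$ gives the second.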
Choosing now $\u03f5={\u03f5}_{0}/2$ for instance, integration in time of the last inequality leads to the desired inequality (2.6). □
 (i)If additionally a is constant, for all $t\ge 0$ we have ${\int}_{-\infty}^{{V}_{F}}{v}^{2}p(v,t)\phantom{\rule{0.2em}{0ex}}dv\le C(1+t).$
 (ii)If additionally $-b\min({V}_{F},{\int}_{-\infty}^{{V}_{F}}({V}_{F}-v){p}^{0}(v)\phantom{\rule{0.2em}{0ex}}dv)+{a}_{1}+b{V}_{F}+\frac{{V}_{R}^{2}-{V}_{F}^{2}}{2}\le 0$, then ${\int}_{-\infty}^{{V}_{F}}{v}^{2}p(v,t)\phantom{\rule{0.2em}{0ex}}dv\le \max({a}_{0},{\int}_{-\infty}^{{V}_{F}}{v}^{2}{p}^{0}(v)\phantom{\rule{0.2em}{0ex}}dv).$
thanks to the first statement of Lemma 2.3(ii).
To prove (i), we just use the second statement of Lemma 2.3(ii), valid for constant $a$, which tells us that the time integration of the right-hand side grows at most linearly in time and so does ${\int}_{-\infty}^{{V}_{F}}{v}^{2}p(v,t)\phantom{\rule{0.2em}{0ex}}dv$.
To prove (ii), we just use that the bracket is nonpositive and the result follows. □
3 Steady states
3.1 Generalities
Summarizing, all solutions p of the stationary problem (3.1), with the above referred regularity, are of the form given by the expression (3.2), where N is any positive solution of the implicit equation (3.3).
with ${N}_{\infty}$ the normalizing constant giving unit mass over the interval $(-\infty ,{V}_{F}]$.
3.2 Case of $a(N)={a}_{0}$
We are now ready to state our main result on steady states.
 (i)
For $b<0$ and for $b>0$ small enough there is a unique steady state to (1.4)-(1.6).
 (ii)Under either the condition $0<b<{V}_{F}-{V}_{R},$(3.8)
 (iii)
If both (3.9) and $b>{V}_{F}-{V}_{R}$ hold, then there are at least two steady states to (1.4)-(1.6).
 (iv)There is no steady state to (1.4)-(1.6) under the high connectivity condition $b>\max(2({V}_{F}-{V}_{R}),2{V}_{F}I(0)).$(3.10)
Remark 3.2 It is natural to relate the absence of a steady state for large b with blowup of solutions. However, Theorem 2.2 in Section 2 shows this is not the only possible cause, since blowup can happen for initial data concentrated enough around ${V}_{F}$, independently of the value of $b>0$. See also Section 5 for related numerical results.
 1.Case $b<0$: $I(N)$ is an increasing strictly convex function and thus$\underset{N\to \infty}{lim}I(N)=\infty .$
 2.Case $b>0$: $I(N)$ is a decreasing convex function. Also, it is clear from the previous expansion (3.11) and the dominated convergence theorem that$\underset{N\to \infty}{lim}I(N)=0.$
With this analysis of the function $I(N)$ we can now prove each of the statements of Theorem 3.1:
Therefore, it crosses the function $1/N$ at a single point.
for all $N\ge {N}_{\ast}$. Therefore, by continuity of $NI(N)$ there are solutions to $NI(N)=1$ and all possible crossings of $I(N)$ and $1/N$ are on the interval $[0,{N}_{\ast}]$. We observe that both $I(N)$ and ${I}^{\prime}(N)$ converge towards the constant function $I(0)>0$ and to 0 respectively, uniformly in the interval $[0,{N}_{\ast}]$ as $b\to 0$. Therefore, for b small $NI(N)$ is strictly increasing on the interval $[0,{N}_{\ast}]$ and there is a unique solution to $NI(N)=1$. □
Proof of (ii)
Case of (3.8) The claim that there are solutions to $NI(N)=1$ for $0<b<{V}_{F}-{V}_{R}$ is a direct consequence of the continuity of $I(N)$, (3.12) and (3.13).
which gives the desired inequality $I(N)\ge 1/N$ on the interval of N under consideration.
Proof of (iii) Under condition (3.9), we have shown in the previous point the existence of an interval where $I(N)>1/N$. On the other hand, $I(0)<\infty $ in (3.12) implies that $I(N)<1/N$ for N small, and the condition $b>{V}_{F}-{V}_{R}$ implies that $I(N)<1/N$ for N large enough due to the limit (3.13); thus there are at least two crossings between $I(N)$ and $1/N$. □
and due to the fact that I is decreasing and Inequality (3.15), we have $I(N)<I(0)<1/N$, for $N\le 2{V}_{F}/b$. In this way, we have shown that for all N, $I(N)<1/N$ and consequently there is no steady state. □
Of course, this last inequality is not optimal either for the same reason as before.
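The crossing picture underlying Theorem 3.1 can also be checked numerically. The sketch below evaluates $I(N)$ by quadrature, using the double-integral form obtained by normalizing the stationary profile (3.2) with $V_0(N)=bN$ and constant $a(N)=a_0$; the quadrature sizes, the cutoff `vmin` and the parameter values are illustrative assumptions.

```python
import numpy as np

# Steady states solve N I(N) = 1; here I(N) is evaluated by a double
# trapezoidal quadrature of the normalized stationary profile.
VR, VF, a0 = 1.0, 2.0, 1.0

def trap(y, x):
    # simple trapezoidal rule, kept explicit to avoid NumPy version issues
    return float(0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x)))

def I(N, b, vmin=-8.0, m=400):
    v = np.linspace(vmin, VF, m)
    outer = np.empty(m)
    for i, vi in enumerate(v):
        w = np.linspace(max(vi, VR), VF, 200)
        inner = trap(np.exp((w - b * N) ** 2 / (2 * a0)), w)
        outer[i] = np.exp(-(vi - b * N) ** 2 / (2 * a0)) * inner
    return trap(outer, v) / a0

# linear case b = 0: I(N) does not depend on N, so N I(N) = 1 has the
# unique solution N = 1 / I(0), matching the uniqueness part of Theorem 3.1
I0 = I(0.0, b=0.0)
N_star = 1.0 / I0
```

Plotting $N\,I(N)$ against 1 for various $b>0$ reproduces the one/two/no-crossing regimes of the theorem.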
3.3 Case of $a(N)={a}_{0}+{a}_{1}N$
where we have used the change of variables $\alpha =\frac{N}{\sqrt{2a}}$ and L'Hôpital's rule. In the case $b<0$, we observe again, by the same proof as before, that $I(N)\to \infty $ when $N\to \infty $, and thus, by continuity there is at least one solution to $NI(N)=1$. Nevertheless, it seems difficult to determine precisely the number of solutions due to the competing monotone functions in (3.17).
The generalization of part of Theorem 3.1 is contained in the following result. We will skip its proof since it essentially follows the same steps as before with the new ingredients just mentioned.
 (i)
Under either the condition $b<{V}_{F}-{V}_{R}$, or the conditions $b>0$ and $2{a}_{0}b+2{a}_{1}{V}_{R}<{({V}_{F}-{V}_{R})}^{2}{V}_{R}$, there exists at least one steady state solution to (1.4)-(1.6).
 (ii)
If both $2{a}_{0}b+2{a}_{1}{V}_{R}<{({V}_{F}-{V}_{R})}^{2}{V}_{R}$ and $b>{V}_{F}-{V}_{R}$ hold, then there are at least two steady states to (1.4)-(1.6).
 (iii)
There is no steady state to (1.4)-(1.6) for $b>\max(2({V}_{F}-{V}_{R}),2{V}_{F}I(0))$.
At this point, a natural question is what happens with the stability of these steady states. In the next section we study it in the linear case, when the model presents only one steady state. An extension of the same techniques, entropy methods, to the nonlinear case is not straightforward at all. However, the results obtained in the linear case lead us to expect that for a small connectivity parameter b the unique steady state could be stable. On the other hand, the numerical results presented in Section 5 give some evidence of the stability/instability in the different situations described by this model: only one steady state or two steady states; see that section for details.
4 Linear equation and relaxation
The estimate (4.4) follows a posteriori from the relative entropy inequality that we state below; see more comments at the end of this section. This is an indication that the hypothesis on the initial data (4.3) will easily be propagated in time, giving (4.4) once a well-posedness theory of classical fast-decaying solutions is at hand. Such solutions to (4.1) and (4.2) might be obtained by the method developed in [28] and will be analysed elsewhere.
We prove that the solutions to (4.1) converge in large times to the unique steady state ${p}_{\infty}(v)$.
Theorem 4.1 (Exponential decay)
This result shows that no synchronization of neuronal activity can be expected when the network is not connected, since solutions tend to produce a constant firing rate, a very intuitive conclusion. Because the rate of decay is exponential, we also expect that small connectivity cannot create synchronization either, again an intuitive conclusion proved rigorously for the elapsed time structured model in [22]. Also the proof shows that two relaxation processes are involved in this effect: dissipation by the diffusion term and dissipation by the firing term. These relaxation effects are stated in the following theorem which also gives the natural bounds for the solutions to equation (4.1) (choosing $G(u)={u}^{2}$ gives the natural energy space of the system, a weighted ${L}^{2}$ space).
Theorem 4.2 (Relative entropy inequality)
Fast-decaying solutions to equation (4.1) verifying (4.4) satisfy, for any smooth convex function $G:{\mathbb{R}}^{+}\to \mathbb{R}$, the inequality
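Schematically, and keeping only the dissipation coming from the diffusion term (the firing/reset mechanism contributes a further nonpositive term whose precise form is in the full display), the inequality reads

```latex
\frac{d}{dt}\int_{-\infty}^{V_F} p_\infty(v)\,
   G\!\left(\frac{p}{p_\infty}\right) dv
\;\le\;
-\,a \int_{-\infty}^{V_F} p_\infty(v)\,
   G''\!\left(\frac{p}{p_\infty}\right)
   \left|\frac{\partial}{\partial v}\frac{p}{p_\infty}\right|^{2} dv
\;\le\; 0 .
```

With $G(u)=u^2$ this is a weighted $L^2$ estimate which, combined with the Poincaré-type inequality of the Appendix, yields the exponential decay of Theorem 4.1.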
To proceed further, we need an additional technical ingredient that we state and whose proof is postponed to an Appendix.
for all functions q such that $\frac{q}{{p}_{\infty}}\in {H}^{1}({p}_{\infty}(v)\phantom{\rule{0.2em}{0ex}}dv)$ and ${\int}_{-\infty}^{{V}_{F}}q(v)\phantom{\rule{0.2em}{0ex}}dv=0$.
Finally, the Gronwall lemma directly gives the result. □
To show Theorem 4.2, which was used in the proof of Theorem 4.1, we need the following preliminary computations.
we obtain (4.7).
Finally, Equation (4.9) is obtained using Equation (4.8) and the fact that ${p}_{\infty}$ is solution of (4.2). □
due to (4.10). Collecting all terms leads to the desired inequality. □
Let us finally remark that, as a usual consequence of the General Relative Entropy principle (GRE) [29], the estimate (4.4) follows by choosing the convex function $G(x)={(x-{C}^{0})}_{+}^{4}$. This shows that the bound (4.4) can be proved using (4.3) together with a well-posedness theory of classical solutions to (4.1) that are fast-decaying at −∞.
5 Numerical results
We consider two different explicit methods to simulate the NNLIF model (1.4). The first one is based on standard shock-capturing methods for the advection term and standard centered finite differences for the second-order term. More precisely, the first-order term is approximated by finite difference WENO schemes [30].
Then, the Chang-Cooper method performs a kind of θ-finite difference approximation of $p/M$, see [31] for details. The Chang-Cooper method presents difficulties when the firing rate becomes large and the diffusion coefficient $a(N)$ is constant. More precisely, given $a(N)={a}_{0}$ and $b>0$, if N is large, the drift of the Maxwellian, in terms of which the Fokker-Planck equation is rewritten, practically vanishes on the interval $(-\infty ,{V}_{F}]$ and this particular Chang-Cooper method is not suitable. Whenever $a(N)$ is not constant, this problem disappears.
Summarizing, we consider two different schemes for our simulations: the first one is based on WENO finite differences as described above, and the second one on the cited Chang-Cooper method. In both cases the time evolution is performed with a TVD Runge-Kutta scheme. In Section 2 of [20] these schemes are explained in detail, and we refer to [30, 31] for a deeper analysis of them.
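To fix ideas, here is a minimal explicit solver for the linear case $b=0$ of (1.4): first-order upwind for the drift $-v$, centered differences for the diffusion, absorbing boundary $p(V_F)=0$, firing rate $N=-a\,\partial_v p(V_F)$ and re-injection at $V_R$. This is only a first-order sketch illustrating the boundary, firing and reset handling, not the WENO/Chang-Cooper/TVD Runge-Kutta codes used in the paper; the grid and time step are illustrative choices (only $V_F=2$, $V_R=1$ follow the paper's setup).

```python
import numpy as np

Vmin, VF, VR, a = -5.0, 2.0, 1.0, 1.0
M = 141
v, dv = np.linspace(Vmin, VF, M, retstep=True)
jR = int(np.argmin(np.abs(v - VR)))     # index of the reset cell V_R
dt = 0.2 * dv ** 2 / a                  # explicit diffusion stability bound

p = np.exp(-(v + 1.0) ** 2 / 0.5)       # Maxwellian initial datum (5.1)
p /= p.sum() * dv                       # normalize to unit mass

for _ in range(400):
    N = a * p[-2] / dv                  # N(t) = -a dp/dv (V_F), using p[-1] = 0
    u = -v                              # leaky drift for b = 0
    um = 0.5 * (u[:-1] + u[1:])         # drift at cell interfaces
    F = np.where(um > 0, um * p[:-1], um * p[1:])   # upwind fluxes
    adv = np.zeros(M)
    adv[1:-1] = (F[1:] - F[:-1]) / dv
    diff = np.zeros(M)
    diff[1:-1] = a * (p[2:] - 2 * p[1:-1] + p[:-2]) / dv ** 2
    p = p + dt * (-adv + diff)
    p[-1] = 0.0                         # absorbing boundary at V_F
    p[jR] += dt * N / dv                # neurons that fired re-enter at V_R
    p = np.maximum(p, 0.0)

mass = p.sum() * dv                     # should stay close to 1
```

The mass lost through the diffusive flux at $V_F$ is exactly $N\,dt$ per step and is balanced by the re-injection at $V_R$, so the total mass is conserved up to discretization error.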
In our simulations we consider a uniform mesh in v, for $v\in [{V}_{\mathit{min}},{V}_{F}]$. The value ${V}_{\mathit{min}}$ (less than ${V}_{R}$) is adjusted in the numerical experiments so that $p({V}_{\mathit{min}},t)\approx 0$, while ${V}_{F}$ is fixed to 2 and ${V}_{R}=1$. As initial data we have taken two different types of functions:

Maxwellians:${p}^{0}(v)=\frac{1}{\sqrt{2\pi}{\sigma}_{0}}{e}^{-\frac{{(v-{v}_{0})}^{2}}{2{\sigma}_{0}^{2}}},$(5.1)
where the mean ${v}_{0}$ and the variance ${\sigma}_{0}^{2}$ are chosen according to the analyzed phenomenon.

Stationary Profiles (3.2) given by$p(v)=\frac{N}{a(N)}{e}^{-\frac{{(v-{V}_{0}(N))}^{2}}{2a(N)}}{\int}_{\max(v,{V}_{R})}^{{V}_{F}}{e}^{\frac{{(w-{V}_{0}(N))}^{2}}{2a(N)}}\phantom{\rule{0.2em}{0ex}}dw,$
with N an approximate value of the stationary firing rate. We typically consider this kind of initial data to analyze local stability of steady states.
Steady states.  As shown in Section 3, for positive b there are ranges of values for which there are one, two or no steady states. With our simulations we can observe all the cases, represented in Figures 1 and 2.
For $b=1.5$, we proved in Section 3 that there are two steady states. With our simulations we can conjecture that the steady state with the larger firing rate is unstable, while the stationary solution with the low firing rate is locally asymptotically stable. We illustrate this situation in the bottom left subplot of Figure 4. Starting with a firing rate close to the high stationary firing value, the solution tends to the low firing stationary value.
In the bottom right subplot of Figure 4 we observe the evolution for a negative value of b, for which we know that there is always a unique steady state; its local asymptotic stability seems clear from the numerical experiments.
The number of steady states is related to well-known neuronal phenomena, for instance asynchronous behaviour when there exists only one stationary solution. Let us mention that bi- and multi-stable networks have been used to describe binocular rivalry in visual perception [24] and the process of decision making [25]. Our simulations show that the simple NNLIF model (1.4) describes, in terms of the connectivity parameter b, both networks with only one steady state and networks with several stationary states.
Therefore, the blowup in the value of the firing rate appears even if the derivative of p at the firing voltage does not diverge. This kind of behavior could be interpreted as a synchronization of a part of the network, since the firing rate tends to be a Dirac Delta.
6 Conclusion
The nonlinear noisy leaky integrate and fire (NNLIF) model is a standard Fokker-Planck equation describing spiking events in neuron networks. It was observed numerically in various places, but never stated as such, that a blowup phenomenon can occur in finite time. We have described a class of situations where we can prove that this happens. Remarkably, the system can blow up for any connectivity parameter $b>0$, whatever the (stabilizing) noise.
The nature of this blowup is not mathematically proved. Nevertheless, our estimates in Lemma 2.3 indicate that it should not come from a vanishing behaviour for $v\approx -\infty $, or from a lack of fast decay rate, because the second moment in v is controlled uniformly in blowup situations. Additionally, the numerical evidence is that the firing rate $N(t)$ blows up in finite time whenever a singularity in the system occurs. This scenario is compatible with all our theoretical knowledge of the NNLIF model and in particular with the ${L}^{1}$ estimates on the total network activity (firing rate $N(t)$). Further understanding of the nature of the blowup behaviour, and its possible continuation, is a challenging mathematical issue. This blowup phenomenon could be related to synchronization of the network; an interpretation in terms of neurophysiology would therefore be interesting.
On the other hand, we have established that the set of steady states can be empty, a single state or two states, depending on the network connectivity. All of these are compatible with blowup profiles and, when steady states exist, numerics can exhibit convergence to a steady state, that is, to an asynchronous state for the network. Besides a better understanding of the blowup phenomena, several questions are left open: is it possible to have three or more steady states? Which of them are stable? Can a bifurcation analysis help to understand and compute the set of steady states?
Appendix
To conclude the proof, it remains to choose the functions ϕ and ψ such that ${A}_{1},{A}_{2}<\infty $. Using L'Hôpital's rule and, after some tedious but easy computations and calculus arguments, we obtain that in the case at hand, $m(x)=n(x)=K\min(x,{e}^{-{x}^{2}})$, we can take $\varphi (x)=\psi (x)=1+\sqrt{x}$.
Declarations
Acknowledgements
The first two authors acknowledge support from the project MTM2008-06349-C03-03 DGI-MCI (Spain). MJC acknowledges support from the project P08-FQM-04267 from Junta de Andalucía (Spain). JAC acknowledges support from the project 2009-SGR-345 from AGAUR-Generalitat de Catalunya. BP has been supported by the ANR project MANDy, Mathematical Analysis of Neuronal Dynamics, ANR-09-BLAN-0008-01. The three authors thank the CRM-Barcelona and the Isaac Newton Institute, where this work was started and completed, respectively. The article processing charges have been paid by the projects MTM2008-06349-C03-03 DGI-MCI and P08-FQM-04267 from Junta de Andalucía (Spain).
Authors’ Affiliations
References
 Lapicque L: Recherches quantitatives sur l’excitation électrique des nerfs traitée comme une polarisation. J. Physiol. Pathol. Gen. 1907, 9: 620–635.Google Scholar
 Tuckwell H: Introduction to Theoretical Neurobiology. Cambridge Univ. Press, Cambridge; 1988.View ArticleGoogle Scholar
 Brunel N, Hakim V: Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput. 1999, 11: 1621–1671. 10.1162/089976699300016179View ArticleGoogle Scholar
 Brunel N: Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 2000, 8: 183–208. 10.1023/A:1008925309027View ArticleGoogle Scholar
 Renart, A., Brunel, N., Wang, X.J.: Meanfield theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. In: Feng, J. (ed.) Computational Neuroscience: A Comprehensive Approach. Mathematical Biology and Medicine Series. Chapman & Hall/CRC (2004)Google Scholar
 Compte A, Brunel N, GoldmanRakic PS, Wang XJ: Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb. Cortex 2000, 10: 910–923. 10.1093/cercor/10.9.910View ArticleGoogle Scholar
 Sirovich L, Omurtag A, Lubliner K: Dynamics of neural populations: stability and synchrony. Network 2006, 17: 3–29. 10.1080/09548980500421154View ArticleGoogle Scholar
 Omurtag A, Knight BW, Sirovich L: On the simulation of large populations of neurons. J. Comput. Neurosci. 2000, 8: 51–63. 10.1023/A:1008964915724View ArticleGoogle Scholar
 Mattia M, Del Giudice P: Population dynamics of interacting spiking neurons. Phys. Rev. E 2002., 66:Google Scholar
 Guillamon T: An introduction to the mathematics of neural activity. Butl. Soc. Catalana Mat. 2004, 19: 25–45.Google Scholar
 Risken H: The FokkerPlanck Equation: Methods of solution and applications. SpringerVerlag, Berlin; 1989.View ArticleGoogle Scholar
 Burkholder, D.L., Pardoux, É., Sznitman, A.: École d’Été de Probabilités de SaintFlour XIX—1989. In: Hennequin, P. L. (ed.) Lecture Notes in Mathematics, vol. 1464. SpringerVerlag, Berlin (1991). Papers from the school held in SaintFlour, August 16–September 2, 1989
 Bolley, F., Cañizo, J.A., Carrillo, J.A.: Stochastic mean-field limit: non-Lipschitz forces & swarming. Math. Mod. Meth. Appl. Sci. (2011, in press)Google Scholar
 Newhall K, Kovačič G, Kramer P, Rangan AV, Cai D: Cascadeinduced synchrony in stochasticallydriven neuronal networks. Phys. Rev. E 2010., 82:Google Scholar
 Newhall K, Kovačič G, Kramer P, Zhou D, Rangan A, Cai D: Dynamics of current-based, Poisson driven, integrate-and-fire neuronal networks. Commun. Math. Sci. 2010, 8: 541–600.MathSciNetView ArticleGoogle Scholar
 Gerstner W, Kistler W: Spiking Neuron Models. Cambridge Univ. Press, Cambridge; 2002.View ArticleGoogle Scholar
 Brette R, Gerstner W: Adaptive exponential integrateandfire model as an effective description of neural activity. J. Neurophysiol. 2005, 94: 3637–3642. 10.1152/jn.00686.2005View ArticleGoogle Scholar
 Touboul J: Bifurcation analysis of a general class of nonlinear integrateandfire neurons. SIAM J. Appl. Math. 2008, 68: 1045–1079. 10.1137/070687268MathSciNetView ArticleGoogle Scholar
 Touboul J: Importance of the cutoff value in the quadratic adaptive integrateandfire model. Neural Comput. 2009, 21: 2114–2122. 10.1162/neco.2009.0908853MathSciNetView ArticleGoogle Scholar
 Cáceres MJ, Carrillo JA, Tao L: A numerical solver for a nonlinear FokkerPlanck equation representation of neuronal network dynamics. J. Comput. Phys. 2011, 230: 1084–1099. 10.1016/j.jcp.2010.10.027MathSciNetView ArticleGoogle Scholar
 Pham J, Pakdaman K, Champagnat J, Vibert JF: Activity in sparsely connected excitatory neural networks: effect of connectivity. Neural Netw. 1998, 11: 415–434. 10.1016/S0893-6080(97)00153-6View ArticleGoogle Scholar
 Pakdaman K, Perthame B, Salort D: Dynamics of a structured neuron population. Nonlinearity 2010, 23: 55–75. 10.1088/09517715/23/1/003MathSciNetView ArticleGoogle Scholar
 Gray CM, Singer W: Stimulusspecific neuronal oscillations in orientation columns of cat visual cortex. Proc. Natl. Acad. Sci. USA 1989, 86: 1698–1702. 10.1073/pnas.86.5.1698View ArticleGoogle Scholar
 MorenoBote R, Rinzel J, Rubin N: Noiseinduced alternations in an attractor network model of perceptual bistability. J. Neurophysiol. 2007, 98: 1125–1139. 10.1152/jn.00116.2007View ArticleGoogle Scholar
 Albantakis L, Deco G: The encoding of alternatives in multiplechoice decision making. Proc. Natl. Acad. Sci. USA 2009, 106: 10308–10313. 10.1073/pnas.0901621106View ArticleGoogle Scholar
 Blanchet A, Dolbeault J, Perthame B: Two-dimensional Keller-Segel model: optimal critical mass and qualitative properties of the solutions. Electron. J. Differ. Equ. 2006, 44: 1–33 (electronic).MathSciNetGoogle Scholar
 Corrias L, Perthame B, Zaag H: Global solutions of some chemotaxis and angiogenesis systems in high space dimensions. Milan J. Math. 2004, 72: 1–28. 10.1007/s000320030026xMathSciNetView ArticleGoogle Scholar
 González MdM, Gualdani MP: Asymptotics for a symmetric equation in price formation. Appl. Math. Optim. 2009, 59: 233–246. 10.1007/s002450089052yMathSciNetView ArticleGoogle Scholar
 Michel P, Mischler S, Perthame B: General relative entropy inequality: an illustration on growth models. J. Math. Pures Appl. 2005, 84: 1235–1260. 10.1016/j.matpur.2005.04.001MathSciNetView ArticleGoogle Scholar
 Shu, C.W.: Essentially nonoscillatory and weighted essentially nonoscillatory schemes for hyperbolic conservation laws. In: Cockburn, B., Johnson, C., Shu, C.W., Tadmor, E., Quarteroni, A. (eds.) Advanced Numerical Approximation of Nonlinear Hyperbolic Equations, vol. 1697, pp. 325–432. Springer (1998)View ArticleGoogle Scholar
 Buet C, Cordier S, Dos Santos V: A conservative and entropy scheme for a simplified model of granular media. Transp. Theory Stat. Phys. 2004, 33: 125–155. 10.1081/TT120037804MathSciNetView ArticleGoogle Scholar
 Ledoux M: The Concentration of Measure Phenomenon. 2001.Google Scholar
 Barthe F, Roberto C: Modified logarithmic Sobolev inequalities on $\mathbb{R}$. Potential Anal. 2008, 29: 167–193. 10.1007/s11118-008-9093-5MathSciNetView ArticleGoogle Scholar
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.