 Research
 Open Access
Analysis of nonlinear noisy integrate & fire neuron models: blowup and steady states
The Journal of Mathematical Neuroscience volume 1, Article number: 7 (2011)
Abstract
Nonlinear Noisy Leaky Integrate and Fire (NNLIF) models for neuron networks can be written as Fokker-Planck-Kolmogorov equations on the probability density of neurons, the main parameters in the model being the connectivity of the network and the noise. We analyse several aspects of the NNLIF model: the number of steady states, a priori estimates, blowup issues and convergence toward equilibrium in the linear case. In particular, for excitatory networks, blowup always occurs for initial data concentrated close to the firing potential. These results show how crucially the behaviour of the network depends, through the connectivity parameter, on the balance between noise and excitatory/inhibitory interactions.
AMS Subject Classification: 35K60, 82C31, 92B20.
1 Introduction
The classical description of the dynamics of a large set of neurons is based on deterministic/stochastic differential systems for the excitatory-inhibitory neuron network [1, 2]. One of the most classical models is the so-called noisy leaky integrate and fire (NLIF) model. Here, the dynamical behavior of the ensemble of neurons is encoded in a stochastic differential equation for the evolution in time of the membrane potential v(t) of a typical neuron representative of the network. The neurons relax towards their resting potential {V}_{L} in the absence of any interaction. All the interactions of the neuron with the network are modelled by an incoming synaptic current I(t). More precisely, the evolution of the membrane potential follows, see [3–8],

{C}_{m}\frac{dV}{dt}(t)=-{g}_{L}(V(t)-{V}_{L})+I(t),\phantom{\rule{2em}{0ex}}(1.1)
where {C}_{m} is the capacitance of the membrane and {g}_{L} is the leak conductance, normally taken to be constants, with {\tau}_{m}={C}_{m}/{g}_{L}\approx 2\text{ms} being the typical relaxation time of the potential towards the leak reversal (resting) potential {V}_{L}\approx -70\text{mV}. Here, the synaptic current takes the form of a stochastic process given by:

I(t)={J}_{E}\sum_{i=1}^{{C}_{E}}\sum_{j}\delta (t-{t}_{Ej}^{i})-{J}_{I}\sum_{i=1}^{{C}_{I}}\sum_{j}\delta (t-{t}_{Ij}^{i}),\phantom{\rule{2em}{0ex}}(1.2)
where δ is the Dirac delta at 0. Here, {J}_{E} and {J}_{I} are the strengths of the synapses, {C}_{E} and {C}_{I} are the total numbers of presynaptic neurons, and {t}_{Ej}^{i} and {t}_{Ij}^{i} are the times of the {j}^{\mathrm{th}} spike coming from the {i}^{\mathrm{th}} presynaptic neuron, for excitatory and inhibitory neurons respectively. The stochastic character is embedded in the distribution of the spike times of the neurons. Actually, each neuron is assumed to spike according to a stationary Poisson process with constant probability of emitting a spike per unit time ν. Moreover, all these processes are assumed to be independent between neurons. With these assumptions, the average value of the current and its variance are given by {\mu}_{C}=b\nu with b={C}_{E}{J}_{E}-{C}_{I}{J}_{I} and {\sigma}_{C}^{2}=({C}_{E}{J}_{E}^{2}+{C}_{I}{J}_{I}^{2})\nu. We will say that the network is average-excitatory (average-inhibitory resp.) if b>0 (b<0 resp.).
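The mean and variance of the synaptic input can be checked by direct Monte Carlo simulation of the Poissonian spike trains. The sketch below is illustrative only: all parameter values (the numbers C_E, C_I, the strengths J_E, J_I, the rate ν and the bin width) are hypothetical choices, not values from the text. It aggregates spike counts per time bin and compares the empirical mean and variance of the delivered synaptic charge with μ_C dt = bν dt and σ_C² dt:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical network parameters (illustration only).
C_E, C_I = 100, 50        # numbers of excitatory / inhibitory presynaptic neurons
J_E, J_I = 0.5, 0.3       # synaptic strengths
nu = 2.0                  # Poisson spiking probability per unit time, per neuron
dt = 0.01                 # time-bin width
n_bins = 200_000

# The sum of C independent Poisson(nu*dt) spike counts is Poisson(C*nu*dt).
k_E = rng.poisson(C_E * nu * dt, size=n_bins)
k_I = rng.poisson(C_I * nu * dt, size=n_bins)
charge = J_E * k_E - J_I * k_I          # synaptic charge delivered per bin

b = C_E * J_E - C_I * J_I
mean_theory = b * nu * dt                               # mu_C * dt
var_theory = (C_E * J_E**2 + C_I * J_I**2) * nu * dt    # sigma_C^2 * dt
print(charge.mean(), mean_theory)
print(charge.var(), var_theory)
```

The empirical mean and variance agree with bν dt and (C_E J_E² + C_I J_I²)ν dt up to sampling error, which is the basis of the diffusion approximation recalled next.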
Since the discrete Poisson processes are still very difficult to analyze, many authors in the literature [3–5, 7–9] have adopted the diffusion approximation, where the synaptic current is approximated by a continuous-in-time stochastic process of Ornstein-Uhlenbeck type with the same mean and variance as the Poissonian spike-train process. More precisely, we approximate I(t) in (1.2) as

I(t)\phantom{\rule{0.2em}{0ex}}dt\approx {\mu}_{C}\phantom{\rule{0.2em}{0ex}}dt+{\sigma}_{C}\phantom{\rule{0.2em}{0ex}}d{B}_{t},
where {B}_{t} is the standard Brownian motion, that is, a Gaussian process with independent increments of zero mean and variance equal to the elapsed time. We refer to the work [5] for a nice review and discussion of the diffusion approximation, which becomes exact in the infinitely large network limit if the synaptic efficacies {J}_{E} and {J}_{I} are scaled appropriately with the network sizes {C}_{E} and {C}_{I}.
Finally, another important ingredient in the modelling comes from the fact that neurons only fire when their voltage reaches a certain threshold value, called the threshold or firing voltage, {V}_{F}\approx -50\text{mV}. Once this voltage is attained, they discharge themselves, sending a spike signal over the network. We assume that they instantaneously relax toward a reset value of the voltage, {V}_{R}\approx -60\text{mV}. The interactions with the network may help increase the membrane potential up to the threshold (excitatory synapses) or decrease it (inhibitory synapses). Choosing our voltage and time units in such a way that {C}_{m}={g}_{L}=1, we can summarize our approximation to the stochastic differential equation model (1.1) as the evolution given by

dV=(-V+{V}_{L}+{\mu}_{C})\phantom{\rule{0.2em}{0ex}}dt+{\sigma}_{C}\phantom{\rule{0.2em}{0ex}}d{B}_{t}\phantom{\rule{2em}{0ex}}(1.3)
for V\le {V}_{F}, with the jump process: V({t}_{0}^{+})={V}_{R} whenever at {t}_{0} the voltage achieves the threshold value, V({t}_{0}^{-})={V}_{F}; with {V}_{L}<{V}_{R}<{V}_{F}. Finally, we have to specify the probability of firing per unit time of the Poissonian spike train, ν. This is the so-called firing rate, and it should be self-consistently computed from the fully coupled network together with some external stimuli. Therefore, the firing rate is computed as \nu ={\nu}_{\mathit{ext}}+N(t), see [5] for instance, where N(t) is the mean firing rate of the network. The value of N(t) is then computed as the flux of neurons across the threshold or firing voltage {V}_{F}. We finally refer to [10] for a nice brief introduction to this subject.
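The stochastic evolution with threshold and reset just described can be simulated by a straightforward Euler-Maruyama discretization. The following sketch integrates a single uncoupled neuron with a fixed input (the mean μ_C and noise σ_C are held constant; all values are hypothetical, in the dimensionless units C_m = g_L = 1) and estimates its firing rate as the number of threshold crossings per unit time:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensionless parameters (C_m = g_L = 1).
V_L, V_R, V_F = 0.0, 1.0, 2.0
mu_C, sigma_C = 1.8, 0.7        # fixed mean input and noise strength
dt, T = 1e-3, 50.0

V = V_L
spikes = 0
for _ in range(int(T / dt)):
    # Euler-Maruyama step for dV = (-(V - V_L) + mu_C) dt + sigma_C dB_t
    V += (-(V - V_L) + mu_C) * dt + sigma_C * np.sqrt(dt) * rng.standard_normal()
    if V >= V_F:                # threshold crossing: spike and instantaneous reset
        spikes += 1
        V = V_R

rate = spikes / T               # empirical firing rate of this neuron
print(rate)
```

In the full model the input rate ν = ν_ext + N(t) would itself depend on the population firing rate, which is what makes the mean-field equation below nonlinear.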
Coming back to the diffusion approximation in (1.3), we can write a partial differential equation for the evolution of the probability density p(v,t)\ge 0 of finding neurons at a voltage v\in (-\infty ,{V}_{F}] at a time t\ge 0. A heuristic argument using Itô's rule [3–5, 7–9, 11] gives the backward Kolmogorov or Fokker-Planck equation with source

\frac{\partial p}{\partial t}(v,t)+\frac{\partial}{\partial v}[h(v,N(t))p(v,t)]-a(N(t))\frac{{\partial}^{2}p}{\partial {v}^{2}}(v,t)=\delta (v-{V}_{R})N(t),\phantom{\rule{1em}{0ex}}v\le {V}_{F},\phantom{\rule{2em}{0ex}}(1.4)
with h(v,N(t))=-v+{V}_{L}+{\mu}_{C} and a(N)={\sigma}_{C}^{2}/2. We have the presence of a source term in the right-hand side due to all neurons that at time t\ge 0 fired, sent the signal to the network and then had their voltage immediately reset to {V}_{R}. Moreover, no neuron should remain at the firing voltage, due to the instantaneous discharge of the neurons to the reset value {V}_{R}; we then complement (1.4) with Dirichlet boundary conditions and an initial datum

p({V}_{F},t)=0,\phantom{\rule{1em}{0ex}}p(-\infty ,t)=0,\phantom{\rule{1em}{0ex}}p(v,0)={p}^{0}(v)\ge 0.\phantom{\rule{2em}{0ex}}(1.5)
Equation (1.4) should be the evolution of a probability density, therefore

{\int}_{-\infty}^{{V}_{F}}p(v,t)\phantom{\rule{0.2em}{0ex}}dv=1
for all t\ge 0. Formally, this conservation should come from integrating (1.4) and using the boundary conditions (1.5). It is straightforward to check that, for smooth solutions, this conservation is equivalent to characterizing the mean firing rate of the network N(t) as the flux of neurons at the firing voltage. More precisely, the mean firing rate N(t) is implicitly given by

N(t):=-a(N(t))\frac{\partial p}{\partial v}({V}_{F},t).\phantom{\rule{2em}{0ex}}(1.6)
Here, the right-hand side is nonnegative since p\ge 0 over the interval (-\infty ,{V}_{F}] and thus \frac{\partial p}{\partial v}({V}_{F},t)\le 0. In particular, this imposes a limitation on the growth of the function N\mapsto a(N) such that (1.6) has a unique solution N. Let us mention that a rigorous passage from the stochastic differential equation with jump processes (1.3) to the nonlinear equation (1.4)-(1.6) is a very interesting issue, but outside the scope of this paper; see related results, in which the nonlinearities are nonlocal functionals, in [12, 13].
The above Fokker-Planck equation has been widely used in neurosciences. Often the authors prefer to write it in an equivalent but less singular form. To avoid the Dirac delta in the right-hand side, one can also set the same equation on (-\infty ,{V}_{R})\cup ({V}_{R},{V}_{F}] and introduce the jump condition

p({V}_{R}^{-},t)=p({V}_{R}^{+},t),\phantom{\rule{1em}{0ex}}\frac{\partial p}{\partial v}({V}_{R}^{-},t)-\frac{\partial p}{\partial v}({V}_{R}^{+},t)=\frac{N(t)}{a(N(t))}.
This is completely transparent in our analysis, which relies on a weak formulation that applies to both settings.
Finally, let us choose a new voltage variable by translating by the factor {V}_{L}+b{\nu}_{\mathit{ext}} while, for the sake of clarity, keeping the notation for the rest of the values of the potentials involved, {V}_{R}<{V}_{F}. In these new variables, the drift and diffusion coefficients are of the form

h(v,N)=-v+bN,\phantom{\rule{1em}{0ex}}a(N)={a}_{0}+{a}_{1}N,\phantom{\rule{2em}{0ex}}(1.7)
where b>0 for average-excitatory networks and b<0 for average-inhibitory networks, {a}_{0}>0 and {a}_{1}\ge 0. Some results in this work can be obtained for more general drift and diffusion coefficients; the precise assumptions will be specified in each result. Periodic solutions have been numerically reported and analysed in the case of the Fokker-Planck equation for uncoupled neurons in [14, 15]. These works also study the stationary solutions for fully coupled networks, obtaining and solving numerically the implicit relation that the firing rate N has to satisfy; see Section 3 for more details.
There are several other routes towards the modeling of spiking neurons that are related to ours and that have been used in neurosciences, see [16]. Among them are the deterministic I&F models with adaptation, which are known for fitting experimental data well [17]. General models of this type were unified and studied in terms of neuronal behaviors in [18]. In this case it is known that in the quadratic (or merely superlinear) case the model can lead to blowup [19] in the absence of a fixed threshold. We point out that the nature of this blowup is completely different from the one discussed in this paper. One can also introduce gating variables in neuron networks, and this leads to a kinetic equation, see [20] and the references therein. Another method consists in coding the information in the distribution of times elapsed between discharges [21, 22]; this leads to nonlinear models that naturally exhibit periodic activity, and blowup cannot happen. Nonlinear I&F models are able to produce different patterns of activity and excitability types, while linear models cannot.
In this work we will analyse certain properties of the solutions to (1.4)-(1.5) with the nonlinear term due to the coupling of the mean firing rate given by (1.6). The next section is devoted to the finite time blowup of weak solutions to (1.4)-(1.6). In short, we show that, whatever the value of b>0, we can find suitable initial data, concentrated enough close to the firing potential, such that the defined weak solutions do not exist for all times. We remark that, in the same sense as Brunel [4], we use the term asynchronous for network states for which the firing rate tends asymptotically to a constant in time, while we call synchronous those for which this does not happen. Therefore, a possible interpretation of the blowup is that synchronization occurs in the model, since the firing rate diverges at a finite time, possibly creating a strong partial synchronization, that is, a part of the network firing at the same time. One could, however, also consider the blowup as an artifact of these solutions, since neurons firing arbitrarily fast is not biologically plausible. As long as the solution exists in the sense specified in Section 2, we can get a priori estimates on the {L}_{\mathit{loc}}^{1}-norm of the firing rate. Section 3 deals with the stationary states of (1.4)-(1.6). We show that there is a unique stationary state for b\le 0 and constant a, but for b>0 different cases may happen: one, two or no stationary states, depending on how large b is. In Section 4, we discuss the linear problem, b=0 with constant a, for which the general relative entropy principle applies, implying exponential convergence towards equilibrium. Finally, by means of numerical simulations, in Section 5 we illustrate the results of the previous sections about blowup and steady states.
Moreover, this numerical analysis allows us to conjecture about the nonlinear stability properties of the stationary states: in the case of only one steady state, it is asymptotically stable, while in the case of two different stationary solutions the results show that the one with the lower firing rate is locally asymptotically stable and the one with the higher stationary firing value is either unstable or has a very small region of attraction. Our results and simulations describe situations which can be identified with neuronal phenomena such as synchronization/asynchronization of a network and bistable networks. Bi- and multi-stable networks have been used, for instance, in models of visual perception and decision making [23–25]. Our analysis in Sections 2-3 and 5 implies that this simple model encodes complicated dynamics, in the sense that, only in terms of the connectivity parameter b, very different situations can be described: blowup, no steady state, only one steady state, and several stationary states.
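The behaviour just outlined can be explored with a very simple explicit finite-volume discretization of (1.4)-(1.6): the mean firing rate is read off as the discrete diffusive flux through V_F and re-injected at V_R, so the scheme conserves the total mass exactly, mirroring (1.6). The scheme and all parameter values below are our own illustrative choices, not the numerical method of Section 5:

```python
import numpy as np

# Illustrative parameters: h(v, N) = -v + bN, constant diffusion a0.
a0, b = 0.2, 0.5
V_min, V_R, V_F = -4.0, 1.0, 2.0
J = 300
dv = (V_F - V_min) / J
v = V_min + (np.arange(J) + 0.5) * dv     # cell centers
faces = V_min + np.arange(J + 1) * dv     # cell interfaces
jR = int((V_R - V_min) / dv)              # reset cell

p = np.exp(-v**2 / 0.5)                   # smooth initial density
p /= p.sum() * dv

dt = 0.2 * dv**2 / a0                     # explicit-diffusion stability
N = 0.0
for _ in range(2000):
    # firing rate = diffusive flux through V_F (Dirichlet ghost cell: p = 0 at V_F)
    N = 2.0 * a0 * p[-1] / dv
    h = -faces + b * N                    # drift with lagged (explicit) N
    F = np.zeros(J + 1)                   # F[0] = 0: no flux at the left end
    up = np.where(h[1:-1] > 0, p[:-1], p[1:])           # upwind drift values
    F[1:-1] = h[1:-1] * up - a0 * (p[1:] - p[:-1]) / dv
    F[-1] = N                             # outgoing flux at V_F equals N
    p = p - dt * (F[1:] - F[:-1]) / dv
    p[jR] += dt * N / dv                  # re-inject fired neurons at V_R

mass = p.sum() * dv
print(mass, N)                            # mass stays equal to 1
```

Since the only outflow, N, is re-injected at V_R, the discrete mass is conserved to machine precision, the discrete counterpart of the conservation property guaranteed by (1.6).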
2 Finite time blowup and a priori estimates for weak solutions
Since we study a nonlinear version of the backward Kolmogorov or Fokker-Planck equation (1.4), we start with the notion of solution:
Definition 2.1 We say that a pair of nonnegative functions (p,N) with p\in {L}^{\infty}({\mathbb{R}}^{+};{L}_{+}^{1}(-\infty ,{V}_{F})), N\in {L}_{\mathit{loc},+}^{1}({\mathbb{R}}^{+}) is a weak solution of (1.4)-(1.7) if, for any test function \varphi (v,t)\in {C}^{\infty}((-\infty ,{V}_{F}]\times [0,T]) such that \frac{{\partial}^{2}\varphi}{\partial {v}^{2}}, v\frac{\partial \varphi}{\partial v}\in {L}^{\infty}((-\infty ,{V}_{F})\times (0,T)), we have

{\int}_{0}^{T}{\int}_{-\infty}^{{V}_{F}}p(v,t)[-\frac{\partial \varphi}{\partial t}-h(v,N(t))\frac{\partial \varphi}{\partial v}-a(N(t))\frac{{\partial}^{2}\varphi}{\partial {v}^{2}}]\phantom{\rule{0.2em}{0ex}}dv\phantom{\rule{0.2em}{0ex}}dt={\int}_{0}^{T}N(t)[\varphi ({V}_{R},t)-\varphi ({V}_{F},t)]\phantom{\rule{0.2em}{0ex}}dt+{\int}_{-\infty}^{{V}_{F}}{p}^{0}(v)\varphi (v,0)\phantom{\rule{0.2em}{0ex}}dv-{\int}_{-\infty}^{{V}_{F}}p(v,T)\varphi (v,T)\phantom{\rule{0.2em}{0ex}}dv.\phantom{\rule{2em}{0ex}}(2.1)
Here, the notation {L}^{p}(\Omega ), 1\le p<\infty, refers to the space of functions f such that {|f|}^{p} is integrable in Ω, while {L}^{\infty} corresponds to the space of bounded functions in Ω. The set of infinitely differentiable functions in Ω, denoted by {C}^{\infty}(\Omega ), is used as the set of test functions in the notion of weak solution. These nonnegativity assumptions are reasonable. Indeed, for a given N(t), if we were to replace N by {N}^{+} in the right-hand side of (1.4), we would obtain a linear equation whose solution is nonnegative, and (1.6) gives N\ge 0; that this fixed point argument may work is a more involved issue, since we prove that there are not always global solutions. This requires suitable functional spaces, and it motivates the a priori estimates that we derive at the end of this section.
Let us remark that the growth condition on the test function, together with the assumption (1.7), implies that the term involving h(v,N) makes sense. By choosing test functions of the form \psi (t)\varphi (v), this formulation is equivalent to saying that, for all \varphi (v)\in {C}^{\infty}((-\infty ,{V}_{F}]) such that v\frac{\partial \varphi}{\partial v}\in {L}^{\infty}((-\infty ,{V}_{F})), we have that

\frac{d}{dt}{\int}_{-\infty}^{{V}_{F}}\varphi (v)p(v,t)\phantom{\rule{0.2em}{0ex}}dv={\int}_{-\infty}^{{V}_{F}}[h(v,N(t))\frac{\partial \varphi}{\partial v}+a(N(t))\frac{{\partial}^{2}\varphi}{\partial {v}^{2}}]p(v,t)\phantom{\rule{0.2em}{0ex}}dv+N(t)[\varphi ({V}_{R})-\varphi ({V}_{F})]\phantom{\rule{2em}{0ex}}(2.2)
holds in the distributional sense. It is trivial to check that weak solutions conserve the mass of the initial data by choosing \varphi =1 in (2.2), and thus,
The first result we show is that global-in-time weak solutions of (1.4)-(1.6) do not exist for all initial data in the case of an average-excitatory network. This result holds with less stringent hypotheses on the coefficients than (1.7), with an analogous notion of weak solution to that in Definition 2.1.
Theorem 2.2 (Blowup)
Assume that the drift and diffusion coefficients satisfy
h(v,N)+v\ge bN,\phantom{\rule{1em}{0ex}}a(N)\ge {a}_{m}>0,\phantom{\rule{2em}{0ex}}(2.4)

for all -\infty <v\le {V}_{F} and all N\ge 0, and let us consider the average-excitatory network where b>0. Choose \mu >max(\frac{{V}_{F}}{{a}_{m}},\frac{1}{b}). If the initial data is concentrated enough around v={V}_{F}, in the sense that

{\int}_{-\infty}^{{V}_{F}}{e}^{\mu v}{p}^{0}(v)\phantom{\rule{0.2em}{0ex}}dv
is close enough to {e}^{\mu {V}_{F}}, then there are no global-in-time weak solutions to (1.4)-(1.6).
Proof We choose the multiplier \varphi (v)={e}^{\mu v} with \mu >0 and define the number

\lambda :=\frac{{e}^{\mu {V}_{F}}}{\mu b}<{e}^{\mu {V}_{F}}

by hypotheses. For a weak solution according to (2.1), we find from (2.2) that
where (2.4) and the fact that v\in (-\infty ,{V}_{F}) were used. Let us now choose μ large enough such that \mu {a}_{m}-{V}_{F}>0, according to our hypotheses, and denote

{M}_{\mu}(t):={\int}_{-\infty}^{{V}_{F}}{e}^{\mu v}p(v,t)\phantom{\rule{0.2em}{0ex}}dv,

which satisfies

\frac{d{M}_{\mu}}{dt}(t)\ge \mu (\mu {a}_{m}-{V}_{F}){M}_{\mu}(t)+N(t)(\mu b{M}_{\mu}(t)-{e}^{\mu {V}_{F}}).\phantom{\rule{2em}{0ex}}(2.5)
If initially {M}_{\mu}(0)\ge \lambda, then using Gronwall's Lemma and the fact that N(t)\ge 0, we have that {M}_{\mu}(t)\ge \lambda for all t\ge 0, and back in (2.5) we find

\frac{d{M}_{\mu}}{dt}(t)\ge \mu (\mu {a}_{m}-{V}_{F}){M}_{\mu}(t),
which in turn implies

{M}_{\mu}(t)\ge {M}_{\mu}(0){e}^{\mu (\mu {a}_{m}-{V}_{F})t}\to \infty \phantom{\rule{1em}{0ex}}\text{as}\phantom{\rule{0.2em}{0ex}}t\to \infty .
On the other hand, since p(v,t) is a probability density, see (2.3), and \mu >0, then

{M}_{\mu}(t)={\int}_{-\infty}^{{V}_{F}}{e}^{\mu v}p(v,t)\phantom{\rule{0.2em}{0ex}}dv\le {e}^{\mu {V}_{F}},
leading to a contradiction.
It remains to show that the set of initial data satisfying the size condition {M}_{\mu}(0)\ge \lambda is not empty. To verify this, we can approximate an initial Dirac mass at {V}_{F} as closely as we want by smooth initial probability densities, which gives the condition

{e}^{\mu {V}_{F}}\ge \lambda =\frac{{e}^{\mu {V}_{F}}}{\mu b}.
This can be equivalently written as

\mu b\ge 1.
Choosing \mu >max(\frac{1}{b},\frac{{V}_{F}}{{a}_{m}}), these conditions are obviously fulfilled. □
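The size condition of Theorem 2.2 is easy to exhibit numerically. The sketch below (all parameter values hypothetical) takes a narrow Gaussian concentrated near V_F, normalizes it on the computational interval, and checks that M_μ(0) = ∫ e^{μv} p⁰(v) dv exceeds the threshold λ = e^{μV_F}/(μb) appearing in the proof:

```python
import numpy as np

# Hypothetical parameters for the setting of Theorem 2.2.
b, a_m, V_F = 1.0, 1.0, 2.0
mu = 3.0                                  # any mu > max(V_F / a_m, 1 / b) = 2
lam = np.exp(mu * V_F) / (mu * b)         # lambda = e^{mu V_F} / (mu b)

# Initial datum concentrated near V_F: a narrow Gaussian truncated at V_F.
v = np.linspace(V_F - 0.3, V_F, 20_001)
dv = v[1] - v[0]
p0 = np.exp(-(v - (V_F - 0.05))**2 / (2 * 0.02**2))
p0 /= p0.sum() * dv                       # normalize to a probability density

M0 = (np.exp(mu * v) * p0).sum() * dv     # M_mu(0) = int e^{mu v} p^0 dv
print(M0, lam)
```

Any probability density close enough to a Dirac mass at V_F passes this test, since then M_μ(0) ≈ e^{μV_F} > e^{μV_F}/(μb) whenever μb > 1.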
As usual for this type of blowup result, similar in spirit to the classical Keller-Segel model for chemotaxis [26, 27], the proof only ensures that solutions for those initial data do not exist beyond a finite maximal time of existence. It does not characterize the nature of the first singularity which occurs. It implies that either the decay at infinity fails, although this is not probable, meaning that the time evolution of the probability densities ceases to be tight, or the function N(t) becomes a singular measure in finite time instead of remaining an {L}_{\mathit{loc}}^{1}({\mathbb{R}}^{+}) function. Actually, in the numerical computations shown in Section 5, we observe a blowup of the value of the mean firing rate in finite time. Continuing the solution would require a modification of the notion of solution introduced in Definition 2.1. This would be useful since the firing rate does not become constant in time, and consequently a possible interpretation is that synchronization occurs, a phenomenon that is of interest to neuroscientists; see also the comments in the introduction and the conclusions.
Although in this paper the nature of the blowup is not mathematically identified, we devote the rest of the section to proving some a priori estimates which shed some light in this direction. To be more precise, our estimates indicate that this blowup should not come from a loss of mass at v\approx -\infty, or from a lack of fast decay rate, because the second moment in v is controlled uniformly in blowup situations. We obtain these a priori bounds with the help of appropriate choices of the test function φ in (2.1). Some of these choices are not allowed due to the growth at −∞ of the test functions. We will say that a weak solution is fast-decaying at −∞ if it is a weak solution in the sense of Definition 2.1 and the weak formulation (2.2) holds for all test functions growing at most algebraically in v.
Lemma 2.3 (A priori estimates)
Assume (1.7) on the drift and diffusion coefficients and that (p,N) is a global-in-time solution of (1.4)-(1.6) in the sense of Definition 2.1, fast decaying at −∞. Then the following a priori estimates hold for all T>0:

(i)
If b\ge {V}_{F}-{V}_{R}, then

\begin{array}{rcl}{\int}_{-\infty}^{{V}_{F}}({V}_{F}-v)p(v,t)\phantom{\rule{0.2em}{0ex}}dv& \le & max({V}_{F},{\int}_{-\infty}^{{V}_{F}}({V}_{F}-v){p}^{0}(v)\phantom{\rule{0.2em}{0ex}}dv),\\ (b-{V}_{F}+{V}_{R}){\int}_{0}^{T}N(t)\phantom{\rule{0.2em}{0ex}}dt& \le & {V}_{F}T+{\int}_{-\infty}^{{V}_{F}}({V}_{F}-v){p}^{0}(v)\phantom{\rule{0.2em}{0ex}}dv.\end{array}
(ii)
If b<{V}_{F}-{V}_{R}, then

{\int}_{-\infty}^{{V}_{F}}({V}_{F}-v)p(v,t)\phantom{\rule{0.2em}{0ex}}dv\ge min({V}_{F},{\int}_{-\infty}^{{V}_{F}}({V}_{F}-v){p}^{0}(v)\phantom{\rule{0.2em}{0ex}}dv).
Moreover, if in addition a is constant, then

{\int}_{0}^{T}N(t)\phantom{\rule{0.2em}{0ex}}dt\le C(1+T).\phantom{\rule{2em}{0ex}}(2.6)
Proof Using (1.7) together with our decay assumption at −∞, we may use the test function \varphi (v)={V}_{F}-v\ge 0. Then (2.2) gives
This can also be written as

\frac{d}{dt}{\int}_{-\infty}^{{V}_{F}}({V}_{F}-v)p(v,t)\phantom{\rule{0.2em}{0ex}}dv={V}_{F}-{\int}_{-\infty}^{{V}_{F}}({V}_{F}-v)p(v,t)\phantom{\rule{0.2em}{0ex}}dv+({V}_{F}-{V}_{R}-b)N(t).\phantom{\rule{2em}{0ex}}(2.7)
To prove (i), we notice that with our condition on b, the term in N(t) is nonpositive and the first result follows from Gronwall’s inequality. The second result just follows after integration in time.
To prove (ii), we first use again (2.7) and, because the term in N(t) is nonnegative, we find the first result. To obtain the second estimate (2.6), given ϵ\in (0,({V}_{F}-{V}_{R})/2), we can always choose a smooth truncation function {\varphi}_{ϵ}(v)\in {C}^{2} such that

with {\varphi}_{ϵ}^{\prime\prime}(v)=0 outside the interval ({V}_{R},{V}_{R}+ϵ) such that

and thus {\varphi}_{ϵ}^{\prime\prime}\in {L}^{\infty}(-\infty ,{V}_{F}) with size of order {ϵ}^{-1}. In other words, we have chosen a {C}^{2} uniform approximation of the truncation {(v-{V}_{R})}_{+}/({V}_{F}-{V}_{R}), with {x}_{+}=max(x,0), obtained by integrating twice a smooth suitable approximation of \delta (v-{V}_{R})/({V}_{F}-{V}_{R}). Then, equation (2.2) gives
Since {\varphi}_{ϵ}^{\prime}(v) is positive and nondecreasing and using (2.8), we get that for all 0<\tilde{\epsilon}<1 there exists {ϵ}_{0} small enough such that for all 0<ϵ<{ϵ}_{0}:
Due to the hypothesis b<{V}_{F}-{V}_{R}, for any γ such that 0<\gamma <1-b/({V}_{F}-{V}_{R}), we can find \tilde{\epsilon} small enough such that
Taking into account (2.8), (2.9), and that p(v,t) is a probability density, we have for 0<ϵ<{ϵ}_{0} small enough
Choosing now ϵ={ϵ}_{0}/2, for instance, integration in time of the last inequality leads to the desired inequality (2.6). □
Corollary 2.4 Under the assumptions of Lemma 2.3, and assuming {v}^{2}{p}^{0}(v)\in {L}^{1}(-\infty ,{V}_{F}) and 0<b<{V}_{F}-{V}_{R}, the following a priori estimates hold:

(i)
If additionally a is constant, for all t\ge 0 we have

{\int}_{-\infty}^{{V}_{F}}{v}^{2}p(v,t)\phantom{\rule{0.2em}{0ex}}dv\le C(1+t).
(ii)
If additionally -b\phantom{\rule{0.2em}{0ex}}min({V}_{F},{\int}_{-\infty}^{{V}_{F}}({V}_{F}-v){p}^{0}(v)\phantom{\rule{0.2em}{0ex}}dv)+{a}_{1}+b{V}_{F}+\frac{{V}_{R}^{2}-{V}_{F}^{2}}{2}\le 0, then

{\int}_{-\infty}^{{V}_{F}}{v}^{2}p(v,t)\phantom{\rule{0.2em}{0ex}}dv\le max({a}_{0},{\int}_{-\infty}^{{V}_{F}}{v}^{2}{p}^{0}(v)\phantom{\rule{0.2em}{0ex}}dv).
Proof We use again the weak formulation (2.1) with \varphi (v)={v}^{2}/2 as test function and get
thanks to the first statement of Lemma 2.3(ii).
To prove (i), we just use the second statement of Lemma 2.3(ii), valid for a constant, which tells us that the time integration of the right-hand side grows at most linearly in time, and so does {\int}_{-\infty}^{{V}_{F}}{v}^{2}p(v,t)\phantom{\rule{0.2em}{0ex}}dv.
To prove (ii), we just use that the bracket is nonpositive and the result follows. □
3 Steady states
3.1 Generalities
This section is devoted to finding all smooth stationary solutions of the problem (1.4)-(1.6) in the particularly relevant case of a drift of the form h(v)={V}_{0}(N)-v. Let us search for continuous stationary solutions p of (1.4) such that p is {C}^{1} regular except possibly at v={V}_{R}, where it is Lipschitz. Using the definition in (2.2), we are then allowed, by a direct integration by parts in the second derivative term of p, to deduce that p satisfies

\frac{\partial}{\partial v}[({V}_{0}(N)-v)p(v)-a(N)\frac{\partial p}{\partial v}(v)]=N\delta (v-{V}_{R})\phantom{\rule{2em}{0ex}}(3.1)
in the sense of distributions, with H being the Heaviside function, that is, H(u)=1 for u\ge 0 and H(u)=0 for u<0. Therefore, we conclude that

a(N)\frac{\partial p}{\partial v}(v)+(v-{V}_{0}(N))p(v)+N\phantom{\rule{0.2em}{0ex}}H(v-{V}_{R})=C.
The definition of N in (1.6) and the Dirichlet boundary condition (1.5) imply C=0, by evaluating this expression at v={V}_{F}. Using again the boundary condition (1.5), p({V}_{F})=0, we may finally integrate again and find that

p(v)=\frac{N}{a(N)}{e}^{-\frac{{(v-{V}_{0}(N))}^{2}}{2a(N)}}{\int}_{v}^{{V}_{F}}{e}^{\frac{{(w-{V}_{0}(N))}^{2}}{2a(N)}}H(w-{V}_{R})\phantom{\rule{0.2em}{0ex}}dw,
which can be rewritten, using the expression of the Heaviside function, as

p(v)=\frac{N}{a(N)}{e}^{-\frac{{(v-{V}_{0}(N))}^{2}}{2a(N)}}{\int}_{max(v,{V}_{R})}^{{V}_{F}}{e}^{\frac{{(w-{V}_{0}(N))}^{2}}{2a(N)}}\phantom{\rule{0.2em}{0ex}}dw.\phantom{\rule{2em}{0ex}}(3.2)
Moreover, the firing rate N in the stationary state is determined by the normalization condition (2.3), or equivalently,

\frac{N}{a(N)}{\int}_{-\infty}^{{V}_{F}}{e}^{-\frac{{(v-{V}_{0}(N))}^{2}}{2a(N)}}{\int}_{max(v,{V}_{R})}^{{V}_{F}}{e}^{\frac{{(w-{V}_{0}(N))}^{2}}{2a(N)}}\phantom{\rule{0.2em}{0ex}}dw\phantom{\rule{0.2em}{0ex}}dv=1.\phantom{\rule{2em}{0ex}}(3.3)
Summarizing, all solutions p of the stationary problem (3.1) with the above regularity are of the form given by the expression (3.2), where N is any positive solution of the implicit equation (3.3).
Let us first comment that in the linear case, {V}_{0}(N)=0 and a(N)={a}_{0}>0, we then get a unique stationary state {p}_{\infty}, given in terms of the Dawson function by

{p}_{\infty}(v)=\frac{{N}_{\infty}}{{a}_{0}}{e}^{-\frac{{v}^{2}}{2{a}_{0}}}{\int}_{max(v,{V}_{R})}^{{V}_{F}}{e}^{\frac{{w}^{2}}{2{a}_{0}}}\phantom{\rule{0.2em}{0ex}}dw\phantom{\rule{2em}{0ex}}(3.4)
with {N}_{\infty} the normalizing constant giving unit mass over the interval (-\infty ,{V}_{F}].
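The stationary profile of the linear case can be evaluated by direct quadrature. The sketch below (illustrative parameter values) normalizes the profile, recovers the stationary firing rate N_∞, and checks the self-consistency relation N_∞ = −a_0 ∂_v p_∞(V_F):

```python
import numpy as np

# Illustrative parameters for the linear case V_0 = 0, a = a0.
a0, V_R, V_F = 1.0, 1.0, 2.0
v = np.linspace(-6.0, V_F, 8001)
dv = v[1] - v[0]

inner = np.exp(v**2 / (2 * a0))
T = np.cumsum(inner[::-1])[::-1] * dv            # ~ int_v^{V_F} e^{w^2/(2 a0)} dw
tail = np.where(v < V_R, T[np.searchsorted(v, V_R)], T)   # lower limit max(v, V_R)

q = np.exp(-v**2 / (2 * a0)) * tail              # profile up to the factor N_inf/a0
N_inf = a0 / (q.sum() * dv)                      # unit mass fixes N_inf
p_inf = (N_inf / a0) * q

# self-consistency: the flux -a0 dp/dv at V_F recovers the firing rate N_inf
flux = -a0 * (p_inf[-1] - p_inf[-2]) / dv
print(N_inf, flux)
```

The one-sided difference at V_F reproduces N_∞ up to a discretization error of order dv, which is the discrete counterpart of the flux relation (1.6).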
The rest of this section is devoted to finding conditions on the parameters of the model clarifying the number of solutions to (3.3). With this aim, it is convenient to perform a change of variables and use the new notation
where the N dependency has been omitted to simplify the notation. Then, as in [3], we can rewrite the previous integral (and thus the condition for a steady state) as
Another alternative form of I(N) follows from the change of variables s=(z-u)/2 and \tilde{s}=(z+u)/2, to get
and consequently,
3.2 Case of a(N)={a}_{0}
We are now ready to state our main result on steady states.
Theorem 3.1 Assume h(v,N)=bN-v, a(N)={a}_{0} is constant and {V}_{0}=bN.

(i)
For b<0 and for b>0 small enough there is a unique steady state of (1.4)-(1.6).

(ii)
Under either the condition
0<b<{V}_{F}-{V}_{R},\phantom{\rule{2em}{0ex}}(3.8)
or the condition
2{a}_{0}b<{({V}_{F}-{V}_{R})}^{2}{V}_{R},\phantom{\rule{2em}{0ex}}(3.9)

then there exists at least one steady state solution to (1.4)-(1.6).

(iii)
If both (3.9) and b>{V}_{F}-{V}_{R} hold, then there are at least two steady states of (1.4)-(1.6).

(iv)
There is no steady state of (1.4)-(1.6) under the high connectivity condition
b>max(2({V}_{F}-{V}_{R}),2{V}_{F}I(0)).\phantom{\rule{2em}{0ex}}(3.10)
Remark 3.2 It is natural to relate the absence of a steady state for b large with the blowup of solutions. However, Theorem 2.2 in Section 2 shows this is not the only possible cause, since the blowup can happen for initial data concentrated enough around {V}_{F}, independently of the value of b>0. See also Section 5 for related numerical results.
Proof Let us first study properties of the function I(N). To do that, we rewrite (3.7) as

I(N)={\int}_{0}^{\infty}\frac{{e}^{-{s}^{2}/2}}{s}{e}^{-\frac{sbN}{\sqrt{{a}_{0}}}}({e}^{\frac{s{V}_{F}}{\sqrt{{a}_{0}}}}-{e}^{\frac{s{V}_{R}}{\sqrt{{a}_{0}}}})\phantom{\rule{0.2em}{0ex}}ds.
Taking the function f(s)={e}^{\frac{s{V}_{F}}{\sqrt{{a}_{0}}}}-{e}^{\frac{s{V}_{R}}{\sqrt{{a}_{0}}}} and Taylor expanding up to second order at s=0, we get f(s)-f(0)-{f}^{\prime}(0)s={f}^{\prime\prime}(\theta ){s}^{2}/2 with f(0)=0, {f}^{\prime}(0)=({V}_{F}-{V}_{R})/\sqrt{{a}_{0}}, and \theta \in (0,s). It is easy to see that
for all \theta \in (0,s). By distinguishing the cases based on the signs of {V}_{F} and {V}_{R}, this Taylor expansion implies that
for all s\ge 0. Then, a direct application of the dominated convergence theorem and continuity theorems for integrals depending on parameters shows that the function I(N) is continuous in N on [0,\infty ). Moreover, the function I(N) is {C}^{\infty} in N, since all its derivatives can be computed by differentiating under the integral sign, by direct application of dominated convergence and differentiation theorems for integrals depending on parameters. In particular,
and for all integers k\ge 1,
As a consequence, we deduce:

1.
Case b<0: I(N) is an increasing strictly convex function and thus
\underset{N\to \infty}{lim}I(N)=\infty . 
2.
Case b>0: I(N) is a decreasing convex function. Also, it is obvious from the previous expansion (3.11) and dominated convergence theorem that
\underset{N\to \infty}{lim}I(N)=0.
It is also useful to keep in mind that, thanks to the form of I(N) in (3.6),

0<I(0)<\infty .\phantom{\rule{2em}{0ex}}(3.12)
Now, let us show that, for b>0, we have

\underset{N\to \infty}{lim}NI(N)=\frac{{V}_{F}-{V}_{R}}{b}.\phantom{\rule{2em}{0ex}}(3.13)
Using (3.11), we deduce
A direct application of the dominated convergence theorem shows that the right-hand side converges to 0 as N\to \infty, since sN\phantom{\rule{0.2em}{0ex}}exp(-\frac{sbN}{\sqrt{{a}_{0}}}) is a bounded function, uniformly in N and s. Thus, the computation of the limit is reduced to showing
With this aim, we rewrite the integral in terms of the complementary error function, defined as

\mathit{erfc}(x)=\frac{2}{\sqrt{\pi}}{\int}_{x}^{\infty}{e}^{-{u}^{2}}\phantom{\rule{0.2em}{0ex}}du,
and then
Finally, we can obtain the limit (3.14) using L’Hôpital’s rule
With this analysis of the function I(N), we can now prove each of the statements of Theorem 3.1:
Proof of (i) Let us start with the case b<0. Here, the function I(N) is increasing, starting at I(0)<\infty due to (3.12) and such that
Therefore, it crosses the function 1/N at a single point.
Now, for the case b>0 small, we first remark that similar dominated convergence arguments as above show that both I(N) and {I}^{\prime}(N) are smooth functions of b. Moreover, it is simple to realize that I(N) is a decreasing function of the parameter b. Now, choosing 0<b\le {b}_{\ast}<({V}_{F}-{V}_{R})/2, we have I(N)\ge {I}_{\ast}(N) for all N\ge 0, where {I}_{\ast}(N) denotes the function associated to the parameter {b}_{\ast}. Using the limit (3.13), we can now infer the existence of {N}_{\ast}>0, depending only on {b}_{\ast}, such that

N{I}_{\ast}(N)>1
for all N\ge {N}_{\ast}. Therefore, by continuity of NI(N), there are solutions to NI(N)=1, and all possible crossings of I(N) and 1/N are in the interval [0,{N}_{\ast}]. We observe that I(N) and {I}^{\prime}(N) converge towards the constant function I(0)>0 and to 0, respectively, uniformly on the interval [0,{N}_{\ast}] as b\to 0. Therefore, for b small, NI(N) is strictly increasing on the interval [0,{N}_{\ast}] and there is a unique solution to NI(N)=1. □
Proof of (ii)
Case of (3.8) The claim that there are solutions to NI(N)=1 for 0<b<{V}_{F}-{V}_{R} is a direct consequence of the continuity of I(N), (3.12) and (3.13).
Case of (3.9) We are going to prove that I(N)\ge 1/N for \frac{2{a}_{0}}{{({V}_{R}-{V}_{F})}^{2}}<N<\frac{{V}_{R}}{b}, which gives the existence of a steady state, since I(0)<\infty, due to (3.12), implies that I(N)<1/N for small N. Condition (3.9) only asserts that this interval for N is not empty. To do so, we show that
which obviously concludes the desired inequality I(N)\ge 1/N for the interval of N under consideration.
The condition \frac{{V}_{R}}{b}>N is equivalent to {w}_{R}>0, therefore, using (3.5) and the expression for I(N) in (3.6), we deduce
Since z>0 and {e}^{\frac{{u}^{2}}{2}} is an increasing function for u>0, then {e}^{\frac{{u}^{2}}{2}}\ge {e}^{\frac{{z}^{2}}{2}} on [z,{w}_{F}], and we conclude
Proof of (iii) Under the condition (3.9), we have shown in the previous point the existence of an interval where I(N)>1/N. On the other hand, I(0)<\infty in (3.12) implies that I(N)<1/N for N small, and the condition b>{V}_{F}-{V}_{R} implies that I(N)<1/N for N large enough, due to the limit (3.13); thus there are at least two crossings between I(N) and 1/N. □
Proof of (iv) Under assumption (3.10) on b, it is easy to check that the following inequalities hold
and
We consider N such that N>{V}_{F}/b; this means that {w}_{F}<0. We use the formula (3.7) for I(N) and write the inequalities
where the mean-value theorem and {w}_{F}<0 were used. Then, we conclude that
Therefore, using Inequality (3.16):
and, due to the fact that I is decreasing and Inequality (3.15), we have I(N)<I(0)<1/N for N\le 2{V}_{F}/b. In this way, we have shown that I(N)<1/N for all N, and consequently there is no steady state. □
Remark 3.3 The functions I(N) and 1/N are depicted in Figure 1 for the case {V}_{0}(N)=bN and a(N)={a}_{0}, illustrating the main result: steady states exist for small b and do not exist for large b, while there is an intermediate range with two stationary states. The numerical plots of the function NI(N) might indicate that there are only three possibilities: one stationary state, two stationary states, or no stationary state. However, we are not able to prove or disprove the uniqueness of a maximum of the function NI(N), which would eventually give this sharp result.
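The crossings of I(N) and 1/N discussed above can be reproduced numerically by evaluating the steady-state normalization condition directly, without the reformulations in terms of erfc. The parameter values below are illustrative; for an average-inhibitory network, b < 0, the function N I(N) − 1 changes sign once, in agreement with the unique steady state of Theorem 3.1(i):

```python
import numpy as np

# I(N) is the normalization integral of the stationary profile (3.2) with
# V_0(N) = bN, so that steady states solve N I(N) = 1.  Illustrative parameters.
a0, V_R, V_F = 1.0, 1.0, 2.0

def I(N, b, n=4001, v_min=-12.0):
    v = np.linspace(v_min, V_F, n)
    dv = v[1] - v[0]
    g = np.exp((v - b * N)**2 / (2 * a0))
    T = np.cumsum(g[::-1])[::-1] * dv       # ~ int_v^{V_F} e^{(w-bN)^2/(2a0)} dw
    tail = np.where(v < V_R, T[np.searchsorted(v, V_R)], T)
    return (np.exp(-(v - b * N)**2 / (2 * a0)) * tail).sum() * dv / a0

b = -1.0                                    # average-inhibitory network
f = lambda N: N * I(N, b) - 1.0
print(f(0.01), f(5.0))                      # sign change: a unique crossing
```

Repeating the same scan for moderate b > 0 produces zero, one or two sign changes of N I(N) − 1 depending on the size of b, in line with the cases of Theorem 3.1.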
Remark 3.4 The condition (3.9) is not optimal, and it can be improved by using one more term in the series expansion of the exponentials inside the integral in the expression of I(N) in (3.7). More precisely, if {w}_{F}>{w}_{R}>0, we use
In this way, we get
since
Then, condition (3.9) can be improved to
Of course, this last inequality is not optimal either, for the same reason as before.
3.3 Case of a(N)={a}_{0}+{a}_{1}N
We now treat the case a(N)={a}_{0}+{a}_{1}N, with {a}_{0},{a}_{1}>0 and b>0. Proceeding as above, we can obtain from (3.7) the expression of its derivative
where
Therefore I(N) is decreasing since
Moreover, we can check that the computation of the limit (3.13) still holds. Actually, we have
where we have used the change of variables \alpha =\frac{N}{\sqrt{2a}} and L'Hôpital's rule. In the case b<0, we can observe, by the same proof as before, that I(N)\to \infty as N\to \infty, and thus, by continuity, there is at least one solution to NI(N)=1. Nevertheless, it seems difficult to clarify completely the number of solutions, due to the competing monotone functions in (3.17).
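The same quadrature applies verbatim when the diffusion coefficient is a(N) = a_0 + a_1 N; the sketch below (illustrative parameter values) evaluates I(N) for a few values of N and confirms numerically that it is decreasing for b > 0, as claimed:

```python
import numpy as np

# Illustrative parameters with N-dependent diffusion a(N) = a0 + a1 N and b > 0.
a0, a1, b = 1.0, 0.5, 1.0
V_R, V_F = 1.0, 2.0

def I(N, n=4001, v_min=-12.0):
    a = a0 + a1 * N                         # diffusion evaluated at this N
    v = np.linspace(v_min, V_F, n)
    dv = v[1] - v[0]
    g = np.exp((v - b * N)**2 / (2 * a))
    T = np.cumsum(g[::-1])[::-1] * dv       # ~ int_v^{V_F} e^{(w-bN)^2/(2a)} dw
    tail = np.where(v < V_R, T[np.searchsorted(v, V_R)], T)
    return (np.exp(-(v - b * N)**2 / (2 * a)) * tail).sum() * dv / a

vals = [I(N) for N in (0.0, 0.5, 1.0, 2.0, 4.0)]
print(vals)                                 # decreasing sequence
```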
The generalization of part of Theorem 3.1 is contained in the following result. We skip its proof since it essentially follows the same steps as before, with the new ingredients just mentioned.
Corollary 3.5 Assume h(v,N)=bN-v and a(N)={a}_{0}+{a}_{1}N with {a}_{0},{a}_{1}>0.

(i)
Under either the condition b<{V}_{F}-{V}_{R}, or the conditions b>0 and 2{a}_{0}b+2{a}_{1}{V}_{R}<{({V}_{F}-{V}_{R})}^{2}{V}_{R}, there exists at least one steady state solution to (1.4)-(1.6).

(ii)
If both 2{a}_{0}b+2{a}_{1}{V}_{R}<{({V}_{F}-{V}_{R})}^{2}{V}_{R} and b>{V}_{F}-{V}_{R} hold, then there are at least two steady states of (1.4)-(1.6).

(iii)
There is no steady state to (1.4)-(1.6) for b>max(2({V}_{F}-{V}_{R}),2{V}_{F}I(0)).
These behaviours are depicted in Figure 2. Let us point out that if a is linear and b<0, I(N) need not be strictly increasing, unlike in the constant diffusion case, and it may have a minimum for some N>0.
At this point, a natural question is the stability of these steady states. In the next section we study it in the linear case, when the model has only one steady state. An extension of the same techniques, entropy methods, to the nonlinear case is not straightforward at all. However, the results obtained in the linear case lead us to expect that for a small connectivity parameter b the only steady state could be stable. On the other hand, the numerical results presented in Section 5 give some evidence of the stability or instability in the different situations described by this model, with one or two steady states; see that section for details.
4 Linear equation and relaxation
We study specifically the case of a linear equation, that is, b=0 and a(N)={a}_{0}:
\frac{\partial p}{\partial t}(v,t)=\frac{\partial}{\partial v}[vp(v,t)]+{a}_{0}\frac{{\partial}^{2}p}{\partial {v}^{2}}(v,t)+N(t)\phantom{\rule{0.2em}{0ex}}\delta (v-{V}_{R}),\phantom{\rule{1em}{0ex}}v\le {V}_{F},(4.1)
complemented with p({V}_{F},t)=0, p(-\infty ,t)=0 and N(t)=-{a}_{0}\frac{\partial p}{\partial v}({V}_{F},t).
For later purposes, we recall that the steady state {p}_{\infty}(v) given in (3.4) satisfies, for the case at hand, the equation
\frac{\partial}{\partial v}[v{p}_{\infty}(v)+{a}_{0}\frac{\partial {p}_{\infty}}{\partial v}(v)]+{N}_{\infty}\phantom{\rule{0.2em}{0ex}}\delta (v-{V}_{R})=0,\phantom{\rule{1em}{0ex}}{N}_{\infty}=-{a}_{0}\frac{\partial {p}_{\infty}}{\partial v}({V}_{F}),(4.2)
so that the flux v{p}_{\infty}+{a}_{0}{\partial}_{v}{p}_{\infty} vanishes for v<{V}_{R} and equals -{N}_{\infty} for {V}_{R}<v<{V}_{F}.
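In the linear case, one can check directly from the explicit profile (3.4) that the associated flux v{p}_{\infty}+{a}_{0}{\partial}_{v}{p}_{\infty} is piecewise constant: zero below {V}_{R} and -N between {V}_{R} and {V}_{F}. The sketch below verifies this numerically; the parameter values {a}_{0}=1, {V}_{R}=1, {V}_{F}=2 are those of Section 5, N=1 is an arbitrary scale (the relation is linear in N), and the sample points 0.3 and 1.5 are arbitrary choices on each side of {V}_{R}.

```python
import numpy as np
from scipy.integrate import quad

# Parameters as in Section 5 (illustrative); N = 1 is an arbitrary scale,
# since the flux relation checked below is linear in N.
a0, V_R, V_F, N = 1.0, 1.0, 2.0, 1.0

def p_inf(v):
    """Steady profile (3.4) for b = 0: (N/a0) e^{-v^2/(2 a0)} times a tail integral."""
    tail = quad(lambda w: np.exp(w ** 2 / (2 * a0)), max(v, V_R), V_F)[0]
    return (N / a0) * np.exp(-v ** 2 / (2 * a0)) * tail

def flux(v, h=1e-4):
    """v * p_inf + a0 * p_inf', with the derivative by central differences."""
    dp = (p_inf(v + h) - p_inf(v - h)) / (2 * h)
    return v * p_inf(v) + a0 * dp

print(flux(0.3), flux(1.5))   # expected ~0 below V_R and ~-N on (V_R, V_F)
```

In the actual steady state, N is of course fixed by the normalization of {p}_{\infty} to unit mass; the flux identity above holds for any N by linearity.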
We will assume in this section that the initial data satisfies, for some {C}^{0}>0,
{p}^{0}(v)\le {C}^{0}{p}_{\infty}(v)\phantom{\rule{1em}{0ex}}\text{for all }v\le {V}_{F}.(4.3)
Then, we take for granted that solutions of the linear problem exist, with the regularity needed in each result below, and such that for all t\ge 0
p(v,t)\le {C}^{0}{p}_{\infty}(v)\phantom{\rule{1em}{0ex}}\text{for all }v\le {V}_{F}.(4.4)
The estimate (4.4) follows a posteriori from the relative entropy inequality stated below; see further comments at the end of this section. This indicates that the hypothesis (4.3) on the initial data is easily propagated in time, giving (4.4), once a well-posedness theory of classical fast-decaying solutions is at hand. Such solutions to (4.1) and (4.2) might be obtained by the method developed in [28] and will be analysed elsewhere.
We prove that the solutions to (4.1) converge for large times to the unique steady state {p}_{\infty}(v).
Theorem 4.1 (Exponential decay)
Fast-decaying solutions to the equation (4.1) verifying (4.4) satisfy
{\int}_{-\infty}^{{V}_{F}}\frac{{(p(v,t)-{p}_{\infty}(v))}^{2}}{{p}_{\infty}(v)}\phantom{\rule{0.2em}{0ex}}dv\le {e}^{-2\nu t}{\int}_{-\infty}^{{V}_{F}}\frac{{({p}^{0}(v)-{p}_{\infty}(v))}^{2}}{{p}_{\infty}(v)}\phantom{\rule{0.2em}{0ex}}dv,
with \nu >0 the constant in Proposition 4.3.
This result shows that no synchronization of neuronal activity can be expected when the network is not connected, since solutions tend to produce a constant firing rate, a very intuitive conclusion. Because the rate of decay is exponential, we also expect that a small connectivity cannot create synchronization either, again an intuitive conclusion proved rigorously for the elapsed-time structured model in [22]. The proof also shows that two relaxation processes are involved in this effect: dissipation by the diffusion term and dissipation by the firing term. These relaxation effects are stated in the following theorem, which also gives the natural bounds for the solutions to equation (4.1) (choosing G(u)={u}^{2} gives the natural energy space of the system, a weighted {L}^{2} space).
Theorem 4.2 (Relative entropy inequality)
Fast-decaying solutions to equation (4.1) verifying (4.4) satisfy, for any smooth convex function G:{\mathbb{R}}^{+}\longrightarrow \mathbb{R}, the inequality
with
Proof of Theorem 4.1 The proof is standard in Fokker-Planck theory and follows by applying the relative entropy inequality (4.5) with G(x)={(x-1)}^{2}. Then, we obtain
\frac{d}{dt}{\int}_{-\infty}^{{V}_{F}}{p}_{\infty}{(\frac{p}{{p}_{\infty}}-1)}^{2}\phantom{\rule{0.2em}{0ex}}dv\le -2{a}_{0}{\int}_{-\infty}^{{V}_{F}}{p}_{\infty}{|\frac{\partial}{\partial v}(\frac{p}{{p}_{\infty}})|}^{2}\phantom{\rule{0.2em}{0ex}}dv.(4.6)
To proceed further, we need an additional technical ingredient that we state and whose proof is postponed to an Appendix.
Proposition 4.3 There exists \nu >0 such that
\nu {\int}_{-\infty}^{{V}_{F}}{(\frac{q(v)}{{p}_{\infty}(v)})}^{2}{p}_{\infty}(v)\phantom{\rule{0.2em}{0ex}}dv\le {a}_{0}{\int}_{-\infty}^{{V}_{F}}{|\frac{\partial}{\partial v}(\frac{q(v)}{{p}_{\infty}(v)})|}^{2}{p}_{\infty}(v)\phantom{\rule{0.2em}{0ex}}dv
for all functions q such that \frac{q}{{p}_{\infty}}\in {H}^{1}({p}_{\infty}(v)\phantom{\rule{0.2em}{0ex}}dv) and {\int}_{-\infty}^{{V}_{F}}q(v)\phantom{\rule{0.2em}{0ex}}dv=0.
The Poincaré-like inequality in Proposition 4.3, applied to q=p-{p}_{\infty}, bounds the right-hand side of (4.6):
Finally, the Gronwall lemma directly gives the result. □
To show Theorem 4.2, which was used in the proof of Theorem 4.1, we need the following preliminary computations.
Lemma 4.4 Given p a fast-decaying solution of (4.1) verifying (4.4), {p}_{\infty} given by (3.4) and G a {C}^{2} convex function, the following relations hold:
Proof Since \frac{\partial}{\partial v}(\frac{p}{{p}_{\infty}})=\frac{1}{{p}_{\infty}}\frac{\partial p}{\partial v}-\frac{p}{{p}_{\infty}^{2}}\frac{\partial {p}_{\infty}}{\partial v}, we obtain
and
Using these two expressions in
we obtain (4.7).
Equation (4.8) is a consequence of Equation (4.7) and the following expressions for the partial derivatives of G(\frac{p}{{p}_{\infty}}):
and
Finally, Equation (4.9) is obtained using Equation (4.8) and the fact that {p}_{\infty} is solution of (4.2). □
Proof of Theorem 4.2 We integrate from −∞ to {V}_{F}-\alpha in (4.9), let α tend to {0}^{+} and use L’Hôpital’s rule
Since p(v,t)\le {C}^{0}{p}_{\infty}(v), then
The Dirichlet boundary condition (1.5) implies that
where we used that
due to (4.10). Collecting all terms leads to the desired inequality. □
Let us finally remark that, as a usual consequence of the General Relative Entropy (GRE) principle [29], the estimate (4.4) follows by choosing the convex function G(x)={(x-{C}^{0})}_{+}^{4}. This shows that the bound (4.4) can be proved using (4.3) together with a well-posedness theory of classical solutions to (4.1) that decay fast at −∞.
5 Numerical results
We consider two different explicit methods to simulate the NNLIF model (1.4). The first one is based on standard shock-capturing methods for the advection term and standard centered finite differences for the second-order term. More precisely, the first-order term is approximated by finite difference WENO schemes [30].
The second numerical method is based on another finite difference scheme for the Fokker-Planck equation proposed in the literature, the Chang-Cooper method [31]. This method was also used in [20] for a computational neuroscience model with variable voltage and conductance. In order to use this method, the first step is to rewrite the Fokker-Planck equation (1.4) in terms of the Maxwellian M(v)={e}^{-\frac{{(v-bN)}^{2}}{2a(N)}} as follows,
\frac{\partial p}{\partial t}(v,t)=\frac{\partial}{\partial v}[a(N)M(v)\frac{\partial}{\partial v}(\frac{p}{M})]+N(t)\phantom{\rule{0.2em}{0ex}}\delta (v-{V}_{R}).
Then, the Chang-Cooper method performs a kind of θ-finite difference approximation of p/M; see [31] for details. The Chang-Cooper method presents difficulties when the firing rate becomes large and the diffusion coefficient a(N) is constant. More precisely, given a(N)={a}_{0} and b>0, if N is large, the drift of the Maxwellian, in terms of which the Fokker-Planck equation is rewritten, practically vanishes on the interval (-\infty ,{V}_{F}], and this particular Chang-Cooper method is not suitable. Whenever a(N) is not constant, this problem disappears.
Summarizing, we consider two different schemes for our simulations: the first one based on WENO finite differences as described above, and the second one given by the cited Chang-Cooper method. In both cases the time evolution is performed with a TVD Runge-Kutta scheme. In Section 2 of [20] these schemes are explained in detail, and we refer to [30, 31] for a deeper analysis of them.
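For readers who want a minimal self-contained experiment, the sketch below is a much cruder alternative: a first-order finite volume discretization of (1.4) with a(N)={a}_{0}, plain upwind for the drift, centered differences for the diffusion and explicit Euler in time. It is not the WENO or Chang-Cooper scheme used for the figures, and the mesh, time horizon and parameter values ({a}_{0}=1, b=0.5, {V}_{\mathit{min}}=-4) are illustrative choices. The firing flux through {V}_{F} is reinjected at {V}_{R}, so the total mass is conserved exactly by construction.

```python
import numpy as np

# Crude explicit scheme for (1.4) with a(N) = a0 (not the WENO/Chang-Cooper
# schemes of this section); all numerical parameters are illustrative.
V_min, V_F, V_R = -4.0, 2.0, 1.0
a0, b = 1.0, 0.5
M = 200
faces = np.linspace(V_min, V_F, M + 1)       # cell interfaces
v = 0.5 * (faces[:-1] + faces[1:])           # cell centers
dv = faces[1] - faces[0]
jR = np.argmin(np.abs(v - V_R))              # reinjection cell at V_R

# Maxwellian initial datum (5.1) with v0 = 0, sigma0^2 = 0.25
p = np.exp(-v ** 2 / 0.5) / np.sqrt(0.5 * np.pi)
p /= p.sum() * dv                            # normalize to unit mass

t, T = 0.0, 3.5
dt = 0.2 * dv ** 2 / a0                      # parabolic CFL restriction
while t < T:
    N = a0 * p[-1] / dv                      # rough -a0 dp/dv at V_F (p(V_F)=0)
    drift = b * N - faces[1:-1]              # h(v,N) = bN - v at inner faces
    adv = np.where(drift > 0, drift * p[:-1], drift * p[1:])   # upwind
    dif = -a0 * (p[1:] - p[:-1]) / dv                          # centered
    flux = np.concatenate(([0.0], adv + dif, [N]))
    p = p - dt * np.diff(flux) / dv
    p[jR] += dt * N / dv                     # reinjection of the firing flux
    t += dt
print("approximate firing rate at t = 3.5:", N)
```

With b=0.5 the computed firing rate settles to a constant, in agreement with the unique steady state of Theorem 3.1; raising b, or concentrating the initial Maxwellian near {V}_{F}, should reproduce the unbounded growth of N(t) discussed later in this section.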
In our simulations we consider a uniform mesh in v, for v\in [{V}_{\mathit{min}},{V}_{F}]. The value {V}_{\mathit{min}} (less than {V}_{R}) is adjusted in the numerical experiments so that p({V}_{\mathit{min}},t)\approx 0, while {V}_{F} is fixed to 2 and {V}_{R}=1. As initial data we have taken two different types of functions:

Maxwellians:
{p}^{0}(v)=\frac{1}{\sqrt{2\pi}{\sigma}_{0}}{e}^{-\frac{{(v-{v}_{0})}^{2}}{2{\sigma}_{0}^{2}}},(5.1)
where the mean {v}_{0} and the variance {\sigma}_{0}^{2} are chosen according to the analyzed phenomenon.

Stationary Profiles (3.2) given by
p(v)=\frac{N}{a(N)}{e}^{-\frac{{(v-{V}_{0}(N))}^{2}}{2a(N)}}{\int}_{max(v,{V}_{R})}^{{V}_{F}}{e}^{\frac{{(w-{V}_{0}(N))}^{2}}{2a(N)}}\phantom{\rule{0.2em}{0ex}}dw,
with N an approximate value of the stationary firing rate. We typically consider this kind of initial data to analyze local stability of steady states.
Steady states.  As we show in Section 3, for positive b there may be one, two or no steady states depending on its value. With our simulations we can observe all the cases represented in Figures 1 and 2.
In Figure 3 we show the time evolution of the distribution function p(v,t) in the case a\equiv 1 and b=0.5, for which there is only one steady state according to Theorem 3.1, taking as initial data a Maxwellian (5.1) with {v}_{0}=0 and {\sigma}_{0}^{2}=0.25. We observe that the solution numerically reaches the steady state, within the imposed tolerance, after 3.5 time units. The top left subplot in Figure 4 describes the time evolution of the firing rate, which becomes constant after some time. This clearly corresponds to the case of a unique locally asymptotically stable stationary state. Let us remark that in the right subplot of Figure 3 we can observe the Lipschitz behavior of the function at {V}_{R}, as expected from the jump in the flux and thus in the derivative of the solutions and the stationary states; see Section 3.
For b=1.5, we proved in Section 3 that there are two steady states. From our simulations we conjecture that the steady state with the larger firing rate is unstable, while the stationary solution with the lower firing rate is locally asymptotically stable. We illustrate this situation in the bottom left subplot of Figure 4: starting with a firing rate close to the high stationary value, the solution tends to the low firing stationary value.
In Figure 5 we analyze in more detail the behavior of the steady state with the larger firing rate. The left subplot presents the time evolution of the firing rate for different initial distribution functions, given by profiles of the form (3.2) with N an approximate value of the stationary firing rate. Depending on the initial firing rate considered, the behavior differs: the rate either tends to the lower steady state or goes to infinity. The firing rate for the solution with initial {N}_{0}=2.31901 remains almost constant for a period of time. Observe in Figure 5 that the difference between the initial data and the distribution function at time t=1.8 is almost negligible. However, the system evolves slowly, and at t=6 the distribution is very close to the lower steady state; see the bottom left subplot in Figure 4.
In the bottom right subplot of Figure 4 we observe the evolution for a negative value of b, for which we know that there is always a unique steady state; its local asymptotic stability seems clear from the numerical experiments.
The number of steady states is related to well-known neuronal phenomena, for instance asynchronous behavior when there exists only one stationary solution. Let us mention that bi- and multistable networks have been used to describe binocular rivalry in visual perception [24] and the process of decision making [25]. Our simulations show that the simple NNLIF model (1.4) describes networks with either a single steady state or several stationary states, in terms of the connectivity parameter b.
No steady states.  The results in Section 3 indicate that there are no steady states for b=3. In Figure 6 we observe the time evolution of the distribution function p for this choice of the connectivity parameter b. In Figure 4 (top right) we show the time evolution of the firing rate, which seems to blow up in finite time. We observe how the distribution function becomes more and more peaked at {V}_{R} and {V}_{F}, producing an increasing value of the firing rate. Synchronization is a phenomenon of interest to neuroscientists; we do not observe it in these figures, but they do not show desynchronization either, since there are no steady states and the firing rate does not converge. Therefore it might be possible to find periodic solutions describing this phenomenon, allowing for the formation of Dirac deltas in the firing rate. This could be related to the situation observed in Figures 7 and 8, where blowup is analyzed, since the firing rate seems to tend to a Dirac delta.
Blow up.  According to our blowup Theorem 2.2, blowup in finite time of the solution happens for any value of b>0 if the initial data is concentrated enough close to the firing potential {V}_{F}. In Figures 7 and 8, we show the time evolution of the firing rate for initial data with mass concentrated close to {V}_{F}, for values of b for which there are either one or two stationary states. The firing rate increases without bound up to the computing time. The blowup condition in Theorem 2.2 does not require the initial data to be close to a Dirac delta at {V}_{F}, as shown in Figure 7, where the initial condition is far from a Dirac delta at {V}_{F}. Let us finally mention that blowup also appears numerically in the case a(N)={a}_{0}+{a}_{1}N, but here the blowup scenario is characterized by the breakdown of the condition under which (1.6) has a unique solution N, that is,
1+{a}_{1}\frac{\partial p}{\partial v}({V}_{F},t)>0.
Therefore, blowup of the firing rate appears even if the derivative of p at the firing voltage does not diverge. This kind of behavior could be interpreted as a synchronization of part of the network, since the firing rate tends to a Dirac delta.
6 Conclusion
The nonlinear noisy leaky integrate and fire (NNLIF) model is a standard Fokker-Planck equation describing spiking events in neuron networks. It had been observed numerically in various places, but never stated as such, that a blowup phenomenon can occur in finite time. We have described a class of situations where we can prove that this happens. Remarkably, the system can blow up for any connectivity parameter b>0, whatever the (stabilizing) noise.
The nature of this blowup is not mathematically established. Nevertheless, our estimates in Lemma 2.3 indicate that it should come neither from a vanishing behaviour for v\approx -\infty nor from a lack of fast decay, because the second moment in v is controlled uniformly in blowup situations. Additionally, the numerical evidence is that the firing rate N(t) blows up in finite time whenever a singularity in the system occurs. This scenario is compatible with all our theoretical knowledge of the NNLIF model, and in particular with the {L}^{1} estimates on the total network activity (the firing rate N(t)). Further understanding of the nature of the blowup behavior, and its possible continuation, is a challenging mathematical issue. This blowup phenomenon could be related to synchronization of the network; an interpretation in terms of neurophysiology would therefore be interesting.
On the other hand, we have established that the set of steady states can be empty, a single state or two states depending on the network connectivity. All these cases are compatible with a blowup profile and, when steady states exist, numerics can exhibit convergence to one of them, that is, to an asynchronous state of the network. Besides a better understanding of the blowup phenomenon, several questions are left open: is it possible to have three or more steady states? Which of them are stable? Can a bifurcation analysis help to understand and compute the set of steady states?
Appendix
This appendix is devoted to proving a Hardy-Poincaré-like inequality more general than the one stated in Proposition 4.3. Given m,n>0 on (0,\infty ) such that {\int}_{0}^{\infty}m(y)\phantom{\rule{0.2em}{0ex}}dy=1, we want to show that for all functions f on (0,\infty ) such that {\int}_{0}^{\infty}m(y)f(y)\phantom{\rule{0.2em}{0ex}}dy=0, the following inequality holds:
{\int}_{0}^{\infty}m(y)f{(y)}^{2}\phantom{\rule{0.2em}{0ex}}dy\le C{\int}_{0}^{\infty}n(y){f}^{\prime}{(y)}^{2}\phantom{\rule{0.2em}{0ex}}dy,(7.1)
provided all integrals make sense. Proposition 4.3 follows by considering m(x)=n(x)=Kmin(x,{e}^{-{x}^{2}/2}), where K is a constant chosen so that {\int}_{0}^{\infty}m(y)\phantom{\rule{0.2em}{0ex}}dy=1, and parameterizing the interval (-\infty ,{V}_{F}] instead of [0,\infty ). Note that {p}_{\infty} is equivalent to the function m both at 0 and at ∞ in terms of asymptotic behavior. We point out that for this kind of functions, m(x)=n(x)=Kmin(x,{e}^{-{x}^{2}/2}), Muckenhoupt’s criterion for Poincaré’s inequality or its variants in [32, 33] cannot be used, since 1/n(x) is not integrable at zero. However, the inequality (7.1) holds provided that {\int}_{0}^{\infty}m(y)f(y)\phantom{\rule{0.2em}{0ex}}dy=0. The main ingredient in the proof of (7.1) is to write
where we used {\int}_{0}^{\infty}m(y)f(y)\phantom{\rule{0.2em}{0ex}}dy=0 and {\int}_{0}^{\infty}m(y)\phantom{\rule{0.2em}{0ex}}dy=1. In this way we can consider two functions, to be chosen later, \varphi ,\psi >0 to obtain
Therefore, we have
where, using the notation M(x)={\int}_{0}^{x}m(y)\phantom{\rule{0.2em}{0ex}}dy,
and
To conclude the proof we analyse both terms. Using Fubini’s theorem we obtain
with
and
with
To conclude the proof, it remains to choose the functions ϕ and ψ such that {A}_{1},{A}_{2}<\infty. Using L’Hôpital’s rule and some tedious but easy computations and calculus arguments, we obtain that, in the case at hand m(x)=n(x)=Kmin(x,{e}^{-{x}^{2}/2}), we can take \varphi (x)=\psi (x)=1+\sqrt{x}.
References
 1.
Lapicque L: Recherches quantitatives sur l’excitation électrique des nerfs traitée comme une polarisation. J. Physiol. Pathol. Gen. 1907, 9: 620–635.
 2.
Tuckwell H: Introduction to Theoretical Neurobiology. Cambridge Univ. Press, Cambridge; 1988.
 3.
Brunel N, Hakim V: Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput. 1999, 11: 1621–1671. 10.1162/089976699300016179
 4.
Brunel N: Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 2000, 8: 183–208. 10.1023/A:1008925309027
 5.
Renart, A., Brunel, N., Wang, X.J.: Mean-field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. In: Feng, J. (ed.) Computational Neuroscience: A Comprehensive Approach. Mathematical Biology and Medicine Series. Chapman & Hall/CRC (2004)
 6.
Compte A, Brunel N, GoldmanRakic PS, Wang XJ: Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb. Cortex 2000, 10: 910–923. 10.1093/cercor/10.9.910
 7.
Sirovich L, Omurtag A, Lubliner K: Dynamics of neural populations: stability and synchrony. Network 2006, 17: 3–29. 10.1080/09548980500421154
 8.
Omurtag A, Knight BW, Sirovich L: On the simulation of large populations of neurons. J. Comput. Neurosci. 2000, 8: 51–63. 10.1023/A:1008964915724
 9.
Mattia M, Del Giudice P: Population dynamics of interacting spiking neurons. Phys. Rev. E 2002., 66:
 10.
Guillamon T: An introduction to the mathematics of neural activity. Butl. Soc. Catalana Mat. 2004, 19: 25–45.
 11.
Risken H: The FokkerPlanck Equation: Methods of solution and applications. SpringerVerlag, Berlin; 1989.
 12.
Burkholder, D.L., Pardoux, É., Sznitman, A.: École d’Été de Probabilités de SaintFlour XIX—1989. In: Hennequin, P. L. (ed.) Lecture Notes in Mathematics, vol. 1464. SpringerVerlag, Berlin (1991). Papers from the school held in SaintFlour, August 16–September 2, 1989
 13.
Bolley, F., Cañizo, J.A., Carrillo, J.A.: Stochastic mean-field limit: non-Lipschitz forces & swarming. Math. Mod. Meth. Appl. Sci. (2011, in press)
 14.
Newhall K, Kovačič G, Kramer P, Rangan AV, Cai D: Cascadeinduced synchrony in stochasticallydriven neuronal networks. Phys. Rev. E 2010., 82:
 15.
Newhall K, Kovačič G, Kramer P, Zhou D, Rangan A, Cai D: Dynamics of currentbased, poisson driven, integrateandfire neuronal networks. Commun. Math. Sci. 2010, 8: 541–600.
 16.
Gerstner W, Kistler W: Spiking Neuron Models. Cambridge Univ. Press, Cambridge; 2002.
 17.
Brette R, Gerstner W: Adaptive exponential integrateandfire model as an effective description of neural activity. J. Neurophysiol. 2005, 94: 3637–3642. 10.1152/jn.00686.2005
 18.
Touboul J: Bifurcation analysis of a general class of nonlinear integrateandfire neurons. SIAM J. Appl. Math. 2008, 68: 1045–1079. 10.1137/070687268
 19.
Touboul J: Importance of the cutoff value in the quadratic adaptive integrateandfire model. Neural Comput. 2009, 21: 2114–2122. 10.1162/neco.2009.0908853
 20.
Cáceres MJ, Carrillo JA, Tao L: A numerical solver for a nonlinear FokkerPlanck equation representation of neuronal network dynamics. J. Comput. Phys. 2011, 230: 1084–1099. 10.1016/j.jcp.2010.10.027
 21.
Pham J, Pakdaman K, Champagnat J, Vibert JF: Activity in sparsely connected excitatory neural networks: effect of connectivity. Neural Netw. 1998, 11: 415–434. 10.1016/S0893-6080(97)00153-6
 22.
Pakdaman K, Perthame B, Salort D: Dynamics of a structured neuron population. Nonlinearity 2010, 23: 55–75. 10.1088/0951-7715/23/1/003
 23.
Gray CM, Singer W: Stimulusspecific neuronal oscillations in orientation columns of cat visual cortex. Proc. Natl. Acad. Sci. USA 1989, 86: 1698–1702. 10.1073/pnas.86.5.1698
 24.
MorenoBote R, Rinzel J, Rubin N: Noiseinduced alternations in an attractor network model of perceptual bistability. J. Neurophysiol. 2007, 98: 1125–1139. 10.1152/jn.00116.2007
 25.
Albantakis L, Deco G: The encoding of alternatives in multiplechoice decision making. Proc. Natl. Acad. Sci. USA 2009, 106: 10308–10313. 10.1073/pnas.0901621106
 26.
Blanchet A, Dolbeault J, Perthame B: Two-dimensional Keller-Segel model: optimal critical mass and qualitative properties of the solutions. Electron. J. Differ. Equ. 2006, 44: 1–33. (electronic)
 27.
Corrias L, Perthame B, Zaag H: Global solutions of some chemotaxis and angiogenesis systems in high space dimensions. Milan J. Math. 2004, 72: 1–28. 10.1007/s00032-003-0026-x
 28.
González MdM, Gualdani MP: Asymptotics for a symmetric equation in price formation. Appl. Math. Optim. 2009, 59: 233–246. 10.1007/s00245-008-9052-y
 29.
Michel P, Mischler S, Perthame B: General relative entropy inequality: an illustration on growth models. J. Math. Pures Appl. 2005, 84: 1235–1260. 10.1016/j.matpur.2005.04.001
 30.
Shu, C.W.: Essentially nonoscillatory and weighted essentially nonoscillatory schemes for hyperbolic conservation laws. In: Cockburn, B., Johnson, C., Shu, C.W., Tadmor, E., Quarteroni, A. (eds.) Advanced Numerical Approximation of Nonlinear Hyperbolic Equations, vol. 1697, pp. 325–432. Springer (1998)
 31.
Buet C, Cordier S, Dos Santos V: A conservative and entropy scheme for a simplified model of granular media. Transp. Theory Stat. Phys. 2004, 33: 125–155. 10.1081/TT-120037804
 32.
Ledoux M: The Concentration of Measure Phenomenon. Amer. Math. Soc., Providence; 2001.
 33.
Barthe F, Roberto C: Modified logarithmic Sobolev inequalities on \mathbb{R}. Potential Anal. 2008, 29: 167–193. 10.1007/s11118-008-9093-5
Acknowledgements
The first two authors acknowledge support from the project MTM2008-06349-C03-03 DGI-MCI (Spain). MJC acknowledges support from the project P08-FQM-04267 from Junta de Andalucía (Spain). JAC acknowledges support from the project 2009-SGR-345 from AGAUR-Generalitat de Catalunya. BP has been supported by the ANR project MANDy, Mathematical Analysis of Neuronal Dynamics, ANR-09-BLAN-0008-01. The three authors thank the CRM-Barcelona and the Isaac Newton Institute, where this work was started and completed, respectively. The article processing charges have been paid by the projects MTM2008-06349-C03-03 DGI-MCI and P08-FQM-04267 from Junta de Andalucía (Spain).
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Cáceres, M.J., Carrillo, J.A. & Perthame, B. Analysis of nonlinear noisy integrate & fire neuron models: blowup and steady states. J. Math. Neurosc. 1, 7 (2011). https://doi.org/10.1186/2190-8567-1-7
Keywords
 Leaky integrate and fire models
 noise
 blowup
 relaxation to steady state
 neural networks