Analysis of nonlinear noisy integrate & fire neuron models: blow-up and steady states

Nonlinear Noisy Leaky Integrate and Fire (NNLIF) models for neuron networks can be written as Fokker-Planck-Kolmogorov equations on the probability density of neurons, the main parameters of the model being the connectivity of the network and the noise. We analyse several aspects of the NNLIF model: the number of steady states, a priori estimates, blow-up issues and convergence toward equilibrium in the linear case. In particular, for excitatory networks, blow-up always occurs for initial data concentrated close to the firing potential. These results show how critical the balance between noise and excitatory/inhibitory interactions is with respect to the connectivity parameter. AMS Subject Classification: 35K60, 82C31, 92B20.


Introduction
The classical description of the dynamics of a large set of neurons is based on deterministic/stochastic differential systems for the excitatory-inhibitory neuron network [1,2]. One of the most classical models is the so-called noisy leaky integrate and fire (NLIF) model. Here, the dynamical behavior of the ensemble of neurons is encoded in a stochastic differential equation for the evolution in time of the membrane potential v(t) of a typical neuron representative of the network. The neurons relax towards their resting potential V_L in the absence of any interaction. All the interactions of the neuron with the network are modelled by an incoming synaptic current I(t). More precisely, the evolution of the membrane potential follows, see [3][4][5][6][7][8], where C_m is the capacitance of the membrane and g_L is the leak conductance, normally taken to be constants with τ_m = C_m/g_L ≈ 2 ms being the typical relaxation time of the potential towards the leak reversal (resting) potential V_L ≈ −70 mV. Here, the synaptic current takes the form of a stochastic process, where δ is the Dirac delta at 0. Here, J_E and J_I are the strengths of the synapses, C_E and C_I are the total numbers of presynaptic neurons, and t^i_{Ej} and t^i_{Ij} are the times of the j-th spike coming from the i-th presynaptic neuron, for excitatory and inhibitory neurons respectively. The stochastic character is embedded in the distribution of the spike times of the neurons. Actually, each neuron is assumed to spike according to a stationary Poisson process with constant probability ν of emitting a spike per unit time. Moreover, all these processes are assumed to be independent between neurons. With these assumptions, the average value of the current and its variance are given by μ_C = bν with b = C_E J_E − C_I J_I and σ_C² = (C_E J_E² + C_I J_I²)ν. We will say that the network is average-excitatory (average-inhibitory resp.) if b > 0 (b < 0 resp.).
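The display equations referenced as (1.1) and (1.2) were lost in extraction; consistent with the notation of this paragraph, they presumably read:

```latex
C_m \frac{dV}{dt}(t) = -g_L\,\bigl(V(t) - V_L\bigr) + I(t), \tag{1.1}
```

```latex
I(t) = J_E \sum_{i=1}^{C_E}\sum_{j} \delta\bigl(t - t^{i}_{Ej}\bigr)
     \;-\; J_I \sum_{i=1}^{C_I}\sum_{j} \delta\bigl(t - t^{i}_{Ij}\bigr). \tag{1.2}
```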
Since the discrete Poisson processes are still very difficult to analyze, many authors in the literature [3][4][5][7][8][9] have adopted the diffusion approximation, where the synaptic current is approximated by a continuous-in-time stochastic process of Ornstein-Uhlenbeck type with the same mean and variance as the Poissonian spike-train process. More precisely, we approximate I(t) in (1.2) by a drift-diffusion process, where B_t is the standard Brownian motion, that is, B_t are independent Gaussian processes of zero mean and unit standard deviation. We refer to the work [5] for a nice review and discussion of the diffusion approximation, which becomes exact in the infinitely large network limit if the synaptic efficacies J_E and J_I are scaled appropriately with the network sizes C_E and C_I.
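In formulas, the elided display for the diffusion approximation presumably matches mean and variance as follows:

```latex
I(t)\,dt \;\approx\; \mu_C\,dt + \sigma_C\,dB_t,
\qquad \mu_C = b\,\nu, \qquad \sigma_C^2 = \bigl(C_E J_E^2 + C_I J_I^2\bigr)\,\nu .
```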
Finally, another important ingredient in the modelling comes from the fact that neurons only fire when their voltage reaches a certain threshold value, called the threshold or firing voltage V_F ≈ −50 mV. Once this voltage is attained, they discharge themselves, sending a spike signal over the network. We assume that they instantaneously relax toward a reset value of the voltage V_R ≈ −60 mV. This is fundamental for the interactions with the network that may help increase their membrane potential up to the maximum level (excitatory synapses), or decrease it for inhibitory synapses. Choosing our voltage and time units in such a way that C_m = g_L = 1, we can summarize our approximation to the stochastic differential equation model (1.1) as the evolution given by (1.3) for V ≤ V_F, with the jump process V(t_0^+) = V_R whenever at t_0 the voltage achieves the threshold value V(t_0^−) = V_F, with V_L < V_R < V_F. Finally, we have to specify the probability of firing per unit time of the Poissonian spike train ν. This is the so-called firing rate, and it should be self-consistently computed from the fully coupled network together with some external stimuli. Therefore, the firing rate is computed as ν = ν_ext + N(t), see [5] for instance, where N(t) is the mean firing rate of the network. The value of N(t) is then computed as the flux of neurons across the threshold or firing voltage V_F. We finally refer to [10] for a nice brief introduction to this subject.
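The approximated evolution referenced as (1.3) presumably reads (with C_m = g_L = 1 and ν = ν_ext + N(t)):

```latex
dV(t) = \bigl(-V(t) + V_L + \mu_C\bigr)\,dt + \sigma_C\,dB_t, \qquad V(t) \le V_F, \tag{1.3}
```

supplemented with the jump process V(t_0^+) = V_R whenever V(t_0^−) = V_F.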
Coming back to the diffusion approximation in (1.3), we can write a partial differential equation for the evolution of the probability density p(v, t) ≥ 0 of finding neurons at a voltage v ∈ (−∞, V_F] at a time t ≥ 0. A heuristic argument using Itô's rule [3-5, 7-9, 11] gives the forward Kolmogorov or Fokker-Planck equation with sources (1.4), with h(v, N(t)) = −v + V_L + μ_C and a(N) = σ_C²/2. The source term in the right-hand side is due to all neurons that at time t ≥ 0 fired, sent their signal over the network, and had their voltage immediately reset to the value V_R. Moreover, no neuron should be at the firing voltage, due to the instantaneous discharge of the neurons to the reset value V_R; we therefore complement (1.4) with the Dirichlet boundary and initial conditions (1.5). Equation (1.4) should describe the evolution of a probability density, so its total mass should remain equal to one for all t ≥ 0. Formally, this conservation should come from integrating (1.4) and using the boundary conditions (1.5). It is straightforward to check that this conservation for smooth solutions is equivalent to characterizing the mean firing rate of the network N(t) as the flux of neurons at the firing voltage. More precisely, the mean firing rate N(t) is implicitly given by (1.6). Here, the right-hand side is nonnegative since p ≥ 0 over the interval (−∞, V_F] and thus ∂p/∂v(V_F, t) ≤ 0. In particular, this imposes a limitation on the growth of the function N → a(N) such that (1.6) has a unique solution N. Let us mention that a rigorous passage from the stochastic differential equation with jump processes (1.3) to the nonlinear equation (1.4)-(1.6) is a very interesting issue, but outside the scope of this paper; see related results, in which the nonlinearities are nonlocal functionals, in [12,13].
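For reference, the system (1.4)-(1.6) described in this paragraph presumably takes the form:

```latex
\frac{\partial p}{\partial t}(v,t)
+ \frac{\partial}{\partial v}\Bigl[h\bigl(v,N(t)\bigr)\,p(v,t)\Bigr]
- a\bigl(N(t)\bigr)\,\frac{\partial^2 p}{\partial v^2}(v,t)
= \delta(v - V_R)\,N(t), \qquad v \le V_F, \tag{1.4}
```

```latex
p(V_F,t) = 0, \qquad p(-\infty,t) = 0, \qquad p(v,0) = p_0(v) \ge 0, \tag{1.5}
```

```latex
N(t) = -\,a\bigl(N(t)\bigr)\,\frac{\partial p}{\partial v}(V_F,t) \;\ge\; 0. \tag{1.6}
```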
The above Fokker-Planck equation has been widely used in neuroscience. Often, authors prefer to write it in an equivalent but less singular form. To avoid the Dirac delta in the right-hand side, one can also set the same equation on (−∞, V_R) ∪ (V_R, V_F] and introduce a jump condition at V_R. This is completely transparent in our analysis, which relies on a weak formulation that applies to both settings. Finally, let us choose a new voltage variable by translating by the factor V_L + bν_ext while, for the sake of clarity, keeping the notation for the rest of the potential values involved, V_R < V_F. In these new variables, the drift and diffusion coefficients are of the form h(v, N) = bN − v and a(N) = a_0 + a_1 N, where b > 0 for average-excitatory networks and b < 0 for average-inhibitory networks, a_0 > 0 and a_1 ≥ 0. Some results in this work can be obtained for more general drift and diffusion coefficients; the precise assumptions will be specified in each result. Periodic solutions have been numerically reported and analysed in the case of the Fokker-Planck equation for uncoupled neurons in [14,15]. These works also study the stationary solutions for fully coupled networks, obtaining and solving numerically the implicit relation that the firing rate N has to satisfy; see Section 3 for more details. There are several other routes towards the modeling of spiking neurons that are related to ours and that have been used in neuroscience, see [16]. Among them are the deterministic I&F models with adaptation, which are known to fit experimental data well [17]. General models of this type were unified and studied in terms of neuronal behaviors in [18]. In this case, it is known that in the quadratic (or merely superlinear) case the model can lead to blow-up [19] in the absence of a fixed threshold. We point out that the nature of this blow-up is completely different from the one discussed in this paper.
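The jump condition at V_R mentioned above is presumably the continuity of p together with a derivative jump balancing the reinjected flux:

```latex
p(V_R^-,t) = p(V_R^+,t), \qquad
\frac{\partial p}{\partial v}(V_R^-,t) - \frac{\partial p}{\partial v}(V_R^+,t)
= \frac{N(t)}{a\bigl(N(t)\bigr)} ,
```

which follows by integrating (1.4) across v = V_R: the flux h p − a ∂_v p must jump by N(t) at the reset potential.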
One can also introduce gating variables in neuron networks and this leads to a kinetic equation, see [20] and the references therein. Another method consists in coding the information in the distribution of time elapsed between discharges [21,22], this leads to nonlinear models that exhibit naturally periodic activity and blow-up cannot happen. Nonlinear IF models are able to produce different patterns of activity and excitability types while linear models do not.
In this work we analyse certain properties of the solutions to (1.4)-(1.5) with the nonlinear term due to the coupling of the mean firing rate given by (1.6). The next section is devoted to finite time blow-up of weak solutions to (1.4)-(1.6). In short, we show that, whatever the value of b > 0 is, we can find initial data concentrated enough at the firing potential such that the corresponding weak solutions do not exist for all times. We remark that, in the same sense as Brunel [4], we use the term asynchronous for network states in which the firing rate tends asymptotically to a constant in time, while we call synchronous those for which this does not happen. Therefore, a possible interpretation of the blow-up is that synchronization occurs in the model, since the firing rate diverges at a fixed time, possibly creating a strong partial synchronization, that is, a part of the network firing at the same time. One could also consider the blow-up as an artifact of these solutions, since neurons firing arbitrarily fast is not biologically plausible. As long as the solution exists in the sense specified in Section 2, we can obtain a priori estimates on the L¹_loc-norm of the firing rate. Section 3 deals with the stationary states of (1.4)-(1.6). We show that there is a unique stationary state for b ≤ 0 and a constant, while for b > 0 different cases may happen: one, two or no stationary states depending on how large b is. In Section 4, we discuss the linear problem, b = 0 with a constant, for which the general relative entropy principle applies, implying exponential convergence towards equilibrium. Finally, by means of numerical simulations, in Section 5 we illustrate the results of the previous sections about blow-up and steady states.
Moreover, this numerical analysis allows us to conjecture about the nonlinear stability properties of the stationary states: in the case of a single steady state it is asymptotically stable, and in the case of two different stationary solutions the results show that the one with the lower firing rate is locally asymptotically stable, while the one with the higher stationary firing value is either unstable or has a very small region of attraction. Our results and simulations describe situations which can be identified with neuronal phenomena such as synchronization/asynchronization of a network and bistable networks. Bi- and multi-stable networks have been used, for instance, in models of visual perception and decision making [23][24][25]. Our analysis in Sections 2, 3 and 5 implies that this simple model encodes complicated dynamics, in the sense that, only in terms of the connectivity parameter b, very different situations can be described: blow-up, no steady state, a single steady state, and several stationary states.
Here, the notation L^p(Ω), 1 ≤ p < ∞, refers to the space of functions f such that |f|^p is integrable in Ω, while L^∞(Ω) corresponds to the space of bounded functions in Ω. The set of infinitely differentiable functions in Ω is denoted by C^∞(Ω); these are used as test functions in the notion of weak solution. The non-negativity assumptions are reasonable. Indeed, for a given N(t), if we were to replace N by N_+ in the right-hand side of (1.4), we would obtain a linear equation whose solution is non-negative, and (1.6) gives N ≥ 0; that this fixed point may work is a more involved issue, since we prove that there are not always global solutions, which requires functional spaces and motivates the a priori estimates that we derive at the end of this section.
Let us remark that the growth condition on the test function together with the assumption (1.7) imply that the term involving h(v, N) makes sense. By choosing test functions of the form ψ(t)φ(v), this formulation is equivalent to saying that (2.2) holds in the distributional sense for all admissible φ. It is straightforward to check that weak solutions conserve the mass of the initial data by choosing φ = 1 in (2.2), and thus (2.3) holds. The first result we show is that global-in-time weak solutions of (1.4)-(1.6) do not exist for all initial data in the case of an average-excitatory network. This result holds with less stringent hypotheses on the coefficients than in (1.7), with an analogous notion of weak solution as in Definition 2.1.

Theorem 2.2 Assume that h(v, N) + v ≥ bN and a(N) ≥ a_m > 0
for all −∞ < v ≤ V_F and all N ≥ 0, and let us consider the average-excitatory case b > 0. If the initial datum is concentrated enough around V_F, in the sense that ∫ e^{μv} p_0(v) dv is close enough to e^{μV_F} for some μ > 0 large enough, then there are no global-in-time weak solutions to (1.4)-(1.6).
Proof We choose a multiplier φ(v) = e μv with μ > 0 and define the number by hypotheses. For a weak solution according to (2.1), we find from (2.2) that where (2.4) and the fact that v ∈ (−∞, V F ) was used. Let us now choose μ large enough such that μa m − V F > 0 according to our hypotheses and denote If initially M μ (0) ≥ λ and using Gronwall's Lemma since N(t) ≥ 0, we have that M μ (t) ≥ λ, for all t ≥ 0, and back to (2.5) we find On the other hand, since p(v, t) is a probability density, see (2.3), and μ > 0 then leading to a contradiction. It remains to show that the set of initial data satisfying the size condition M μ (0) ≥ λ is not empty. To verify this, we can approximate as much as we want by smooth initial probability densities an initial Dirac mass at V F which gives the condition This can be equivalently written as Choosing , these conditions are obviously fulfilled.
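The chain of estimates in the proof can be summarized as follows; the explicit threshold λ below is a reconstruction (any λ with the stated properties works):

```latex
% With M_\mu(t) := \int_{-\infty}^{V_F} e^{\mu v}\, p(v,t)\, dv, the weak
% formulation (2.2), together with h(v,N) + v \ge bN, a(N) \ge a_m and
% v \le V_F, gives
\frac{dM_\mu}{dt}
\;\ge\; \mu\,(\mu a_m - V_F)\,M_\mu(t)
\;+\; N(t)\,\bigl(\mu b\, M_\mu(t) - e^{\mu V_F}\bigr).
% If M_\mu(0) \ge \lambda := e^{\mu V_F}/(\mu b), the last term stays
% nonnegative, so for \mu > V_F/a_m
% M_\mu(t) \ge M_\mu(0)\, e^{\mu(\mu a_m - V_F)\,t} \to \infty,
% contradicting M_\mu(t) \le e^{\mu V_F}, which holds for any probability
% density supported in (-\infty, V_F].
```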
As usual for this type of blow-up result, similar in spirit to the classical Keller-Segel model for chemotaxis [26,27], the proof only ensures that solutions for those initial data do not exist beyond a finite maximal time of existence. It does not characterize the nature of the first singularity which occurs. It implies that either the decay at infinity fails, although this is not probable, meaning that the time evolution of the probability densities ceases to be tight, or the function N(t) becomes a singular measure in finite time instead of being an L¹_loc(R_+) function. Actually, in the numerical computations shown in Section 5, we observe a blow-up in the value of the mean firing rate in finite time. Continuing the solution would require a modification of the notion of solution introduced in Definition 2.1. This would be useful since the firing rate does not become constant in time, and consequently a possible interpretation is that synchronization occurs, a phenomenon that is of interest to neuroscientists; see also the comments in the introduction and the conclusions.
Although in this paper the nature of the blow-up is not mathematically identified, we devote the rest of the section to proving some a priori estimates which shed some light in this direction. To be more precise, our estimates indicate that this blow-up should not come from a loss of mass at v ≈ −∞, or from a lack of fast decay, because the second moment in v is controlled uniformly in blow-up situations. We obtain these a priori bounds with the help of appropriate choices of the test function φ in (2.1). Some of these choices are not allowed due to the growth at −∞ of the test functions. We will say that a weak solution is fast-decaying at −∞ if it is a weak solution in the sense of Definition 2.1 and the weak formulation in (2.2) holds for all test functions growing at most algebraically in v.

Moreover, if in addition a is constant then
Proof Using (1.7) together with our decay assumption at −∞, we may use the corresponding test function in the weak formulation (2.2), which can also be written as (2.7). To prove (i), we notice that, with our condition on b, the term in N(t) is nonpositive and the first result follows from Gronwall's inequality. The second result just follows after integration in time.
To prove (ii), we first use again (2.7) and, because the term in N(t) is nonnegative, we find the first result. To obtain the second estimate (2.6), given ε > 0, we choose a test function φ_ε such that φ_ε ∈ L^∞(−∞, V_F) with size of order ε^{−1}. In other words, we have chosen a C² uniform approximation of the truncation. Since φ_ε'(v) is positive and non-decreasing, and using (2.8), we get that for all 0 < ε̃ < 1 there exists ε_0 > 0 small enough such that the corresponding bound holds for all 0 < ε < ε_0; we can then find ε̃ small enough such that the resulting estimate is as desired. Taking into account (2.8), (2.9), and the fact that p(v, t) is a probability density, we obtain the claimed bounds. Choosing now ε = ε_0/2, for instance, integration in time of the last inequality leads to the desired inequality (2.6).

Corollary 2.4 Under the assumptions of Lemma 2.3 and assuming
, then the following a priori estimates hold: Proof We use again the weak formulation (2.1) with φ(v) = v²/2 as a test function. To prove (i), we just use the second statement of Lemma 2.3(ii), valid for a constant, which tells us that the time integral of the right-hand side grows at most linearly in time, and so does the second moment. To prove (ii), we just use that the bracket is nonpositive, and the result follows.

Steady states

Starting from the stationary version of the weak formulation (2.2), we are then allowed, by a direct integration by parts in the second derivative term of p, to deduce that p satisfies, in the sense of distributions, a first-order equation involving the Heaviside function H. The definition of N in (1.6) and the Dirichlet boundary condition (1.5) imply C = 0 by evaluating this expression at v = V_F. Using again the boundary condition (1.5), p(V_F) = 0, we may finally integrate again and find an explicit profile, which can be rewritten, using the expression of the Heaviside function, as (3.2). Moreover, the firing rate in the stationary state N is determined by the normalization condition (2.3), or equivalently (3.4), with N_∞ the normalizing constant giving unit mass over the interval (−∞, V_F]. The rest of this section is devoted to finding conditions on the parameters of the model clarifying the number of solutions to (3.3). With this aim, it is convenient to perform a change of variables and use the new notations (3.5), where the N dependency has been omitted to simplify the notation. Then, as in [3], we can rewrite the previous integral (and thus the condition for a steady state) as (3.6), with a Taylor remainder evaluated at some θ ∈ (0, s). It is easy to see that the remainder is controlled for all θ ∈ (0, s). By distinguishing cases based on the signs of V_F and V_R, this Taylor expansion implies an integrable bound for all s ≥ 0. Then, a direct application of the dominated convergence theorem and continuity theorems for integrals with respect to parameters shows that the function I(N) is continuous in N on [0, ∞).
Moreover, the function I(N) is C^∞ in N, since all its derivatives can be computed by differentiating under the integral sign, by direct application of dominated convergence theorems and differentiation theorems for integrals with respect to parameters. In particular, this holds for all integers k ≥ 1. As a consequence, we deduce: It is also useful to keep in mind that, thanks to the form of I(N) in (3.6), Using (3.11), we deduce

A direct application of the dominated convergence theorem shows that the right-hand side converges to 0 as N → ∞, since sN exp(−sbN/√a_0) is a bounded function uniformly in N and s. Thus, the computation of the limit is reduced to showing (3.14). With this aim, we rewrite the integral in terms of the complementary error function, defined as erfc(x) = (2/√π) ∫_x^∞ e^{−t²} dt. Finally, we can obtain the limit (3.14) using L'Hôpital's rule. With this analysis of the function I(N) we can now prove each of the statements of Theorem 3.1: Proof of (i) Let us start with the case b < 0. Here, the function I(N) is increasing, starting at I(0) < ∞ due to (3.12) and such that

Proof of (ii)
Case of (3.8) The claim that there are solutions to NI (N) = 1 for 0 < b < V F − V R is a direct consequence of the continuity of I (N), (3.12) and (3.13).
Case of (3.9) We are going to prove that I(N) ≥ 1/N on a suitable interval of N, which yields the existence of a steady state, since I(0) < ∞, due to (3.12), implies that I(N) < 1/N for small N. Condition (3.9) only asserts that this interval of N is not empty. To do so, we show an estimate which obviously implies the desired inequality I(N) ≥ 1/N on the interval of N under consideration. The condition N < V_R/b is equivalent to w_R > 0; therefore, using (3.5) and the expression for I(N) in (3.6), we deduce the claim.

Proof of (iii) Under the condition (…)

Proof of (iv) Under assumption (3.10) for b, it is easy to check that the inequalities (3.15) and (3.16) hold. We consider N such that N > V_F/b, which means that w_F < 0. We use the formula (3.7) for I(N) and write the inequalities in which the mean-value theorem and w_F < 0 are used. Then, using Inequality (3.16), we conclude that I(N) < 1/N for such N; and, due to the fact that I is decreasing together with Inequality (3.15), we have I(N) < I(0) < 1/N for N ≤ 2V_F/b. In this way, we have shown that I(N) < 1/N for all N, and consequently there is no steady state.

Remark 3.4 The condition (3.9) is not optimal and it can be improved by using one more term in the series expansion of the exponentials inside the integral of the expression of I(N) in (3.7). More precisely, if w_F > w_R > 0, we use this expansion to get an improved version of condition (3.9). Of course, this last inequality is not optimal either, for the same reason as before.
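The case distinctions above can be illustrated numerically. The sketch below is not the authors' code: it evaluates I(N) through the reduced one-dimensional form I(N) = ∫_{w_R}^{w_F} e^{y²/2} √(π/2) erfc(−y/√2) dy, with w_F = (V_F − bN)/√a and w_R = (V_R − bN)/√a (equivalent to the double-integral normalization of the stationary profile), and counts the sign changes of N I(N) − 1 on a grid. The parameter values V_F = 2, V_R = 1, a ≡ 1 and b ∈ {0.5, 1.5, 3} are those used in Section 5, where one, two and no steady states respectively are reported.

```python
import math

# Parameters as in Section 5: one steady state for b = 0.5,
# two for b = 1.5, none for b = 3 (constant diffusion a = 1).
V_F, V_R, A0 = 2.0, 1.0, 1.0

def I(N, b, a=A0, n=200):
    """Composite-Simpson approximation of
    I(N) = int_{w_R}^{w_F} e^{y^2/2} * sqrt(pi/2) * erfc(-y/sqrt(2)) dy."""
    s = math.sqrt(a)
    wF, wR = (V_F - b * N) / s, (V_R - b * N) / s
    h = (wF - wR) / n
    def f(y):
        # erfc avoids the cancellation in 1 + erf(y/sqrt(2)) for y << 0
        return math.exp(y * y / 2.0) * math.sqrt(math.pi / 2.0) * math.erfc(-y / math.sqrt(2.0))
    total = f(wR) + f(wF)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(wR + k * h)
    return total * h / 3.0

def count_steady_states(b, N_max=6.0, samples=400):
    """Count sign changes of g(N) = N*I(N) - 1 on (0, N_max]."""
    crossings = 0
    prev = -1.0           # g(0+) = -1 since I(0) is finite
    for i in range(1, samples + 1):
        N = N_max * i / samples
        g = N * I(N, b) - 1.0
        if (prev < 0.0) != (g < 0.0):
            crossings += 1
        prev = g
    return crossings

for b in (0.5, 1.5, 3.0):
    print("b =", b, "-> steady states:", count_steady_states(b))
```

For these parameters the counts reproduce the cases observed in Figures 1 and 2; the grid and the cutoff N_max = 6 are ad hoc choices and would need refinement near borderline values of b.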

Case of a(N) = a 0 + a 1 N
We now treat the case a(N) = a_0 + a_1 N, with a_0, a_1 > 0 and b > 0. Proceeding as above, we can obtain from (3.7) the expression of its derivative; therefore I(N) is decreasing. Moreover, we can check that the computation of the limit (3.13) still holds. Actually, we obtain the same conclusion, where we have used the change of variables α = N√(2a) and L'Hôpital's rule. In the case b < 0, we observe again, by the same proof as before, that I(N) → ∞ as N → ∞, and thus, by continuity, there is at least one solution to N I(N) = 1. Nevertheless, it seems difficult to determine precisely the number of solutions, due to the competing monotone functions in (3.17).
The generalization of part of Theorem 3.1, for h(v, N) = bN − v and a(N) = a_0 + a_1 N with a_0, a_1 > 0, is contained in the following result. We will skip its proof, since it essentially follows the same steps as before with the new ingredients just mentioned. At this point, a natural question is what happens with the stability of these steady states. In the next section we study it in the linear case, in which the model presents only one steady state. An extension of the same techniques, entropy methods, to the nonlinear case is not straightforward at all. However, the results obtained in the linear case let us expect that for a small connectivity parameter b the unique steady state could be stable. On the other hand, the numerical results presented in Section 5 give some numerical evidence of the stability/instability in the different situations described by this model: only one steady state or two steady states; see that section for details.

Linear equation and relaxation
We study specifically the case of a linear equation, that is, b = 0 and a(N) = a_0, namely (4.1). For later purposes, we recall that the steady state p_∞(v) given in (3.4) satisfies, for the case at hand, the equation (4.2). We will assume in this section that the initial data satisfies, for some C_0 > 0, the bound (4.3). Then, we take for granted that solutions of the linear problem exist, with the regularity needed in each result below, and such that (4.4) holds for all t ≥ 0. The estimate (4.4) follows a posteriori from the relative entropy inequality that we state below; see more comments at the end of this section. This indicates that the hypothesis (4.3) on the initial data is easily propagated in time, giving (4.4), once a well-posedness theory of classical fast-decaying solutions is at hand. Such solutions to (4.1)-(4.2) might be obtained by the method developed in [28] and will be analysed elsewhere.
We prove that the solutions to (4.1) converge in large times to the unique steady state p ∞ (v).

Theorem 4.1 (Exponential decay) Fast-decaying solutions to the equation (4.1) verifying (4.4) satisfy
This result shows that no synchronization of the neuronal activity can be expected when the network is not connected, since solutions tend to produce a constant firing rate, a very intuitive conclusion. Because the rate of decay is exponential, we also expect that a small connectivity cannot create synchronization either, again an intuitive conclusion, proved rigorously for the elapsed-time structured model in [22]. The proof also shows that two relaxation processes are involved in this effect: dissipation by the diffusion term and dissipation by the firing term. These relaxation effects are stated in the following theorem, which also gives the natural bounds for the solutions to equation (4.1) (choosing G(u) = u² gives the natural energy space of the system, a weighted L² space).

Theorem 4.2 (Relative entropy inequality) Fast-decaying solutions to equation (4.1) verifying (4.4) satisfy, for any smooth convex function
Proof of Theorem 4.1 The proof is standard in Fokker-Planck theory and follows by applying the relative entropy inequality (4.5) with G(x) = (x − 1)². We then obtain (4.6). To proceed further, we need an additional technical ingredient, which we state next and whose proof is postponed to the Appendix.
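With G(x) = (x − 1)², and discarding the nonpositive contributions of the firing terms, the elided inequality (4.6) presumably reduces to the following (a reconstruction consistent with the Fokker-Planck structure of (4.1)):

```latex
\frac{d}{dt}\int_{-\infty}^{V_F} p_\infty(v)
\left(\frac{p(v,t)}{p_\infty(v)} - 1\right)^{\!2} dv
\;\le\; -\,2 a_0 \int_{-\infty}^{V_F} p_\infty(v)
\left|\frac{\partial}{\partial v}\!\left(\frac{p(v,t)}{p_\infty(v)}\right)\right|^{2} dv .
```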

Proposition 4.3 There exists ν > 0 such that
The Poincaré-like inequality in Proposition 4.3, applied to q = p − p_∞, bounds the right-hand side of (4.6). Finally, Gronwall's lemma directly gives the result.
To show Theorem 4.2, which was used in the proof of Theorem 4.1, we need the following preliminary computations.
where we used (4.10). Collecting all terms leads to the desired inequality.
Let us finally remark that, as a usual consequence of the General Relative Entropy principle (GRE) [29], the estimate (4.4) follows by choosing the convex function G(x) = (x − C_0)_+. This shows that the bound (4.4) can be proved using (4.3), together with a well-posedness theory of classical solutions to (4.1) that are fast-decaying at −∞.

Numerical results
We consider two different explicit methods to simulate the NNLIF model (1.4). The first one is based on standard shock-capturing methods for the advection term and standard centered finite differences for the second-order term. More precisely, the first-order term is approximated by finite-difference WENO schemes [30].
The second numerical method is based on another finite difference scheme for the Fokker-Planck equation proposed in the literature, the Chang-Cooper method [31]. This method was also used in [20] for a computational neuroscience model with variable voltage and conductance. In order to use this method, the first step is to rewrite the Fokker-Planck equation (1.4) in terms of the Maxwellian. Then, the Chang-Cooper method performs a kind of θ-finite difference approximation of p/M; see [31] for details. The Chang-Cooper method presents difficulties when the firing rate becomes large and the diffusion coefficient a(N) is constant. More precisely, given a(N) = a_0 and b > 0, if N is large, the drift of the Maxwellian, in terms of which the Fokker-Planck equation is rewritten, practically vanishes on the interval (−∞, V_F], and this particular Chang-Cooper method is not suitable. Whenever a(N) is not constant, this problem disappears. Summarizing, we consider two different schemes for our simulations: the first one based on WENO finite differences as described above, and the second one by means of the cited Chang-Cooper method. In both cases, the evolution in time is performed with a TVD Runge-Kutta scheme. In Section 2 of [20] these schemes are explained in detail, and we refer to [30,31] for a deeper analysis of them.
In our simulations we consider a uniform mesh in v, for v ∈ [V_min, V_F]. The value V_min (less than V_R) is adjusted in the numerical experiments so that p(V_min, t) ≈ 0, while V_F is fixed to 2 and V_R = 1. As initial data we have taken two different types of functions: • Maxwellians (5.1), where the mean v_0 and the variance σ_0² are chosen according to the analyzed phenomenon.
• Stationary Profiles (3.2) given by with N an approximate value of the stationary firing rate. We typically consider this kind of initial data to analyze local stability of steady states.
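As a sanity check, the linear relaxation of Section 4 (b = 0, a ≡ 1) can be reproduced with a much cruder scheme than those above: first-order upwind drift, centered diffusion and explicit Euler in time, with the discrete firing rate re-injected at V_R. This is a hedged sketch, not the WENO/Chang-Cooper code used for the figures; V_min = −4, the grid size and the time step are ad hoc choices, while V_F = 2, V_R = 1 and the Maxwellian initial datum (v_0 = 0, σ_0² = 0.25) match the text.

```python
import math

# Linear case b = 0, a = 1:
#   p_t + (h p)_v - a p_vv = N(t) delta_{V_R},  h(v) = -v,  p(V_F, t) = 0,
#   N(t) = -a p_v(V_F, t)  (flux of neurons through the firing potential).
V_F, V_R, V_min, a = 2.0, 1.0, -4.0, 1.0
M = 121                              # grid points, v[0] = V_min, v[M-1] = V_F
dv = (V_F - V_min) / (M - 1)
v = [V_min + i * dv for i in range(M)]
i_R = round((V_R - V_min) / dv)      # index of the reset potential V_R

# Maxwellian initial datum (v0 = 0, sigma0^2 = 0.25), normalized on the grid.
p = [math.exp(-vi * vi / (2 * 0.25)) for vi in v]
p[-1] = 0.0                          # Dirichlet condition at V_F
mass0 = sum(p) * dv
p = [pi / mass0 for pi in p]

dt, T = 5e-4, 6.0
N_hist = []
for _ in range(int(T / dt)):
    # Phi[j]: total flux through the face between cells j-1 and j
    # (upwind drift h = -v plus centered diffusion; Phi[0] = 0, inward drift).
    Phi = [0.0] * M
    for j in range(1, M):
        h = -(v[j - 1] + v[j]) / 2.0
        drift = h * (p[j - 1] if h > 0 else p[j])
        Phi[j] = drift - a * (p[j] - p[j - 1]) / dv
    N = Phi[M - 1]                   # discrete firing rate: efflux at V_F
    for i in range(M - 1):           # p[M-1] stays 0 (Dirichlet at V_F)
        p[i] -= dt / dv * (Phi[i + 1] - Phi[i])
    p[i_R] += dt * N / dv            # re-inject fired neurons at V_R
    N_hist.append(N)

print("mass =", sum(p) * dv, " N_final =", N_hist[-1])
```

The scheme conserves the total mass exactly (the boundary efflux equals the re-injected amount) and the firing rate settles to a constant, illustrating the asynchronous relaxation proved in Theorem 4.1.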
Steady states. - As we showed in Section 3, for b positive there are ranges of values for which there is either one, two, or no steady state. With our simulations we can observe all these cases, represented in Figures 1 and 2.
In Figure 3 we show the time evolution of the distribution function p(v, t) in the case a ≡ 1 and b = 0.5, for which there is only one steady state according to Theorem 3.1, considering as initial data a Maxwellian with v_0 = 0 and σ_0² = 0.25 in (5.1). We observe that after 3.5 time units the solution numerically achieves the steady state within the imposed tolerance. The top left subplot in Figure 4 describes the time evolution of the firing rate, which becomes constant after some time. This clearly corresponds to the case of a unique locally asymptotically stable stationary state. Let us remark that in the right subplot of Figure 3 we can observe the Lipschitz behavior of the function at V_R, as it should be from the jump in the flux, and thus in the derivative, of the solutions and of the stationary states; see Section 3.
For b = 1.5, we proved in Section 3 that there are two steady states. With our simulations we can conjecture that the steady state with the larger firing rate is unstable, while the stationary solution with the lower firing rate is locally asymptotically stable. We illustrate this situation in the bottom left subplot of Figure 4: starting with a firing rate close to the high stationary firing value, the solution tends to the low firing stationary value.
In Figure 5 we analyze in more detail the behavior of the steady state with the larger firing rate. The left subplot presents the time evolution of the firing rate for different distribution functions, starting with profiles given by the expression (3.2) with N an approximate value of the stationary firing rate. We show that, depending on the initial firing rate considered, the behavior is different: the solution either tends to the lower steady state or goes to infinity. The firing rate for the solution with initial value N_0 = 2.31901 remains almost constant for a period of time. Observe in Figure 5 that the difference between the initial data and the distribution function at time t = 1.8 is almost negligible. However, the system evolves slowly, and at t = 6 the distribution is very close to the lower steady state; see the bottom left subplot in Figure 4.
In the bottom right subplot of Figure 4 we observe the evolution for a negative value of b, for which we know that there is always a unique steady state; its local asymptotic stability seems clear from the numerical experiments.
The number of steady states is related to well-known neuronal phenomena, for instance asynchronous behavior when there exists only one stationary solution. Let us mention that bi- and multi-stable networks have been used to describe binocular rivalry in visual perception [24] and the process of decision making [25]. Our simulations show that the simple NNLIF model (1.4) describes networks with a single steady state as well as with several stationary states, in terms of the connectivity parameter b.
No steady states. - The results in Section 3 indicate that there are no steady states for b = 3. In Figure 6 we observe the time evolution of the distribution function p for this choice of the connectivity parameter b. In Figure 4 (top right) we show the time evolution of the firing rate, which seems to blow up in finite time. We observe how the distribution function becomes increasingly peaked at V_R and V_F, producing an ever larger value of the firing rate. Synchronization is a phenomenon of interest to neuroscientists; these figures do not display it, but they do not show desynchronization either, since there are no steady states and no convergence of the firing rate. It could therefore be possible to find periodic solutions describing this phenomenon, allowing for the formation of Dirac deltas in the firing rate. In this way, it could be related to the situation we observe in Figures 7 and 8, where blow-up is analyzed, since the firing rate seems to tend to a Dirac delta.

Blow-up. - According to our blow-up Theorem 2.2, blow-up in finite time of the solution happens for any value of b > 0 if the initial data is concentrated closely enough to the firing potential. In Figures 7 and 8, we show the time evolution of the firing rate for initial data with mass concentrated close to V_F, for values of b for which there are either one or two stationary states. The firing rate increases without bound up to the final computing time. The blow-up condition in Theorem 2.2 does not seem so restrictive as to require initial data close to a Dirac delta at V_F, as shown in Figure 7, where the initial condition is far from a Dirac delta at V_F.
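The blow-up scenario can also be explored at the particle level, using the stochastic description recalled in the Introduction. The sketch below is ours, not the authors' code: it assumes the normalized mean-field dynamics dv = (−v + bN(t)) dt + √(2a) dW with reset from V_F to V_R, the parameter values V_R = 1, V_F = 2, a = 1, b = 1.5, and initial data concentrated near V_F; the empirical firing rate then grows sharply, qualitatively mirroring Figures 7 and 8.

```python
import math
import random

# Assumed normalized parameters (our choice for illustration; not fixed
# in this excerpt): reset/firing potentials, diffusion, connectivity.
V_R, V_F, A, B = 1.0, 2.0, 1.0, 1.5

def simulate(n_part=1000, dt=1e-3, t_end=0.5, v0=1.9, seed=0):
    """Euler-Maruyama for an interacting-particle version of the model.
    Each step: drift -v dt, plus the mean-field kick b N dt estimated from
    the previous step's firing fraction, plus Gaussian noise of variance
    2a dt; particles reaching V_F are reset to V_R and counted as spikes."""
    rng = random.Random(seed)
    v = [v0] * n_part                 # initial data concentrated near V_F
    sigma = math.sqrt(2.0 * A)
    kick = 0.0                        # b * N dt carried over from last step
    rates = []
    for _ in range(int(round(t_end / dt))):
        fired = 0
        for i in range(n_part):
            v[i] += -v[i] * dt + kick + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if v[i] >= V_F:           # spike: reset to V_R
                v[i] = V_R
                fired += 1
        rate = fired / (n_part * dt)  # empirical firing rate N(t)
        kick = B * rate * dt
        rates.append(rate)
    return v, rates

v_final, rates = simulate()
print("peak empirical firing rate:", max(rates))
```

Decreasing v0 toward V_R, or taking b < 0, removes the runaway growth of the rate, consistent with the stable regimes discussed above.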
Let us finally mention that blow-up also appears numerically in the case a(N) = a_0 + a_1 N, but here the blow-up scenario is characterized by a break-up of the condition under which (1.6) has a unique solution N. Therefore, the blow-up in the value of the firing rate appears even if the derivative of p at the firing voltage does not diverge. This kind of behavior could be interpreted as a synchronization of part of the network, since the firing rate tends to a Dirac delta.
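The mechanism can be made explicit. The computation below is our reconstruction, assuming that the boundary relation (1.6) has the usual form N(t) = −a(N(t)) ∂_v p(V_F, t); that exact form is not shown in this excerpt.

```latex
% Assumed form of (1.6):  N(t) = -a(N(t))\,\partial_v p(V_F,t).
% For a(N) = a_0 + a_1 N this is an implicit equation for N(t):
N = -\bigl(a_0 + a_1 N\bigr)\,\partial_v p(V_F,t)
\quad\Longrightarrow\quad
N\,\bigl(1 + a_1\,\partial_v p(V_F,t)\bigr) = -a_0\,\partial_v p(V_F,t).
% A unique nonnegative solution N exists while
1 + a_1\,\partial_v p(V_F,t) > 0,
% and N \to +\infty as this quantity tends to 0^+,
% even though \partial_v p(V_F,t) itself remains bounded.
```

This matches the observation above: the firing rate can diverge through the degeneracy of the implicit equation while the derivative of p at V_F stays finite.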

Conclusion
The nonlinear noisy leaky integrate and fire (NNLIF) model is a standard Fokker-Planck equation describing spiking events in neuron networks. It had been observed numerically in various places, but never stated as such, that a blow-up phenomenon can occur in finite time. We have described a class of situations where we can prove that this happens. Remarkably, the system can blow up for every connectivity parameter b > 0, whatever the (stabilizing) noise. The nature of this blow-up is not mathematically established. Nevertheless, our estimates in Lemma 2.3 indicate that it should not come from a vanishing behaviour as v → −∞, nor from a lack of fast decay, because the second moment in v is controlled uniformly in blow-up situations. Additionally, the numerical evidence is that the firing rate N(t) blows up in finite time whenever a singularity occurs in the system. This scenario is compatible with all our theoretical knowledge of the NNLIF model, and in particular with the L^1 estimates on the total network activity (the firing rate N(t)). Further understanding of the nature of the blow-up behavior, and of its possible continuation, is a challenging mathematical issue. This blow-up phenomenon could be related to synchronization of the network; an interpretation in terms of neurophysiology would therefore be interesting.
On the other hand, we have established that the set of steady states can be empty, a single state or two states, depending on the network connectivity. These are all compatible with the blow-up profile and, when they exist, numerics can exhibit convergence to a steady state, that is, to an asynchronous state of the network. Besides a better understanding of the blow-up phenomenon, several questions are left open: is it possible to have three or more steady states? Which of them are stable? Can a bifurcation analysis help to understand and compute the set of steady states?

After a change of variables parameterizing the interval (−∞, V_F] instead of [0, ∞), note that p_∞ is equivalent to the function m both at 0 and at ∞ in terms of asymptotic behavior. We point out that for this kind of function, m(x) = n(x) = K min(x, e^{−x²/2}), Muckenhoupt's criterion for Poincaré's inequality, or its variants in [32, 33], cannot be used, since 1/n(x) is not integrable at zero. However, the inequality (7.1) holds provided that ∫_0^∞ m(y) f(y) dy = 0. The main ingredient in the proof of (7.1) is a splitting of the weighted integral of f² with the help of two auxiliary functions φ and ψ; to conclude the proof, it remains to choose φ and ψ such that A_1, A_2 < ∞. Using L'Hôpital's rule and, after some tedious but straightforward computations and calculus arguments, we obtain that in the case at hand, m(x) = n(x) = K min(x, e^{−x²/2}), we can take φ(x) = ψ(x) = 1 + √x.
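For readability, the weighted Poincaré inequality (7.1) invoked in the last paragraph can be stated as follows; this is a reconstruction from the surrounding discussion (C denotes a positive constant depending only on the weights), not a verbatim quotation of (7.1).

```latex
% Reconstructed statement of inequality (7.1): a Poincare inequality with
% the weights m, n on (0,\infty), which fail Muckenhoupt's criterion at 0.
\int_0^\infty f^2(x)\, m(x)\, dx \;\le\; C \int_0^\infty \bigl(f'(x)\bigr)^2\, n(x)\, dx
\qquad \text{whenever } \int_0^\infty m(y)\, f(y)\, dy = 0,
% with m(x) = n(x) = K\,\min\bigl(x,\; e^{-x^2/2}\bigr).
```

The zero-mean constraint is essential here, since without it the inequality fails for constant functions.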