- Open Access
Analysis of nonlinear noisy integrate & fire neuron models: blow-up and steady states
The Journal of Mathematical Neuroscience volume 1, Article number: 7 (2011)
Nonlinear Noisy Leaky Integrate and Fire (NNLIF) models for neuron networks can be written as Fokker-Planck-Kolmogorov equations on the probability density of neurons, the main parameters in the model being the connectivity of the network and the noise. We analyse several aspects of the NNLIF model: the number of steady states, a priori estimates, blow-up issues and convergence toward equilibrium in the linear case. In particular, for excitatory networks, blow-up always occurs for initial data concentrated close to the firing potential. These results show how critical the balance between noise and excitatory/inhibitory interactions is to the connectivity parameter.
AMS Subject Classification: 35K60, 82C31, 92B20.
The classical description of the dynamics of a large set of neurons is based on deterministic/stochastic differential systems for the excitatory-inhibitory neuron network [1, 2]. One of the most classical models is the so-called noisy leaky integrate and fire (NLIF) model. Here, the dynamical behavior of the ensemble of neurons is encoded in a stochastic differential equation for the evolution in time of the membrane potential of a typical neuron representative of the network. The neurons relax towards their resting potential in the absence of any interaction. All the interactions of the neuron with the network are modelled by an incoming synaptic current I(t). More precisely, the evolution of the membrane potential V(t) follows, see [3–8],
C_m dV/dt = −g_L (V − V_L) + I(t), (1.1)

where C_m is the capacitance of the membrane and g_L is the leak conductance, normally taken to be constants, with τ_m = C_m/g_L being the typical relaxation time of the potential towards the leak reversal (resting) potential V_L. Here, the synaptic current I(t) takes the form of a stochastic process given by:

I(t) = J_E Σ_{i=1}^{C_E} Σ_j δ(t − t_E^{i,j}) − J_I Σ_{i=1}^{C_I} Σ_j δ(t − t_I^{i,j}), (1.2)
where δ is the Dirac delta at 0. Here, J_E and J_I are the strengths of the synapses, C_E and C_I are the total numbers of presynaptic neurons, and t_E^{i,j} and t_I^{i,j} are the times of the j-th spike coming from the i-th presynaptic neuron, for excitatory and inhibitory neurons respectively. The stochastic character is embedded in the distribution of the spike times of neurons. Actually, each neuron is assumed to spike according to a stationary Poisson process with constant probability of emitting a spike per unit time ν. Moreover, all these processes are assumed to be independent between neurons. With these assumptions, the average value of the current and its variance are given by μ_C = bν and σ_C² = (C_E J_E² + C_I J_I²) ν, with b = C_E J_E − C_I J_I. We will say that the network is average-excitatory (average-inhibitory resp.) if b > 0 (b < 0 resp.).
Since the discrete Poisson processes are still very difficult to analyze, many authors in the literature [3–5, 7–9] have adopted the diffusion approximation, where the synaptic current is approximated by a continuous-in-time stochastic process of Ornstein-Uhlenbeck type with the same mean and variance as the Poissonian spike-train process. More precisely, we approximate I(t) in (1.2) as

I(t) dt ≈ μ_C dt + σ_C dB_t,
where B_t is the standard Brownian motion, that is, its increments over disjoint intervals are independent Gaussian random variables of zero mean and variance equal to the interval length. We refer to the work  for a nice review and discussion of the diffusion approximation, which becomes exact in the infinitely large network limit, if the synaptic efficacies J_E and J_I are scaled appropriately with the network sizes C_E and C_I.
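The moment matching behind this approximation can be checked with a quick Monte Carlo experiment. The following sketch uses illustrative parameter values (chosen here, not taken from any reference) and compares the empirical mean and variance of the synaptic charge delivered in a window of length dt against the theoretical values bν dt and (C_E J_E² + C_I J_I²) ν dt:

```python
import numpy as np

rng = np.random.default_rng(0)
nu, dt = 20.0, 1e-3            # presynaptic firing rate and window length (illustrative)
CE, JE = 800, 0.5              # number and strength of excitatory synapses
CI, JI = 200, 1.0              # number and strength of inhibitory synapses
n = 200_000                    # Monte Carlo samples

# spike counts in a window are Poisson; the delivered charge is J_E k_E - J_I k_I
kE = rng.poisson(CE * nu * dt, size=n)
kI = rng.poisson(CI * nu * dt, size=n)
dI = JE * kE - JI * kI

mean_th = (CE * JE - CI * JI) * nu * dt        # = b nu dt
var_th = (CE * JE**2 + CI * JI**2) * nu * dt   # = sigma_C^2 dt
print(dI.mean(), mean_th)                      # both close to 4.0
print(dI.var(), var_th)                        # both close to 8.0
```

An Ornstein-Uhlenbeck drive with the same μ_C and σ_C reproduces these two moments exactly, which is the content of the approximation.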
Finally, another important ingredient in the modelling comes from the fact that neurons only fire when their voltage reaches a certain threshold value, called the threshold or firing voltage V_F. Once this voltage is attained, they discharge themselves, sending a spike signal over the network, and we assume that they instantaneously relax toward a reset value of the voltage V_R. This firing is fundamental for the interactions with the network, which may help increase the membrane potential up to the maximum level V_F (excitatory synapses), or decrease it (inhibitory synapses). Choosing our voltage and time units in such a way that C_m = g_L = 1, we can summarize our approximation to the stochastic differential equation model (1.1) as the evolution given by

dV = (−V + V_L + μ_C) dt + σ_C dB_t (1.3)
for V ≤ V_F, with the jump process: V(t_0^+) = V_R whenever at t_0 the voltage achieves the threshold value, V(t_0^−) = V_F; here V_L < V_R < V_F. Finally, we have to specify the probability of firing per unit time of the Poissonian spike train ν. This is the so-called total firing rate and it should be self-consistently computed from the fully coupled network together with some external stimuli. Therefore, the firing rate is computed as ν = ν_ext + N(t), see  for instance, where N(t) is the mean firing rate of the network and ν_ext ≥ 0 is the rate of external stimuli. The value of N(t) is then computed as the flux of neurons across the threshold or firing voltage V_F. We finally refer to  for a nice brief introduction to this subject.
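Before passing to the partial differential equation, the model (1.3) with reset can be explored directly by an Euler-Maruyama simulation of a finite population in which the empirical firing rate is fed back into the drift. This is only a sketch written in the rescaled form used later in the paper (V_L translated to 0, drift −v + bN, constant diffusion a, so the noise amplitude is √(2a)); all numerical values are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
M, dt, T = 5000, 1e-3, 10.0             # population size, time step, horizon
b, a0, VF, VR = 0.5, 1.0, 2.0, 1.0      # connectivity, diffusion, threshold, reset
v = rng.normal(0.0, 0.5, size=M)        # initial membrane potentials
N, rates = 0.0, []
for _ in range(int(T / dt)):
    # dV = (-V + bN) dt + sqrt(2 a) dB: diffusion coefficient a = (noise amplitude)^2 / 2
    v += (-v + b * N) * dt + np.sqrt(2 * a0 * dt) * rng.standard_normal(M)
    fired = v >= VF                      # neurons reaching the threshold...
    N = fired.sum() / (M * dt)           # ...define the empirical firing rate...
    v[fired] = VR                        # ...and are instantaneously reset to V_R
    rates.append(N)
print("firing rate averaged over the second half:", np.mean(rates[len(rates) // 2:]))
```

After a transient, the averaged rate settles near the self-consistent stationary value that the Fokker-Planck analysis of Section 3 characterizes.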
Coming back to the diffusion approximation in (1.3), we can write a partial differential equation for the evolution of the probability density p(v, t) ≥ 0 of finding neurons at a voltage v at a time t ≥ 0. A heuristic argument using Itô's rule [3–5, 7–9, 11] gives the backward Kolmogorov or Fokker-Planck equation with sources

∂p/∂t (v, t) + ∂/∂v [h(v, N(t)) p(v, t)] − a(N(t)) ∂²p/∂v² (v, t) = δ(v − V_R) N(t), v ≤ V_F, (1.4)

where the drift and diffusion coefficients are h(v, N(t)) = −v + V_L + μ_C and a(N(t)) = σ_C²/2, depending on N(t) through ν = ν_ext + N(t). The presence of the source term in the right-hand side is due to all the neurons that fired at time t, sent the signal to the network and then had their voltage immediately reset to the value V_R. Moreover, no neuron should have a voltage above the firing voltage V_F, due to the instantaneous discharge of the neurons to the reset value V_R; we therefore complement (1.4) with Dirichlet and initial boundary conditions

p(V_F, t) = 0, p(−∞, t) = 0, p(v, 0) = p_0(v) ≥ 0. (1.5)
Equation (1.4) should describe the evolution of a probability density, therefore

∫_{−∞}^{V_F} p(v, t) dv = ∫_{−∞}^{V_F} p_0(v) dv = 1

for all t ≥ 0. Formally, this conservation should come from integrating (1.4) and using the boundary conditions (1.5). It is straightforward to check that, for smooth solutions, this conservation is equivalent to characterizing the mean firing rate of the network as the flux of neurons at the firing voltage. More precisely, the mean firing rate is implicitly given by

N(t) = −a(N(t)) ∂p/∂v (V_F, t). (1.6)
Here, the right-hand side is nonnegative since p ≥ 0 on (−∞, V_F] and p(V_F, t) = 0, so that ∂p/∂v (V_F, t) ≤ 0. In particular, this imposes a limitation on the growth of the function a so that (1.6) has a unique solution N. Let us mention that a rigorous passage from the stochastic differential equation with jump process (1.3) to the nonlinear equation (1.4)-(1.6) is a very interesting issue, but outside the scope of this paper; see [12, 13] for related results in which the nonlinearities are nonlocal functionals.
The above Fokker-Planck equation has been widely used in neuroscience. Often the authors prefer to write it in an equivalent but less singular form. To avoid the Dirac delta in the right-hand side, one can also set the same equation on (−∞, V_R) ∪ (V_R, V_F) and introduce the jump condition

p(V_R^−, t) = p(V_R^+, t), ∂p/∂v (V_R^−, t) − ∂p/∂v (V_R^+, t) = N(t)/a(N(t)).
This is completely transparent in our analysis, which relies on a weak formulation that applies to both settings.
Finally, let us choose a new voltage variable by translating it by the factor V_L while, for the sake of clarity, keeping the notation for the rest of the potentials involved, V_R and V_F. In these new variables, the drift and diffusion coefficients are of the form

h(v, N) = −v + bN and a(N) = a_0 + a_1 N, (1.7)

where b > 0 for excitatory-average networks and b < 0 for inhibitory-average networks, and a_0 > 0, a_1 ≥ 0. Some results in this work can be obtained for more general drift and diffusion coefficients; the precise assumptions will be specified in each result. Periodic solutions have been numerically reported and analysed in the case of the Fokker-Planck equation for uncoupled neurons in [14, 15]. These works also study the stationary solutions for fully coupled networks, obtaining and solving numerically the implicit relation that the firing rate N has to satisfy; see Section 3 for more details.
There are several other routes towards the modeling of spiking neurons that are related to ours and that have been used in neuroscience, see . Among them are the deterministic I&F models with adaptation, which are known for fitting experimental data well . General models of this type were unified and studied in terms of neuronal behaviors in . In this case it is known that in the quadratic (or merely superlinear) case, the model can lead to blow-up  in the absence of a fixed threshold. We point out that the nature of this blow-up is completely different from the one discussed in this paper. One can also introduce gating variables in neuron networks, which leads to a kinetic equation, see  and the references therein. Another method consists in coding the information in the distribution of times elapsed between discharges [21, 22]; this leads to nonlinear models that naturally exhibit periodic activity, and blow-up cannot happen. Nonlinear IF models are able to produce different patterns of activity and excitability types, while linear models are not.
In this work we analyse certain properties of the solutions to (1.4)-(1.5) with the nonlinear term due to the coupling through the mean firing rate given by (1.6). The next section is devoted to finite time blow-up of weak solutions to (1.4)-(1.6). In short, we show that whatever the value of b > 0 is, we can find suitable initial data concentrated enough at the firing potential such that weak solutions do not exist for all times. We remark that, in the same sense as Brunel in , we use the term asynchronous for network states for which the firing rate tends asymptotically to a constant in time, while we call synchronous those for which this does not happen. Therefore, a possible interpretation of the blow-up is that synchronization occurs in the model, since the firing rate diverges at a fixed time, possibly creating a strong partial synchronization, that is, a part of the network firing at the same time. One could also consider the blow-up as an artifact of these solutions, since neurons firing arbitrarily fast is not biologically plausible. As long as the solution exists in the sense specified in Section 2, we can get a priori estimates on the firing rate. Section 3 deals with the stationary states of (1.4)-(1.6). We show that there is a unique stationary state for b ≤ 0 and a constant diffusion coefficient, but that for b > 0 different cases may happen: one, two or no stationary states, depending on how large b is. In Section 4, we discuss the linear problem with constant coefficients, for which the general relative entropy principle applies, implying exponential convergence towards equilibrium. Finally, by means of numerical simulations, in Section 5 we illustrate the results of the previous sections about blow-up and steady states.
Moreover, this numerical analysis allows us to conjecture about nonlinear stability properties of the stationary states: in the case of only one steady state it is asymptotically stable, and in the case of two different stationary solutions the results show that the one with the lower firing rate is locally asymptotically stable, while the one with the higher stationary firing value is either unstable or has a very small region of attraction. Our results and simulations describe situations which can be identified with neuronal phenomena such as synchronization/asynchronization of a network and bistability of networks. Bi- and multi-stable networks have been used, for instance, in models of visual perception and decision making [23–25]. Our analysis in Sections 2, 3 and 5 implies that this simple model encodes complicated dynamics, in the sense that, only in terms of the connectivity parameter b, very different situations can be described with this model: blow-up, no steady state, only one steady state and several stationary states.
2 Finite time blow-up and a priori estimates for weak solutions
Since we study a nonlinear version of the backward Kolmogorov or Fokker-Planck equation (1.4), we start with the notion of solution:
Definition 2.1 We say that a pair of nonnegative functions (p, N), with p ∈ L^∞(R^+; L^1(−∞, V_F)) and N ∈ L^1_loc(R^+), is a weak solution of (1.4)-(1.7) if for any test function φ(v, t) ∈ C^∞((−∞, V_F] × [0, T]) such that ∂²φ/∂v² and v ∂φ/∂v are bounded, we have
Here, the notation L^q(Ω), 1 ≤ q < ∞, refers to the space of functions whose q-th power is integrable in Ω, while L^∞(Ω) corresponds to the space of bounded functions in Ω. The set of infinitely differentiable functions in Ω is denoted by C^∞(Ω); such functions are used as test functions in the notion of weak solution. These non-negativity assumptions are reasonable. Indeed, for a given nonnegative firing rate, if we were to replace N by it in the right-hand side of (1.4), we would obtain a linear equation whose solution is non-negative, and (1.6) then gives a nonnegative firing rate; that this fixed point argument may work is a more involved issue, since we prove that there are not always global solutions. This requires suitable functional spaces and motivates the a priori estimates that we derive at the end of this section.
Let us remark that the growth condition on the test function together with the assumption (1.7) imply that the term involving makes sense. By choosing test functions of the form , this formulation is equivalent to say that for all such that , we have that
holds in the distributional sense. It is trivial to check that weak solutions conserve the mass of the initial data by choosing in (2.2), and thus,
The first result we show is that global-in-time weak solutions of (1.4)-(1.6) do not exist for all initial data in the case of an average-excitatory network. This result holds with less stringent hypotheses on the coefficients than in (1.7) with an analogous notion of weak solution as in Definition 2.1.
Theorem 2.2 (Blow-up)
Assume that the drift and diffusion coefficients satisfy
for all v ≤ V_F and all N ≥ 0, and let us consider the average-excitatory network, where b > 0. Choose μ > 0 large enough. If the initial data is concentrated enough around V_F, in the sense that the exponential moment ∫ e^{μv} p_0(v) dv is close enough to its maximal possible value e^{μ V_F}, then there are no global-in-time weak solutions to (1.4)-(1.6).
Proof We choose a multiplier of the form φ(v) = e^{μv} with μ > 0 and define the number
by hypotheses. For a weak solution according to (2.1), we find from (2.2) that
where (2.4) and the fact that was used. Let us now choose μ large enough such that according to our hypotheses and denote
If initially and using Gronwall’s Lemma since , we have that , for all , and back to (2.5) we find
which in turn implies,
On the other hand, since p(·, t) is a probability density, see (2.3), we have

∫ e^{μv} p(v, t) dv ≤ e^{μ V_F},

leading to a contradiction.
It remains to show that the set of initial data satisfying the size condition is not empty. To verify this, we can approximate, as closely as we want, an initial Dirac mass at V_F by smooth initial probability densities, which gives the condition
This can be equivalently written as
Choosing μ large enough, these conditions are obviously fulfilled. □
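Numerically, the concentration condition is easy to meet: for Gaussians centered at V_F, truncated to (−∞, V_F] and normalized, the exponential moment ∫ e^{μv} p_0(v) dv approaches its maximal possible value e^{μ V_F} as the variance shrinks. A small check with assumed values V_F = 2 and μ = 3 (both illustrative):

```python
import numpy as np

VF, mu = 2.0, 3.0
v = np.linspace(-20.0, VF, 200_001)

def relative_moment(sigma):
    # Gaussian centered at V_F, truncated to (-inf, V_F] and renormalized
    p0 = np.exp(-(v - VF) ** 2 / (2 * sigma ** 2))
    p0 /= np.trapz(p0, v)
    # exponential moment of p0, relative to its maximal possible value e^{mu V_F}
    return np.trapz(np.exp(mu * v) * p0, v) / np.exp(mu * VF)

for s in (0.5, 0.1, 0.02):
    print(f"sigma = {s}: moment ratio = {relative_moment(s):.4f}")
```

As the width shrinks, the ratio tends to 1, so data concentrated close enough to V_F always enter the regime of Theorem 2.2.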
As usual for this type of blow-up result, similar in spirit to the classical Keller-Segel model for chemotaxis [26, 27], the proof only ensures that solutions for those initial data do not exist beyond a finite maximal time of existence. It does not characterize the nature of the first singularity which occurs. It implies that either the decay at infinity fails, although this is not probable, meaning that the time evolution of the probability densities ceases to be tight, or the firing rate may become a singular measure in finite time instead of being an integrable function. Actually, in the numerical computations shown in Section 5, we observe a blow-up in the value of the mean firing rate in finite time. Continuing the solution would require a modification of the notion of solution introduced in Definition 2.1. This would be useful since the firing rate does not become constant in time, and consequently a possible interpretation is that synchronization occurs, a phenomenon that is of interest to neuroscientists; see also the comments in the introduction and the conclusions.
Although in this paper the nature of the blow-up is not mathematically identified, we devote the rest of the section to proving some a priori estimates which shed some light in this direction. To be more precise, our estimates indicate that this blow-up should not come from a loss of mass at −∞, nor from a lack of fast decay, because the second moment in v is controlled uniformly in blow-up situations. We obtain these a priori bounds with the help of appropriate choices of the test function φ in (2.1). Some of these choices are not allowed due to the growth at −∞ of the test functions. We will say that a weak solution is fast-decaying at −∞ if it is a weak solution in the sense of Definition 2.1 and the weak formulation in (2.2) holds for all test functions growing algebraically in v.
Lemma 2.3 (A priori estimates)
Assume (1.7) on the drift and diffusion coefficients, and that (p, N) is a global-in-time solution of (1.4)-(1.6) in the sense of Definition 2.1, fast decaying at −∞; then the following a priori estimates hold for all times:
Moreover, if in addition a is constant then
Proof Using (1.7) together with our decay assumption at −∞, we may use the test function . Then (2.2) gives
This is also written as
To prove (i), we notice that with our condition on b, the term in is nonpositive and the first result follows from Gronwall’s inequality. The second result just follows after integration in time.
To prove (ii), we first use again (2.7) and, because the term in is nonnegative, we find the first result. To obtain the second estimate (2.6), given , we can always choose a smooth truncation function such that
with outside the interval such that
and thus with size of order of . In other words, we have chosen a uniform approximation of the truncation with obtained by integrating twice a smooth suitable approximation of the . Then, equation (2.2) gives
Since is positive and non-decreasing and using (2.8), we get for all there exists small enough, such that for all :
Due to the hypotheses , for any γ such that , we can find small enough such that
Taking into account (2.8), (2.9), and that is a probability density, we have for small enough
Choosing now for instance, integration in time of the last inequality leads to the desired inequality (2.6). □
Corollary 2.4 Under the assumptions of Lemma 2.3 and assumingand, then the following a priori estimates hold:
If additionally a is constant, for allwe have
If additionally, then
Proof We use again the weak formulation (2.1) with as test function and get
thanks to the first statement of Lemma 2.3(ii).
To prove (i), we just use the second statement of Lemma 2.3(ii) valid for a constant which tells us that the time integration of the right-hand side grows at most linearly in time and so does .
To prove (ii), we just use that the bracket is nonpositive and the result follows. □
3 Steady states
This section is devoted to finding all smooth stationary solutions of the problem (1.4)-(1.6) in the particular relevant case of a drift of the form h(v, N) = −v + bN. Let us search for continuous stationary solutions p of (1.4) such that p is regular except possibly at v = V_R, where it is Lipschitz. Using the definition in (2.2), we are then allowed, by a direct integration by parts in the second derivative term of p, to deduce that p satisfies
in the sense of distributions, with H being the Heaviside function, that is, H(u) = 1 for u ≥ 0 and H(u) = 0 for u < 0. Therefore, we conclude that
The definition of N in (1.6) and the Dirichlet boundary condition (1.5) imply, by evaluating this expression at v = V_F, that the constant of integration vanishes. Using again the boundary condition (1.5), p(V_F) = 0, we may finally integrate again and find that
which can be rewritten, using the expression of the Heaviside function, as
Moreover, the firing rate N in the stationary state is determined by the normalization condition (2.3), or equivalently,
Summarizing, all solutions p of the stationary problem (3.1), with the regularity referred to above, are of the form given by the expression (3.2), where N is any positive solution of the implicit equation (3.3).
Let us first comment that in the linear case, b = 0 and a constant, we get a unique stationary state given in terms of the Dawson function, with the normalizing constant chosen so as to have unit mass over the interval (−∞, V_F).
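For concreteness, the linear stationary profile and its firing rate can be evaluated numerically from the closed formula p_∞(v) ∝ e^{−v²/(2a)} ∫_{max(v,V_R)}^{V_F} e^{w²/(2a)} dw. The sketch below assumes the values a = 1, V_R = 1, V_F = 2 also used in the numerical section:

```python
import numpy as np

a0, VR, VF = 1.0, 1.0, 2.0
v = np.linspace(-10.0, VF, 20_001)

# tabulate the inner integral \int_{max(v, VR)}^{VF} e^{w^2/(2a)} dw once
w = np.linspace(VR, VF, 4001)
ew = np.exp(w ** 2 / (2 * a0))
C = np.concatenate(([0.0], np.cumsum((ew[:-1] + ew[1:]) / 2 * (w[1] - w[0]))))
inner = np.interp(np.clip(v, VR, VF), w, C[-1] - C)

shape = np.exp(-v ** 2 / (2 * a0)) * inner / a0
N = 1.0 / np.trapz(shape, v)        # normalization to unit mass fixes the firing rate
p_inf = N * shape
print("stationary firing rate N =", round(N, 4))
```

The resulting profile vanishes at V_F and has the Lipschitz kink at V_R coming from the jump in the flux, exactly as discussed below.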
The rest of this section is devoted to finding conditions on the parameters of the model clarifying the number of solutions to (3.3). With this aim, it is convenient to perform a change of variables and to use the new notations
where the N dependency has been avoided to simplify notation. Then, as in , we can rewrite the previous integral (and thus the condition for a steady state) as
Another alternative form of follows from the change of variables and to get
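The number of solutions of the steady-state condition can be explored numerically before any analysis: a steady state is a firing rate N for which the profile (3.2) carries unit mass. The following sketch scans N for several values of b (the values a = 1, V_R = 1, V_F = 2 follow the later numerical section; the b values are illustrative):

```python
import numpy as np

a0, VR, VF = 1.0, 1.0, 2.0

def total_mass(N, b):
    """Mass of the stationary profile (3.2) for a trial firing rate N;
    steady states are the roots of total_mass(N, b) = 1."""
    v = np.linspace(min(b * N, VR) - 12.0, VF, 4001)
    w = np.linspace(VR, VF, 2001)
    ew = np.exp((w - b * N) ** 2 / (2 * a0))
    C = np.concatenate(([0.0], np.cumsum((ew[:-1] + ew[1:]) / 2 * (w[1] - w[0]))))
    inner = np.interp(np.clip(v, VR, VF), w, C[-1] - C)   # int from max(v, VR) to VF
    shape = np.exp(-(v - b * N) ** 2 / (2 * a0)) * inner / a0
    return N * np.trapz(shape, v)

counts = {}
for b in (-1.0, 0.5, 1.5, 3.0):
    Ns = np.linspace(0.02, 8.0, 400)
    f = np.array([total_mass(N, b) for N in Ns]) - 1.0
    counts[b] = int(np.sum(np.sign(f[:-1]) != np.sign(f[1:])))
    print(f"b = {b:4.1f}: {counts[b]} steady state(s) on the scanned range")
```

The counts obtained this way are consistent with the one/two/none alternative established in Theorem 3.1 below: a unique steady state for inhibitory or weakly excitatory coupling, two for an intermediate range, and none for large connectivity.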
3.2 Case of a constant diffusion coefficient
We are now ready to state our main result on steady states.
Theorem 3.1 Assume h(v, N) = −v + bN and that a is constant.
For b < 0, and for b > 0 small enough, there is a unique steady state to (1.4)-(1.6).
Under either the condition (3.8)
or the condition
then there exists at least one steady state solution to (1.4)-(1.6).
If both (3.9) and  hold, then there are at least two steady states to (1.4)-(1.6).
There is no steady state to (1.4)-(1.6) under the high connectivity condition (3.10)
Remark 3.2 It is natural to relate the absence of a steady state for b large with blow-up of solutions. However, Theorem 2.2 in Section 2 shows this is not the only possible cause, since blow-up can happen for initial data concentrated enough around V_F independently of the value of b > 0. See also Section 5 for related numerical results.
Proof Let us first study properties of the function I. To do that, we rewrite (3.7) as
Taking the function and Taylor expanding up to second order at , we get with , , and . It is easy to see that
for all . By distinguishing the cases based on the signs of and , this Taylor expansion implies that
for all N ≥ 0. Then, a direct application of the dominated convergence theorem and continuity theorems for integrals depending on parameters shows that the function I is continuous in N on [0, ∞). Moreover, the function I is smooth in N, since all its derivatives can be computed by differentiating under the integral sign, by direct application of dominated convergence theorems and differentiation theorems for integrals depending on parameters. In particular,
and for all integers ,
As a consequence, we deduce:
Case b < 0: I is an increasing strictly convex function and thus
Case b > 0: I is a decreasing convex function. Also, it is obvious from the previous expansion (3.11) and the dominated convergence theorem that
It is also useful to keep in mind that, thanks to the form of I in (3.6),
Now, let us show that for b > 0 we have
Using (3.11), we deduce
A direct application of the dominated convergence theorem shows that the right-hand side converges to 0 as N → ∞, since the integrand is bounded uniformly in N and s. Thus, the computation of the limit reduces to showing
With this aim, we rewrite the integral in terms of the complementary error function defined as
Finally, we can obtain the limit (3.14) using L’Hôpital’s rule
With this analysis of the function I, we can now prove each of the statements of Theorem 3.1:
Proof of (i) Let us start with the case . Here, the function is increasing, starting at due to (3.12) and such that
Therefore, the two functions cross at a single point.
Now, for the case small, we first remark that similar dominated convergence arguments as above show that both and are smooth functions of b. Moreover, it is simple to realize that is a decreasing function of the parameter b. Now, choosing , then for all where denotes the function associated to the parameter . Using the limit (3.13), we can now infer the existence of depending only on such that
for all . Therefore, by continuity of there are solutions to and all possible crossings of and are on the interval . We observe that both and converge towards the constant function and to 0 respectively, uniformly in the interval as . Therefore, for b small is strictly increasing on the interval and there is a unique solution to . □
Proof of (ii)
Case of (3.8) The claim that there are solutions in this case is a direct consequence of the continuity of I, (3.12) and (3.13).
Case of (3.9) We are going to prove that the desired inequality holds on a suitable interval of N, which concludes the existence of a steady state since (3.12) implies the reverse behavior for small N. Condition (3.9) only asserts that this interval for N is not empty. To do so, we show that
which obviously concludes the desired inequality for the interval of N under consideration.
The condition is equivalent to , therefore, using (3.5) and the expression for in (3.6), we deduce
Since and is an increasing function for , then on , and we conclude
Proof of (iii) Under the condition (3.9), we have shown in the previous point the existence of an interval where I lies above 1/N. On the other hand, (3.12) implies that I lies below 1/N for N small, and the remaining condition implies the same for N large enough, due to the limit (3.13); thus there are at least two crossings between I and 1/N. □
Proof of (iv) Under assumption (3.10) on b, it is easy to check that the following inequalities hold:
We consider N such that , this means that . We use the formula (3.7) for and write the inequalities
where the mean-value theorem and were used. Then, we conclude that
Therefore, using Inequality (3.16):
and, due to the fact that I is decreasing and Inequality (3.15), we have the same bound for the remaining values of N. In this way, we have shown that there is no crossing for any N, and consequently there is no steady state. □
Remark 3.3 The functions involved in the crossing condition are depicted in Figure 1, illustrating the main result: steady states exist for small b and do not exist for large b, while there is an intermediate range with existence of two stationary states. The numerical plots of the function I might indicate that there are only three possibilities: one stationary state, two stationary states or no stationary state. However, we are not able to prove or disprove the uniqueness of a maximum of the function I, which would eventually give this sharp result.
Remark 3.4 The condition (3.9) is not optimal and it can be improved by using one more term in the series expansion of the exponentials inside the integral in the expression of I in (3.7). More precisely, we use
In this way, we get
Then, condition (3.9) can be improved to
Of course, this last inequality is not optimal either for the same reason as before.
3.3 Case of a linear diffusion coefficient
We now treat the case of a linear diffusion coefficient, a(N) = a_0 + a_1 N with a_0 > 0 and a_1 > 0. Proceeding as above, we can obtain from (3.7) the expression of its derivative
Therefore, I is decreasing, since
Moreover, we can check that the computation of the limit (3.13) still holds. Actually, we have
where we have used the indicated change of variables and L'Hôpital's rule. In this case, we can observe again, by the same proof as before, that the required behavior holds for N large, and thus, by continuity, there is at least one solution. Nevertheless, it seems difficult to clarify perfectly the number of solutions due to the competing monotone functions in (3.17).
The generalization of part of Theorem 3.1 is contained in the following result. We skip its proof, since it essentially follows the same steps as before with the new ingredients just mentioned.
Corollary 3.5 Assume a(N) = a_0 + a_1 N, with a_0 > 0 and a_1 > 0.
Under either the condition, or the conditionsand, then there exists at least one steady state solution to (1.4)-(1.6).
If both conditions hold, then there are at least two steady states to (1.4)-(1.6).
There is no steady state to (1.4)-(1.6) for large enough connectivity.
These behaviours are depicted in Figure 2. Let us point out that if a is linear and b < 0, I need not be strictly increasing as in the constant diffusion case, and it may have a minimum.
At this point, a natural question is what happens with the stability of these steady states. In the next section we study it in the linear case, when the model presents only one steady state. An extension of the same techniques, entropy methods, to the nonlinear case is not straightforward at all. However, the results obtained in the linear case lead us to expect that for a small connectivity parameter b the unique steady state could be stable. On the other hand, numerical results presented in Section 5 give some numerical evidence of the stability/instability in the different situations described by this model: only one steady state or two steady states; see that section for details.
4 Linear equation and relaxation
We study specifically the case of a linear equation, that is, b = 0 and a constant:
For later purposes, we recall that the steady state given in (3.4) satisfies, for the case at hand, the equation
We will assume in this section that the initial data satisfies, for some
Then, we take for granted that solutions of the linear problem exist, with the regularity needed in each result below and such that for all
The estimate (4.4) follows a posteriori from the relative entropy inequality that we state below; see more comments at the end of this section. This is an indication that the hypothesis (4.3) on the initial data will easily be propagated in time, giving (4.4), once a well-posedness theory of classical fast-decaying solutions is at hand. Such solutions to (4.1) and (4.2) might be obtained by the method developed in  and will be analysed elsewhere.
We prove that the solutions to (4.1) converge in large times to the unique steady state.
Theorem 4.1 (Exponential decay)
Fast-decaying solutions to the equation (4.1) verifying (4.4) satisfy
This result shows that no synchronization of neuronal activity can be expected when the network is not connected, since solutions tend to produce a constant firing rate, a very intuitive conclusion. Because the rate of decay is exponential, we also expect that small connectivity cannot create synchronization either, again an intuitive conclusion, proved rigorously for the elapsed-time structured model in . Also, the proof shows that two relaxation processes are involved in this effect: dissipation by the diffusion term and dissipation by the firing term. These relaxation effects are stated in the following theorem, which also gives the natural bounds for the solutions to equation (4.1) (choosing G(x) = x² gives the natural energy space of the system, a weighted L² space).
Theorem 4.2 (Relative entropy inequality)
Fast-decaying solutions to equation (4.1) verifying (4.4) satisfy, for any smooth convex function G, the inequality
Proof of Theorem 4.1 The proof is standard in Fokker-Planck theory and follows by applying the relative entropy inequality (4.5) with the quadratic function G(x) = x². Then, we obtain
To proceed further, we need an additional technical ingredient that we state and whose proof is postponed to an Appendix.
Proposition 4.3 There exists a positive constant such that

for all functions q with zero mean and vanishing at V_F.
Poincaré’s-like inequality in Proposition 4.3 applied to bounds the right hand side on (4.6)
Finally, Gronwall's lemma directly gives the result. □
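The exponential relaxation can also be observed with a crude explicit finite-difference scheme (a simplified sketch, far cruder than the solvers of Section 5): for b = 0, a = 1, V_R = 1, V_F = 2, the L¹ distance between the numerical solution and the stationary profile computed from the closed formula decays in time.

```python
import numpy as np

a0, VR, VF, vmin = 1.0, 1.0, 2.0, -6.0
J, dt = 600, 2e-5
v = np.linspace(vmin, VF, J + 1)
dv = v[1] - v[0]
iR = int(round((VR - vmin) / dv))            # grid index of the reset potential

# stationary profile: p_inf(v) ~ e^{-v^2/(2a)} \int_{max(v,VR)}^{VF} e^{w^2/(2a)} dw
w = np.linspace(VR, VF, 2001)
ew = np.exp(w ** 2 / (2 * a0))
C = np.concatenate(([0.0], np.cumsum((ew[:-1] + ew[1:]) / 2 * (w[1] - w[0]))))
pinf = np.exp(-v ** 2 / (2 * a0)) * np.interp(np.clip(v, VR, VF), w, C[-1] - C)
pinf /= np.trapz(pinf, v)

p = np.exp(-(v + 2.0) ** 2 / 0.5)            # initial bump far from equilibrium
p[-1] = 0.0
p /= np.trapz(p, v)
hm = -0.5 * (v[:-1] + v[1:])                 # linear drift h(v) = -v at interfaces
dists = []
for step in range(100_000):
    if step % 25_000 == 0:
        dists.append(np.trapz(np.abs(p - pinf), v))
    N = a0 * (p[-2] - p[-1]) / dv            # firing rate = diffusive flux at V_F
    F = np.where(hm > 0, hm * p[:-1], hm * p[1:])       # first-order upwind flux
    p[1:-1] += dt * (-(F[1:] - F[:-1]) / dv
                     + a0 * (p[2:] - 2 * p[1:-1] + p[:-2]) / dv ** 2)
    p[iR] += dt * N / dv                     # firing neurons re-enter at V_R
print("L1 distances at t = 0, 0.5, 1.0, 1.5:", [round(d, 4) for d in dists])
```

The monotone decay of these distances is the discrete footprint of the entropy dissipation quantified in Theorem 4.1.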
To show Theorem 4.2, which was used in the proof of Theorem 4.1, we need the following preliminary computations.
Lemma 4.4 Given a fast-decaying solution p of (4.1) verifying (4.4), the steady state given by (3.4), and G a smooth convex function, the following relations hold:
Proof Since we obtain
Using these two expressions in
we obtain (4.7).
Equation (4.8) is a consequence of Equation (4.7) and the following expressions for the partial derivatives of :
Finally, Equation (4.9) is obtained using Equation (4.8) and the fact that the steady state is a solution of (4.2). □
Proof of Theorem 4.2 We integrate (4.9) from −∞ up to the firing potential, let α tend to 0+, and use L'Hôpital's rule:
Since , then
The Dirichlet boundary condition (1.5) implies that
where we used that
due to (4.10). Collecting all terms leads to the desired inequality. □
Let us finally remark that, as a usual consequence of the General Relative Entropy principle (GRE) , the estimate (4.4) follows by choosing a suitable convex function G. This shows that the bound (4.4) can be proved using (4.3), together with a well-posedness theory of classical solutions to (4.1) fast-decaying at −∞.
5 Numerical results
We consider two different explicit methods to simulate the NNLIF model (1.4). The first one is based on standard shock-capturing methods for the advection term and standard centered finite differences for the second-order term. More precisely, the first-order term is approximated by finite difference WENO schemes .
The second numerical method is based on another finite difference scheme for the Fokker-Planck equation proposed in the literature, called the Chang-Cooper method . This method was also used in  for a computational neuroscience model with variable voltage and conductance. In order to use this method, the first step is to rewrite the Fokker-Planck equation (1.4) in terms of the Maxwellian as follows:
Then, the Chang-Cooper method performs a kind of θ-finite difference approximation; see  for details. The Chang-Cooper method presents difficulties when the firing rate becomes large and the diffusion coefficient is constant. More precisely, given V_F and V_R, if N is large, the drift of the Maxwellian, in terms of which the Fokker-Planck equation is rewritten, practically vanishes on the interval, and this particular Chang-Cooper method is not suitable. Whenever the diffusion coefficient is not constant, this problem disappears.
Summarizing, we consider two different schemes for our simulations: the first one based on WENO finite differences as described above, and the second one based on the cited Chang-Cooper method. In both cases the time evolution is performed with a TVD Runge-Kutta scheme. In Section 2 of  these schemes are explained in detail, and we refer to [30, 31] for a deeper analysis of them.
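As a rough illustration of this class of explicit schemes, the following is a heavily simplified Python sketch: first-order upwinding in place of WENO, forward Euler in place of TVD Runge-Kutta, and illustrative parameter values (the constants a, b, V_F, V_R and the mesh below are assumptions, not the paper's settings):

```python
import numpy as np

# Illustrative parameters: a = diffusion coefficient, b = connectivity,
# V_F = firing potential, V_R = reset potential (all assumed values).
a, b, V_F, V_R = 1.0, 0.5, 2.0, 1.0
v = np.linspace(-4.0, V_F, 401)          # uniform mesh in v
h = v[1] - v[0]
jR = int(np.argmin(np.abs(v - V_R)))     # index of the reset potential

# Maxwellian-type initial datum, normalized to unit mass
p = np.exp(-v**2 / 2.0)
p[0] = p[-1] = 0.0                       # decay / Dirichlet conditions
p /= p.sum() * h

dt = 0.2 * h**2 / a                      # diffusion-limited time step
for _ in range(200):
    N = -a * (p[-1] - p[-2]) / h         # firing rate: N(t) = -a p_v(V_F, t)
    drift = -v + b * N                   # drift field of the NNLIF equation
    F = drift * p                        # advective flux
    # first-order upwind approximation of d/dv (drift * p)
    dF = np.where(drift[1:-1] > 0.0,
                  (F[1:-1] - F[:-2]) / h,
                  (F[2:] - F[1:-1]) / h)
    lap = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / h**2
    p[1:-1] += dt * (-dF + a * lap)
    p[jR] += dt * N / h                  # reinject the fired neurons at V_R
    p[0] = p[-1] = 0.0                   # keep the boundary conditions

mass = p.sum() * h                       # total mass, approximately conserved
```

The reinjection at V_R balances the diffusive outflux through V_F, so the discrete mass is approximately conserved, mirroring the probability interpretation of p.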
In our simulations we consider a uniform mesh in v, for . The value (less than ) is adjusted in the numerical experiments so that , while is fixed to 2 and . As initial data we have taken two different types of functions:
where the mean and the variance are chosen according to the analyzed phenomenon.
Stationary Profiles (3.2) given by
with N an approximate value of the stationary firing rate. We typically consider this kind of initial data to analyze local stability of steady states.
Steady states. - As we show in Section 3, for b positive there is a range of values for which there are one, two, or no steady states. With our simulations we can observe all the cases represented in Figures 1 and 2.
In Figure 3 we show the time evolution of the distribution function , in the case of and , for which there is only one steady state according to Theorem 3.1, taking as initial data a Maxwellian with and in (5.1). We observe that after 3.5 time units the solution numerically reaches the steady state within the imposed tolerance. The top left subplot in Figure 4 shows the time evolution of the firing rate, which becomes constant after some time. This clearly corresponds to the case of a unique locally asymptotically stable stationary state. Let us remark that in the right subplot of Figure 3 we can observe the Lipschitz behavior of the function at , as expected from the jump in the flux, and thus in the derivative, of the solutions and the stationary states; see Section 3.
For , we proved in Section 3 that there are two steady states. From our simulations we can conjecture that the steady state with the larger firing rate is unstable, whereas the stationary solution with the lower firing rate is locally asymptotically stable. We illustrate this situation in the bottom left subplot of Figure 4: starting with a firing rate close to the high stationary firing value, the solution tends to the low firing stationary value.
In Figure 5 we analyze in more detail the behavior of the steady state with the larger firing rate. The left subplot presents the time evolution of the firing rate for different distribution functions, starting from profiles given by the expression (3.2) with N an approximate value of the stationary firing rate. We show that, depending on the initial firing rate considered, the behavior differs: either tends to the lower steady state or goes to infinity. The firing rate for the solution with initial remains almost constant for a period of time. Observe in Figure 5 that the difference between the initial data and the distribution function at time is almost negligible. However, the system evolves slowly, and at the distribution is very close to the lower steady state; see the bottom left subplot in Figure 4.
In the bottom right subplot of Figure 4 we observe the evolution for a negative value of b, for which we know that there is always a unique steady state; its local asymptotic stability seems clear from the numerical experiments.
The number of steady states is related to well-known neuronal phenomena; for instance, asynchronous behavior corresponds to the existence of a single stationary solution. Let us mention that bi- and multi-stable networks have been used to describe binocular rivalry in visual perception  and the process of decision making . Our simulations show that the simple NNLIF model (1.4) describes networks with either a single steady state or several stationary states, depending on the connectivity parameter b.
No steady states. - The results in Section 3 indicate that there are no steady states for . In Figure 6 we observe the time evolution of the distribution function p for this choice of the connectivity parameter b. In Figure 4 (right top) we show the time evolution of the firing rate, which seems to blow up in finite time. We observe how the distribution function becomes more and more peaked at , producing an increasing value of the firing rate. Synchronization is a phenomenon of interest to neuroscientists; in these figures we do not observe it, but they do not show desynchronization either, since there are no steady states and no convergence of the firing rate. Therefore it could be possible to find periodic solutions describing this phenomenon, allowing for the formation of Dirac deltas in the firing rate. In this way, it could be related to the situation observed in Figures 7 and 8, where blow-up is analyzed, since the firing rate seems to tend to a Dirac delta.
Blow-up. - According to our blow-up Theorem 2.2, the solution blows up in finite time for any value of if the initial data is concentrated enough close to the firing potential. In Figures 7 and 8 we show the time evolution of the firing rate for initial data with mass concentrated close to , for values of b for which there are either one or two stationary states. The firing rate increases without bound up to the computing time. The blow-up condition in Theorem 2.2 does not seem so restrictive as to require the initial data to be close to a Dirac delta at , as shown in Figure 7, where the initial condition is far from a Dirac delta at . Let us finally mention that blow-up also appears numerically in the case , but here the blow-up scenario is characterized by the break-up of the condition under which (1.6) has a unique solution N, that is,
Therefore, the blow-up in the value of the firing rate appears even if the derivative of p at the firing voltage does not diverge. This kind of behavior could be interpreted as a synchronization of part of the network, since the firing rate tends to a Dirac delta.
The nonlinear noisy leaky integrate and fire (NNLIF) model is a standard Fokker-Planck equation describing spiking events in neuron networks. It was observed numerically in various places, but never stated as such, that a blow-up phenomenon can occur in finite time. We have described a class of situations where we can prove that this happens. Remarkably, the system can blow up for every connectivity parameter , whatever the (stabilizing) noise.
The nature of this blow-up is not mathematically proved. Nevertheless, our estimates in Lemma 2.3 indicate that it should not come from a vanishing behaviour for , or from a lack of fast decay, because the second moment in v is controlled uniformly in blow-up situations. Additionally, the numerical evidence is that the firing rate blows up in finite time whenever a singularity in the system occurs. This scenario is compatible with all our theoretical knowledge of the NNLIF model, and in particular with the estimates on the total network activity (firing rate ). Further understanding of the nature of the blow-up behavior, and of its possible continuation, is a challenging mathematical issue. This blow-up phenomenon could be related to synchronization of the network; therefore an interpretation in terms of neurophysiology would be interesting.
On the other hand, we have established that the set of steady states can be empty, or consist of a single state or of two states, depending on the network connectivity. These are all compatible with a blow-up profile, and, when steady states exist, numerics can exhibit convergence to one of them, that is, to an asynchronous state of the network. Besides a better understanding of the blow-up phenomena, several questions are left open: is it possible to have three or more steady states? Which of them are stable? Can a bifurcation analysis help to understand and compute the set of steady states?
This appendix is devoted to proving a Hardy-Poincaré-type inequality more general than the one stated in Proposition 4.3. Given on such that , we want to show that for all functions f on such that , the following inequality holds:
provided all the integrals make sense. Proposition 4.3 follows by considering , where K is a constant chosen so that , and by parameterizing the interval from instead of . Note that is equivalent to the function m both at 0 and at ∞ in terms of asymptotic behavior. We point out that for this kind of function, , Muckenhoupt’s criterion for Poincaré’s inequality, or its variants in [32, 33], cannot be used, since is not integrable at zero. However, the inequality (7.1) holds provided that . The main ingredient in the proof of (7.1) is to write
where we used and . In this way we can consider two functions, to be chosen later, to obtain
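The splitting alluded to above can be sketched explicitly, using only the stated normalizations $\int f\,m\,\mathrm{d}u = 0$ and $\int m\,\mathrm{d}u = 1$:

```latex
% Subtracting the weighted mean of f (which vanishes) and writing the
% difference as an integral of f':
f(v) = \int \bigl(f(v)-f(u)\bigr)\,m(u)\,\mathrm{d}u
     = \int m(u) \int_u^v f'(s)\,\mathrm{d}s\,\mathrm{d}u ,
```

and it is this double integral to which Fubini's theorem is applied below.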
Therefore, we have
where, using the notation,
To conclude the proof we analyse both terms. Applying Fubini’s theorem, we obtain
It remains to choose the functions ϕ and ψ such that . Using L’Hôpital’s rule and some tedious but elementary computations, we obtain that in the case at hand, , we can take .
Lapicque L: Recherches quantitatives sur l’excitation électrique des nerfs traitée comme une polarisation. J. Physiol. Pathol. Gen. 1907, 9: 620–635.
Tuckwell H: Introduction to Theoretical Neurobiology. Cambridge Univ. Press, Cambridge; 1988.
Brunel N, Hakim V: Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput. 1999, 11: 1621–1671. 10.1162/089976699300016179
Brunel N: Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 2000, 8: 183–208. 10.1023/A:1008925309027
Renart, A., Brunel, N., Wang, X.-J.: Mean-field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. In: Feng, J. (ed.) Computational Neuroscience: A Comprehensive Approach. Mathematical Biology and Medicine Series. Chapman & Hall/CRC (2004)
Compte A, Brunel N, Goldman-Rakic PS, Wang X-J: Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb. Cortex 2000, 10: 910–923. 10.1093/cercor/10.9.910
Sirovich L, Omurtag A, Lubliner K: Dynamics of neural populations: stability and synchrony. Network 2006, 17: 3–29. 10.1080/09548980500421154
Omurtag A, Knight BW, Sirovich L: On the simulation of large populations of neurons. J. Comput. Neurosci. 2000, 8: 51–63. 10.1023/A:1008964915724
Mattia M, Del Giudice P: Population dynamics of interacting spiking neurons. Phys. Rev. E 2002., 66:
Guillamon T: An introduction to the mathematics of neural activity. Butl. Soc. Catalana Mat. 2004, 19: 25–45.
Risken H: The Fokker-Planck Equation: Methods of solution and applications. Springer-Verlag, Berlin; 1989.
Burkholder, D.L., Pardoux, É., Sznitman, A.: École d’Été de Probabilités de Saint-Flour XIX—1989. In: Hennequin, P. L. (ed.) Lecture Notes in Mathematics, vol. 1464. Springer-Verlag, Berlin (1991). Papers from the school held in Saint-Flour, August 16–September 2, 1989
Bolley, F., Cañizo, J.A., Carrillo, J.A.: Stochastic mean-field limit: non-Lipschitz forces & swarming. Math. Mod. Meth. Appl. Sci. (2011, in press)
Newhall K, Kovačič G, Kramer P, Rangan AV, Cai D: Cascade-induced synchrony in stochastically-driven neuronal networks. Phys. Rev. E 2010., 82:
Newhall K, Kovačič G, Kramer P, Zhou D, Rangan A, Cai D: Dynamics of current-based, poisson driven, integrate-and-fire neuronal networks. Commun. Math. Sci. 2010, 8: 541–600.
Gerstner W, Kistler W: Spiking Neuron Models. Cambridge Univ. Press, Cambridge; 2002.
Brette R, Gerstner W: Adaptive exponential integrate-and-fire model as an effective description of neural activity. J. Neurophysiol. 2005, 94: 3637–3642. 10.1152/jn.00686.2005
Touboul J: Bifurcation analysis of a general class of nonlinear integrate-and-fire neurons. SIAM J. Appl. Math. 2008, 68: 1045–1079. 10.1137/070687268
Touboul J: Importance of the cutoff value in the quadratic adaptive integrate-and-fire model. Neural Comput. 2009, 21: 2114–2122. 10.1162/neco.2009.09-08-853
Cáceres MJ, Carrillo JA, Tao L: A numerical solver for a nonlinear Fokker-Planck equation representation of neuronal network dynamics. J. Comput. Phys. 2011, 230: 1084–1099. 10.1016/j.jcp.2010.10.027
Pham J, Pakdaman K, Champagnat J, Vibert J-F: Activity in sparsely connected excitatory neural networks: effect of connectivity. Neural Netw. 1998, 11: 415–434. 10.1016/S0893-6080(97)00153-6
Pakdaman K, Perthame B, Salort D: Dynamics of a structured neuron population. Nonlinearity 2010, 23: 55–75. 10.1088/0951-7715/23/1/003
Gray CM, Singer W: Stimulus-specific neuronal oscillations in orientation columns of cat visual cortex. Proc. Natl. Acad. Sci. USA 1989, 86: 1698–1702. 10.1073/pnas.86.5.1698
Moreno-Bote R, Rinzel J, Rubin N: Noise-induced alternations in an attractor network model of perceptual bistability. J. Neurophysiol. 2007, 98: 1125–1139. 10.1152/jn.00116.2007
Albantakis L, Deco G: The encoding of alternatives in multiple-choice decision making. Proc. Natl. Acad. Sci. USA 2009, 106: 10308–10313. 10.1073/pnas.0901621106
Blanchet A, Dolbeault J, Perthame B: Two-dimensional Keller-Segel model: optimal critical mass and qualitative properties of the solutions. Electron. J. Differ. Equ. 2006, 44: 1–33. (electronic)
Corrias L, Perthame B, Zaag H: Global solutions of some chemotaxis and angiogenesis systems in high space dimensions. Milan J. Math. 2004, 72: 1–28. 10.1007/s00032-003-0026-x
González MdM, Gualdani MP: Asymptotics for a symmetric equation in price formation. Appl. Math. Optim. 2009, 59: 233–246. 10.1007/s00245-008-9052-y
Michel P, Mischler S, Perthame B: General relative entropy inequality: an illustration on growth models. J. Math. Pures Appl. 2005, 84: 1235–1260. 10.1016/j.matpur.2005.04.001
Shu, C.-W.: Essentially non-oscillatory and weighted essentially non-oscillatory schemes for hyperbolic conservation laws. In: Cockburn, B., Johnson, C., Shu, C.-W., Tadmor, E., Quarteroni, A. (eds.) Advanced Numerical Approximation of Nonlinear Hyperbolic Equations, vol. 1697, pp. 325–432. Springer (1998)
Buet C, Cordier S, Dos Santos V: A conservative and entropy scheme for a simplified model of granular media. Transp. Theory Stat. Phys. 2004, 33: 125–155. 10.1081/TT-120037804
Ledoux M: The Concentration of Measure Phenomenon. 2001.
Barthe F, Roberto C: Modified logarithmic Sobolev inequalities on . Potential Anal. 2008, 29: 167–193. 10.1007/s11118-008-9093-5
The first two authors acknowledge support from the project MTM2008-06349-C03-03 DGI-MCI (Spain). MJC acknowledges support from the P08-FQM-04267 from Junta de Andalucía (Spain). JAC acknowledges support from the 2009-SGR-345 from AGAUR-Generalitat de Catalunya. BP has been supported by the ANR project MANDy, Mathematical Analysis of Neuronal Dynamics, ANR-09-BLAN-0008-01. The three authors thank the CRM-Barcelona and Isaac Newton Institute where this work was started and completed respectively. The article processing charges have been paid by the projects MTM2008-06349-C03-03 DGI-MCI and P08-FQM-04267 from Junta de Andalucía (Spain).
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Cáceres, M.J., Carrillo, J.A. & Perthame, B. Analysis of nonlinear noisy integrate & fire neuron models: blow-up and steady states. J. Math. Neurosc. 1, 7 (2011). https://doi.org/10.1186/2190-8567-1-7
- Leaky integrate and fire models
- Relaxation to steady state
- Neural networks