Open Access

A Density Model for a Population of Theta Neurons

The Journal of Mathematical Neuroscience 2014, 4:2

DOI: 10.1186/2190-8567-4-2

Received: 15 January 2013

Accepted: 3 September 2013

Published: 17 April 2014

Abstract

Population density models used to describe the evolution of neural populations in a phase space are closely related to the underlying single-neuron model, which describes the individual trajectories of the neurons of the population and, in particular, determines the phase space where the computations are made. The so-called theta-neuron model is obtained by a transformation of the quadratic integrate-and-fire single-neuron model, and in this paper we introduce a corresponding population density model for it. Existence and uniqueness of a solution are proved, and some numerical simulations are presented. The existence results are compared to previous results of existence or nonexistence (burst) for populations of leaky integrate-and-fire neurons.

1 Introduction

It is a great challenge to find the most appropriate mathematical model to describe the electrical activity of populations of neurons: it should, in the first place, give a realistic view of the very complex brain activity and be able to describe the emergent phenomena that are observed in vivo but, at the same time, it should retain a certain simplicity that allows it to be solved analytically and implemented numerically.

Our attention has been drawn by the so-called population density approach, which has been successfully used to describe the evolution of physiologically structured populations in many areas of biology and, in particular, in neuroscience. A population density model tracks the evolution of a density function of the population in the state space determined by the structuring variable. In theoretical neuroscience, the concept of a probability density function has already been used extensively ([1–3]). A step forward, though, was made by applying this concept to model interactions of large populations of sparsely connected neurons ([4–7]). The connection between the probability-density approach and the population density approach is based on the observation that, for a large population of similar neurons, the probability density can be interpreted as a population density ([4, 6]). For a method to derive population density models, we refer to [6], where an illustrative example is given for the case of integrate-and-fire neurons. There, the effect of the synaptic connections is modeled as a jump in the state variable, the membrane potential in this case, whenever a neuron of the population receives a synaptic input. For more simulations of networks of integrate-and-fire neurons via population density models, we also refer to [8] and [9]. Another method can be found in [10], where a population density equation is derived for a population of SRM (spike-response model) neurons with escape noise. A well-posedness result for a population density model of leaky integrate-and-fire (LIF) neurons can be found in [11]. The approach has proved to be a useful tool in analyzing special behaviors of neural populations, such as the existence of equilibrium solutions ([12]) or the emergence of synchronization of neurons ([13–15]).

It is somewhat usual to apply the population density formalism to populations of integrate-and-fire neurons, due to the simplicity of the model and to the possibility of expressing the firing rate in terms of the population density function. We have chosen in this paper to consider a large homogeneous population of neurons characterized by the theta-neuron model ([16]). As is known, the theta-neuron model, or Ermentrout–Kopell model, is an alternative version of the quadratic integrate-and-fire (QIF) model, which is the simplest spiking neuron model. In contrast to the leaky integrate-and-fire model, the QIF model does have a spike generation mechanism, which makes it suitable for describing the internal state of a population of neurons through a density function. Nevertheless, the use of the equivalent theta-neuron model is preferable, since it is a continuous version of the QIF model and its state variable varies in a finite domain. We will come back to this subject in more detail in the first section of this paper.

We therefore use the population density formalism in this paper to derive a population density model for a population of theta neurons, and we prove the well-posedness of the model by a method similar to those used in [11] and [14] for populations of leaky integrate-and-fire neurons. The main difference between those cases and the one considered in this paper lies in the different expressions of the firing rates of the populations.

The paper is structured as follows: In the first section, the method used in [6] to obtain a population density model for integrate-and-fire neurons is adapted to the case of a homogeneous population of neurons characterized by the quadratic integrate-and-fire model. Based on the Ermentrout–Kopell transformation, the quadratic integrate-and-fire model can be written in an equivalent form in terms of a new variable called the phase of a neuron. We next introduce a population density model for the population of neurons structured by their phase instead of their membrane potential. We continue by proving the well-posedness of the model: in the non-connected case, i.e., when all the neurons of the population receive only an external stimulus, the result we prove is global. In the case of a connected population, we prove a global well-posedness result under an assumption that makes sense from a biological point of view. Without this assumption, the result is only local.

We end this paper by presenting some numerical simulations for the population density model that we introduced, which are compared to direct Monte Carlo simulations.

2 Quadratic Integrate-and-Fire Neurons: Population Density Approach

The quadratic integrate-and-fire model was introduced in [17] and consists of an ordinary differential equation that models the evolution in time of the membrane potential, together with a reset mechanism. We consider in this paper a model that describes the dynamics of a (QIF) neuron that receives external stimuli:
\[
\begin{cases}
\dfrac{d}{dt}v(t) = v^2(t) + I_b + h \displaystyle\sum_{j=1}^{+\infty} \delta(t - t_j), \\[6pt]
\text{if } v = +\infty, \text{ then } v = -\infty.
\end{cases}
\]
(1)

Here, \(v(t)\) represents the membrane potential at time t, the \(t_j\) are the arrival times of the external impulses, and the effect of the reception of a spike at the neuron's synapse is modeled as a jump of size h in the potential v. The jump is positive (respectively, negative) if the spike is received from an excitatory (respectively, inhibitory) source. Due to the quadratic term, v can reach infinity in finite time. The time when v reaches infinity is taken as the time when the neuron emits a spike, and the membrane potential is instantaneously reset to \(-\infty\). The parameter \(I_b\) plays a key role in the dynamics of the (QIF) model of the neuron's potential (see [18, 19], and [20] for details).
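As an illustration of this dynamics, the model (1) can be simulated directly. The following sketch (our own, not from the paper) uses an illustrative finite threshold `v_max` standing in for the blow-up to \(+\infty\), and hypothetical parameter values:

```python
import numpy as np

def simulate_qif(I_b, h, rate, T, dt=1e-4, v_max=100.0, rng=None):
    """Euler simulation of the QIF model (1): dv/dt = v^2 + I_b, plus jumps
    of size h at Poisson arrival times t_j. The blow-up to +infinity is
    approximated by the finite threshold v_max, with reset to -v_max."""
    rng = np.random.default_rng() if rng is None else rng
    v, t, spikes = -v_max, 0.0, []
    while t < T:
        v += dt * (v * v + I_b)          # quadratic drift
        if rng.random() < rate * dt:     # Poisson arrival in this time bin
            v += h                       # synaptic jump v -> v + h
        if v >= v_max:                   # stands in for v = +infinity
            spikes.append(t)             # spike time
            v = -v_max                   # reset, standing in for -infinity
        t += dt
    return np.array(spikes)

# With I_b > 0 the neuron fires tonically; excitatory kicks only accelerate it.
spikes = simulate_qif(I_b=1.0, h=0.5, rate=20.0, T=5.0,
                      rng=np.random.default_rng(0))
```

For \(I_b > 0\) and no input, the traversal time from \(-\infty\) to \(+\infty\) is finite (\(\pi/\sqrt{I_b}\) for the exact flow), which is why a finite integration window already contains spikes.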

Let us now introduce the population density function such that
\[
\int_{v_1}^{v_2} p(t,v)\,dv = \big\{\text{fraction of neurons whose membrane potential at time } t \text{ satisfies } v \in [v_1, v_2]\big\}.
\]
Then \(p(t,v)\) is the relative density of neurons whose membrane potential at time t equals v, and one has
\[
\int_{-\infty}^{+\infty} p(t,v)\,dv = 1.
\]

We follow below the same assumptions and derivation formalism as those used in [4, 6, 7] for the case of leaky integrate-and-fire neurons; these assumptions are briefly recalled below.

One of the main hypotheses used to obtain a population density model is that the population is homogeneous, i.e., all the neurons of the population have the same properties, and, in our case, are individually described by the model (1).

The spikes received by the neurons of the population, from either internal or external sources, are supposed to be uniformly distributed over the population; let \(\sigma(t)\) be the average spike arrival rate. Given a state v, the flux through the state v is composed of two parts: a drift flux due to the continuous evolution determined by the (QIF) model (1), and a flux due to synaptic connections among the neurons of the population. The latter is generated by the neurons with potential in \((v-h, v)\), which jump past the state v whenever an electric impulse is received. Thus, the total flux is defined as
\[
J(t,v) = (v^2 + I_b)\,p(t,v) + \sigma(t)\int_{v-h}^{v} p(t,w)\,dw.
\]
(2)
Therefore, the evolution in time of the density function p is given by
\[
\partial_t p(t,v) = -\,\partial_v J(t,v),
\]
(3)
which can be written equivalently as
\[
\partial_t p(t,v) + \underbrace{\partial_v\big((v^2 + I_b)\,p(t,v)\big)}_{\text{quadratic integrate-and-fire}} + \underbrace{\sigma(t)\big(p(t,v) - p(t,v-h)\big)}_{\text{excitation}} = 0.
\]
(4)
A periodic boundary condition for the flux is imposed next, which is consistent with the reset mechanism of the single neuron model (1):
\[
\lim_{v\to-\infty}(v^2 + I_b)\,p(t,v) = \lim_{v\to+\infty}(v^2 + I_b)\,p(t,v).
\]
(5)
Due to the boundary condition, one can easily check the conservation property of Eq. (4) by integration over \((-\infty, +\infty)\):
\[
\frac{d}{dt}\int_{-\infty}^{+\infty} p(t,v)\,dv = 0.
\]
(6)
Let \(r(t)\) be the firing rate of the population, that is, the flux through \(v = +\infty\), and let J be the average number of connections per neuron:
\[
r(t) = \lim_{v\to+\infty}(v^2 + I_b)\,p(t,v).
\]
(7)
Throughout this paper, the average spike arrival rate σ ( t ) is defined as the sum of a given external reception rate σ 0 ( t ) that models the impulses received from other populations of neurons, and a term that models the impulses received from the rest of the neurons in the same population. The second term can be considered in two ways: either we neglect the synaptic conduction delays within the population (Fig. 1), in which case σ is written as
\[
\sigma(t) = \sigma_0(t) + J\,r(t),
\]
or we take into account synaptic delays (Fig. 2) and write
\[
\sigma(t) = \sigma_0(t) + J\int_0^t \alpha(u)\,r(t-u)\,du,
\]
(8)
with
\[
\int_0^{\infty} \alpha(u)\,du = 1,
\]
(9)
where α is a delay density function.
Fig. 1

Scheme of a population under an external influence without conduction delay. The population receives a known external influence σ 0 ( t ) , and produces the activity r ( t ) . The feedback is instantaneous and given by J r ( t )

Fig. 2

Scheme of a population under an external influence with conduction delay. The population receives a known external influence σ 0 ( t ) from an excitatory population of neurons, and produces an activity r ( t ) —the firing rate of the population. The feedback is then given by J 0 t α ( u ) r ( t u ) d u
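The delayed feedback (8) is a convolution of the firing rate with the delay kernel α and can be discretized directly. A minimal sketch (function name, kernel, and parameter values are illustrative, not from the paper):

```python
import numpy as np

def sigma_with_delay(sigma0, r, alpha, J, dt):
    """Discretization of sigma(t) = sigma0(t) + J * int_0^t alpha(u) r(t-u) du
    on a uniform grid t_k = k*dt; sigma0, r and alpha are sampled arrays."""
    n = len(r)
    conv = np.array([np.sum(alpha[:k + 1] * r[k::-1]) * dt for k in range(n)])
    return sigma0 + J * conv

# Illustrative exponential delay kernel alpha(u) = exp(-u/tau)/tau (unit mass)
dt, tau, J = 0.001, 0.01, 0.8
t = np.arange(0.0, 0.2, dt)
alpha = np.exp(-t / tau) / tau
r = np.full_like(t, 5.0)            # constant firing rate, for illustration
sigma = sigma_with_delay(np.zeros_like(t), r, alpha, J, dt)
# For constant r, sigma(t) rises toward J*r as the kernel mass saturates.
```

The normalization (9) is what makes the delayed case reduce to the instantaneous one, \(\sigma = \sigma_0 + J r\), when the kernel concentrates at the origin.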

We can now give the model in its complete form:
\[
\begin{cases}
\partial_t p(t,v) + \partial_v\big((v^2+I_b)\,p(t,v)\big) = \sigma(t)\big(p(t,v-h) - p(t,v)\big), & (t,v)\in(0,+\infty)\times(-\infty,+\infty), \\
\sigma(t) = \sigma_0(t) + J\int_0^t \alpha(s)\,r(t-s)\,ds, & t\geq 0, \\
r(t) = \lim_{v\to+\infty}(v^2+I_b)\,p(t,v), & t\geq 0, \\
\lim_{v\to-\infty}(v^2+I_b)\,p(t,v) = \lim_{v\to+\infty}(v^2+I_b)\,p(t,v), & t\geq 0, \\
p(0,v) = p_0(v), & v\in(-\infty,+\infty).
\end{cases}
\]
(10)

In the model above, the case of instantaneous reception of the impulses is obtained by taking \(\alpha(t) = \delta(t)\), the Dirac mass at the origin.

Note that if the initial condition satisfies \(\int_{-\infty}^{+\infty} p_0(v)\,dv = 1\), then the solution to the nonlinear problem (10) also satisfies \(\int_{-\infty}^{+\infty} p(t,w)\,dw = 1\).

In our paper, \(\sigma_0\) stands for the rate of the Poisson spike train that each neuron receives from an external source, which is not explicitly modeled; the rate \(\sigma_0\) is therefore considered as given. The case of a probability density model where the Poisson spike train is approximated by the sum of a deterministic baseline and a white noise has been considered in [21]. In [22], the authors derived an explicit formula for the firing rate of a noisy quadratic integrate-and-fire neuron, with and without synaptic dynamics. This formula can be viewed as a second-order approximation of the firing rate of a neural network in which each neuron receives an independent Poisson spike train.

In [23], the authors study the firing rate of the noisy quadratic integrate-and-fire neuron receiving an oscillatory input. To this end, they used the so-called linear response theory. This theory is not really adapted to a neural network where each neuron receives an independent Poisson spike train, since the transfer function cannot then be computed explicitly.

3 A Population Density Model for Theta Neurons

We briefly recall the derivation of the theta-neuron (Ermentrout–Kopell) model. Let us consider a non-connected (QIF) neuron, i.e., one whose membrane potential is given by
\[
\begin{cases}
\dfrac{d}{dt}v(t) = v^2(t) + I_b, \\[6pt]
\text{if } v = +\infty, \text{ then } v = -\infty.
\end{cases}
\]
(11)
Then, by taking the transformation
\[
\theta = 2\arctan v + \pi,
\]
(12)
one can check directly, by the change of variable \(v = \tan\frac{\theta-\pi}{2}\) in the first equation of (11), that the evolution in time of the new variable θ, called the phase, is given by
\[
\frac{d}{dt}\theta(t) = (1 + \cos\theta) + (1 - \cos\theta)\,I_b.
\]
(13)
Obviously, the following correspondences take place:
\[
v \to +\infty \;\Rightarrow\; \theta \to 2\pi, \qquad v \to -\infty \;\Rightarrow\; \theta \to 0.
\]

This means that the reset mechanism in (11) is replaced in this model by the simple passage of the phase θ through the value 2π.
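The change of variables can be verified numerically: differentiating \(\theta = 2\arctan v + \pi\) along (11) gives \(d\theta/dt = 2(v^2 + I_b)/(1+v^2)\), which should coincide with the right-hand side of (13). A quick verification sketch (the value of \(I_b\) is illustrative):

```python
import numpy as np

# Check theta = 2*arctan(v) + pi on a grid: along (11),
# dtheta/dt = 2/(1+v^2) * dv/dt = 2*(v^2 + I_b)/(1+v^2),
# which must equal f(theta) = (1 + cos(theta)) + (1 - cos(theta))*I_b.
I_b = -0.3                                   # illustrative value
theta = np.linspace(1e-3, 2 * np.pi - 1e-3, 1000)
v = np.tan((theta - np.pi) / 2)              # inverse transformation
lhs = 2.0 * (v ** 2 + I_b) / (1.0 + v ** 2)  # chain rule applied to (11)
rhs = (1 + np.cos(theta)) + (1 - np.cos(theta)) * I_b
assert np.allclose(lhs, rhs)
```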

Now, if we consider a coupled neuron described by the model (1), then, corresponding to the jump in potential generated by an impulse arrival,
\[
v \mapsto v + h,
\]
(14)
we have a phase shift (see [24]), given by
\[
\theta \mapsto 2\arctan\Big(h + \tan\frac{\theta-\pi}{2}\Big) + \pi.
\]
Or, equivalently, if a neuron receiving an impulse has a jump in potential
\[
v - h \mapsto v,
\]
(15)
then, the phase θ changes correspondingly as
\[
s(\theta) := 2\arctan\Big(\tan\frac{\theta-\pi}{2} - h\Big) + \pi \;\mapsto\; \theta.
\]
(16)
By continuity, we extend the formula at θ = 0 by
\[
0 \mapsto 0,
\]
which means that a neuron receiving an impulse at the time of spike emission has no phase shift. The behavior of the function s with respect to the phase θ is illustrated in Fig. 3.
Fig. 3

Left: a graphical representation of the function \(s(\theta)\), given by (16), on \((0, 2\pi)\). Right: plot of the function \(\theta - s(\theta)\). In both figures, each curve corresponds to a different value of the potential jump h: blue curve, h = 3; black curve, h = 5; red curve, h = 10
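The two maps above can also be checked numerically: the pre-image map s of (16) should invert the forward phase shift (14), and for an excitatory impulse (h > 0) the pre-image \(s(\theta)\) lies behind θ. A small verification sketch (the value of h is illustrative):

```python
import numpy as np

def s(theta, h):
    """Pre-image map (16): a neuron at phase s(theta) that receives an
    impulse of size h lands at phase theta."""
    return 2 * np.arctan(np.tan((theta - np.pi) / 2) - h) + np.pi

def shift(theta, h):
    """Forward phase shift produced by an impulse: v -> v + h in (14)."""
    return 2 * np.arctan(h + np.tan((theta - np.pi) / 2)) + np.pi

h = 5.0                                            # illustrative jump size
theta = np.linspace(0.1, 2 * np.pi - 0.1, 500)
assert np.allclose(shift(s(theta, h), h), theta)   # s inverts the shift
assert np.all(s(theta, h) < theta)                 # pre-image lies behind theta
assert np.all((s(theta, h) > 0) & (s(theta, h) < 2 * np.pi))
```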

Then the evolution of the phase of a connected neuron is given by
\[
\frac{d}{dt}\theta(t) = (1 + \cos\theta) + (1 - \cos\theta)\,I_b + \Big(2\arctan\Big(h + \tan\frac{\theta-\pi}{2}\Big) + \pi - \theta\Big)\sum_{j=1}^{+\infty}\delta(t - t_j).
\]
(17)

Based on the transformation of the model (1) into the model (17), we intend to obtain a corresponding population density model for a population of neurons characterized by their phase θ. The advantages of doing so are obvious: first of all, through this transformation, the state space \(v \in (-\infty, +\infty)\) is transformed into the finite one \(\theta \in (0, 2\pi)\). Moreover, the reset mechanism, which creates a discontinuity in the state v, is replaced by a continuous flow through the state 2π, which influences the expression of the firing rate of the population, as seen below.

As before, if we denote by q ( t , θ ) the density of neurons having phase θ at time t, then
\[
\int_{\theta_1}^{\theta_2} q(t,\theta)\,d\theta = \big\{\text{fraction of neurons with phase } \theta \in [\theta_1, \theta_2] \text{ at time } t\big\},
\]
and one can assume once again that
\[
\int_0^{2\pi} q(t,\theta)\,d\theta = 1.
\]
As in the previous section, we assume the homogeneity of the population and the uniform distribution of the average reception rate over the neurons of the population. Similarly, we consider the flux through a state θ as formed of the drift flux due to the continuous evolution of the phase given by (13), and the flux determined by the phase shift generated by the arrival of synaptic impulses:
\[
J(t,\theta) = f(\theta)\,q(t,\theta) + \sigma(t)\int_{s(\theta)}^{\theta} q(t,y)\,dy,
\]
(18)
where
\[
f(\theta) = (1 + \cos\theta) + (1 - \cos\theta)\,I_b, \qquad s(\theta) = 2\arctan\Big(\tan\frac{\theta-\pi}{2} - h\Big) + \pi.
\]
(19)
Then, corresponding to Eq. (4), we obtain
\[
\partial_t q(t,\theta) + \underbrace{\partial_\theta\big(f(\theta)\,q(t,\theta)\big)}_{\text{theta neuron}} = \underbrace{\sigma(t)\big(s'(\theta)\,q(t,s(\theta)) - q(t,\theta)\big)}_{\text{excitation}},
\]
(20)

where the functions f and s are defined by (19).

Since the second term of the flux (18) does not affect the neurons at the firing state, the boundary condition becomes in this case:
\[
f(0)\,q(t,0) = f(2\pi)\,q(t,2\pi) \;\Longleftrightarrow\; q(t,0) = q(t,2\pi).
\]
The same argument is applied to obtain the expression of the firing rate, which was defined as the flux through the phase 2π:
\[
r(t) = f(2\pi)\,q(t,2\pi) = 2\,q(t,2\pi).
\]
(21)

We can now underline a few differences between the expression of the firing rate for a theta-neuron population and that for a population of leaky integrate-and-fire neurons. The first one has been stated above: whereas in the leaky integrate-and-fire case the firing rate takes into account only the "jumping" part of the flux, here we have the opposite situation, since only the drift flux contributes to the rate of neurons at the firing phase. Another major difference is that, in our model, the firing rate does not explicitly depend on the average reception rate σ, as is the case in leaky integrate-and-fire population density models ([6, 11]).

Using the boundary condition, and integrating (20) on the domain ( 0 , 2 π ) , one can easily check the conservation property of Eq. (20).

Therefore, the evolution in time of the density function q ( t , θ ) is described by the following system:
\[
\begin{cases}
\partial_t q(t,\theta) + \partial_\theta\big(f(\theta)\,q(t,\theta)\big) = \sigma(t)\big(s'(\theta)\,q(t,s(\theta)) - q(t,\theta)\big), & t>0,\ \theta\in(0,2\pi), \\
\sigma(t) = \sigma_0(t) + J\int_0^t \alpha(s)\,r(t-s)\,ds, & t\geq 0, \\
r(t) = 2\,q(t,2\pi), & t\geq 0, \\
q(t,0) = q(t,2\pi), & t\geq 0, \\
q(0,\theta) = q_0(\theta), & \theta\in[0,2\pi],
\end{cases}
\]
(22)

where, as before, taking α to be a given delay density yields the case with synaptic delays, whereas \(\alpha(t) = \delta(t)\) yields the case of instantaneous synaptic transmission.

The models (10) and (22) are obviously related through the following relation between the density functions p and q:
\[
p(t,v)\,dv = q(t,\theta)\,d\theta, \qquad \forall t \in (0, +\infty).
\]

4 Existence and Uniqueness of the Solution

In this section, we shall prove the existence and uniqueness of the solution to problem (22). This will be done first in the linear case, i.e., when σ ( t ) = σ 0 ( t ) (with σ 0 a given function), and later in the general nonlinear case.

4.1 The Linear Case

In this subsection, we are going to prove the global existence of a unique solution for the linear version of the model. Assuming that J is zero, which corresponds to the case where the neurons of the population are not connected but each of them receives an external input with rate \(\sigma_0\), the model reduces to the following problem:
\[
\begin{cases}
\partial_t q(t,\theta) + \partial_\theta\big(f(\theta)\,q(t,\theta)\big) = \sigma(t)\big(s'(\theta)\,q(t,s(\theta)) - q(t,\theta)\big), & t>0,\ \theta\in(0,2\pi), \\
q(t,0) = q(t,2\pi), & t>0, \\
q(0,\theta) = q_0(\theta), & \theta\in[0,2\pi],
\end{cases}
\]
(23)

where σ ( t ) = σ 0 ( t ) is a given continuous function. The main result of the subsection is stated below.

Theorem 1 Let \(\sigma \in C(0,+\infty)\) be a bounded function and let the initial condition \(q_0 \in C[0,2\pi]\) be periodic. Then there exists a unique positive solution \(q \in C([0,+\infty)\times[0,2\pi])\) to problem (23), periodic with respect to the second argument. Furthermore, the firing rate \(r(t)\) is bounded by an exponential: for some \(\lambda > 0\),
\[
r(t) \leq C e^{\lambda t}.
\]
Proof Let \(X_\lambda\), where \(\lambda > 0\) will be specified later, be defined by
\[
X_\lambda = \Big\{ p \in C\big([0,+\infty)\times[0,2\pi]\big) \,/\, p(t,0) = p(t,2\pi), \text{ and } e^{-\lambda t}\,\|p(t,\cdot)\|_{L^\infty[0,2\pi]} < \infty \Big\}.
\]
We endow X λ with the following norm:
\[
\|p\| = \operatorname*{ess\,sup}_{t\in[0,+\infty)} e^{-\lambda t}\,\|p(t,\cdot)\|_{L^\infty[0,2\pi]}.
\]
(24)
Let us introduce on X λ the mapping F defined by
\[
F : X_\lambda \to X_\lambda, \qquad m \mapsto q,
\]
(25)
where q is the solution to the problem
\[
\begin{cases}
\partial_t q(t,\theta) + \partial_\theta\big(f(\theta)\,q(t,\theta)\big) = \sigma_0(t)\big(s'(\theta)\,m(t,s(\theta)) - q(t,\theta)\big), & t>0,\ \theta\in(0,2\pi), \\
q(t,0) = q(t,2\pi), & t\geq 0, \\
q(0,\theta) = q_0(\theta), & \theta\in[0,2\pi].
\end{cases}
\]
(26)
In order to prove Theorem 1, we use Banach's fixed-point theorem for the map F. First of all, let us introduce more rigorously the notion of a solution to our system. We define a characteristic line as the solution to
\[
\dot\theta(t) = f(\theta(t)), \qquad \theta(0) = \theta_0,
\]
(27)
where
\[
f(\theta) = (1 + \cos\theta) + (1 - \cos\theta)\,I_b.
\]
Since f is Lipschitz continuous on \([0,2\pi]\), there exists a unique solution to problem (27); it gives the characteristic curve starting from the point \(\theta_0\) at \(t = 0\) and can be extended to every \(t > 0\) by periodicity, due to the periodicity of f. Actually, it will be more convenient to define the characteristics in the following equivalent way: for every fixed \(t \geq 0\) and every \(\theta \in [0,2\pi]\), there exists a single curve, denoted \(c[(0,\theta_0)](\cdot)\), such that
\[
\dot c[(0,\theta_0)](t) = f\big(c[(0,\theta_0)](t)\big), \qquad c[(0,\theta_0)](t) = \theta, \qquad c[(0,\theta_0)](0) = \theta_0.
\]
(28)

We have used here a different notation for a curve starting from a point \((0,\theta_0)\) in order to avoid confusion. Due to the properties of the function f, for any given point \((t,\theta)\) there is a unique initial point \((0,\theta_0)\).

The computations below cover all the cases \(I_b < 0\), \(I_b = 0\), and \(I_b > 0\). The way the characteristics behave in time differs in each case, but this does not affect the results stated below. We simply recall that for \(I_b < 0\), Eq. (27) has two equilibria: a stable attractor and an unstable equilibrium. In Fig. 4, we represent the evolution in time of the characteristics in the case \(I_b < 0\). For \(I_b = 0\), the equation has one equilibrium, a saddle node, while for \(I_b > 0\), (27) has no equilibrium.
Fig. 4

The evolution in time of the characteristic lines in the case I b < 0
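The equilibria just mentioned can be written in closed form: \(f(\theta) = 0\) gives \(\cos\theta^* = (1+I_b)/(I_b-1)\), which has real solutions only for \(I_b \leq 0\). A small sketch verifying the two equilibria and their stability via the sign of \(f'(\theta) = (I_b - 1)\sin\theta\) (the values of \(I_b\) are illustrative; the closed-form root is our own elementary computation):

```python
import numpy as np

def f(theta, I_b):
    return (1 + np.cos(theta)) + (1 - np.cos(theta)) * I_b

def equilibria(I_b):
    """Roots of f(theta) = 0: cos(theta*) = (1 + I_b)/(I_b - 1) (I_b != 1
    assumed); real solutions exist only when the ratio lies in [-1, 1]."""
    c = (1 + I_b) / (I_b - 1)
    if abs(c) > 1:
        return []                      # I_b > 0: no equilibrium (tonic regime)
    t = np.arccos(c)
    return [t, 2 * np.pi - t]          # stable node first, unstable second

for I_b in (-0.25, -1.0):
    stable, unstable = equilibria(I_b)
    assert abs(f(stable, I_b)) < 1e-12 and abs(f(unstable, I_b)) < 1e-12
    # f'(theta) = (I_b - 1) sin(theta): negative at the first root (stable),
    # positive at the second (unstable)
    assert (I_b - 1) * np.sin(stable) < 0 < (I_b - 1) * np.sin(unstable)
assert equilibria(0.5) == []           # I_b > 0: no equilibrium
```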

The main issue in defining a solution along these lines is to make sure that they do not cross, in order not to lose the diffeomorphic property. A simple computation gives
\[
\frac{\partial c[\theta_0](t)}{\partial \theta_0} = \exp\Big\{\int_0^t f'\big(c[\theta_0](\tau)\big)\,d\tau\Big\},
\]
(29)

therefore, for any finite t, \(\partial c[\theta_0](t)/\partial\theta_0\) is strictly positive, and characteristics starting from different points do not cross. Nevertheless, depending on the sign of \(f'\), the above derivative can go asymptotically to 0.

On the characteristic lines, we can rewrite (26) as an ordinary differential equation
\[
\frac{d}{dt}q\big(t, c[(0,\theta_0)](t)\big) = \sigma_0(t)\,s'\big(c[(0,\theta_0)](t)\big)\,m\big(t, s(c[(0,\theta_0)](t))\big) - \big(\sigma_0(t) + f'(c[(0,\theta_0)](t))\big)\,q\big(t, c[(0,\theta_0)](t)\big).
\]
Since the domain \([0,T]\times[0,2\pi]\) is covered by the above-defined characteristic lines, we have, for every \((t,\theta) \in [0,T]\times[0,2\pi]\),
\[
\begin{aligned}
q(t,\theta) = q\big(t, c[(0,\theta_0)](t)\big) = {}& e^{-\int_0^t (\sigma_0(s) + f'(c[(0,\theta_0)](s)))\,ds}\, q_0(\theta_0) \\
&+ \int_0^t e^{-\int_u^t (\sigma_0(s) + f'(c[(0,\theta_0)](s)))\,ds}\, \sigma_0(u)\, s'\big(c[(0,\theta_0)](u)\big)\, m\big(u, s(c[(0,\theta_0)](u))\big)\,du.
\end{aligned}
\]
(30)
By direct computation one gets that
\[
f'\big(c[(0,\theta_0)](t)\big) = (I_b - 1)\sin\big(c[(0,\theta_0)](t)\big), \qquad |f'(\theta)| \leq |I_b - 1|,
\]
(31)
and
\[
s'(\theta) = \frac{2}{\big[\tan((\theta-\pi)/2) - h\big]^2 + 1}\cdot\frac{1}{\cos^2((\theta-\pi)/2)}\cdot\frac{1}{2} = \frac{1}{\big[\sin((\theta-\pi)/2) - h\cos((\theta-\pi)/2)\big]^2 + \cos^2((\theta-\pi)/2)} \leq \beta.
\]
(32)
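The constant β in (32) can be made explicit: with \(\varphi = (\theta-\pi)/2\), the denominator equals \(1 + h^2/2 + (h^2/2)\cos 2\varphi - h\sin 2\varphi\), whose minimum over φ is \(1 + h^2/2 - h\sqrt{h^2/4 + 1} > 0\). A numerical check of the bound (the closed-form minimum is our own elementary computation, not taken from the paper; h is illustrative):

```python
import numpy as np

def s_prime(theta, h):
    """The derivative (32) of the phase-shift map s."""
    phi = (theta - np.pi) / 2
    return 1.0 / ((np.sin(phi) - h * np.cos(phi)) ** 2 + np.cos(phi) ** 2)

h = 5.0                                         # illustrative jump size
theta = np.linspace(0.0, 2 * np.pi, 200001)
beta_num = s_prime(theta, h).max()
# Minimum of the denominator 1 + h^2/2 + (h^2/2)cos(2phi) - h sin(2phi)
# over phi gives the explicit bound beta:
beta = 1.0 / (1 + h ** 2 / 2 - h * np.sqrt(h ** 2 / 4 + 1))
assert beta_num <= beta + 1e-9
assert np.isclose(beta_num, beta, rtol=1e-4)
```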
Let us denote in the following
\[
\underline{\sigma_0} = \inf_{t\geq 0}\sigma_0(t), \qquad \overline{\sigma_0} = \sup_{t\geq 0}\sigma_0(t).
\]
(33)
Thus, one has
\[
\Big|\exp\Big\{-\int_0^t \big[\sigma_0(s) + f'(c[(0,\theta_0)](s))\big]\,ds\Big\}\Big| \leq \exp\big\{t\big(|1 - I_b| - \underline{\sigma_0}\big)\big\} := \exp\{tM\}, \qquad t \geq 0.
\]
Let us prove the contraction property of the map F and take \(m_1, m_2 \in X_\lambda\); then
\[
\begin{aligned}
\big|F(m_1)(t, c(t)) - F(m_2)(t, c(t))\big| &\leq \int_0^t e^{(t-u)M}\,|\sigma_0(u)|\,\big|s'(c(u))\big|\,\big|m_1(u, s(c(u))) - m_2(u, s(c(u)))\big|\,du \\
&\leq \beta\,\overline{\sigma_0}\int_0^t e^{(t-u)M}\,\big|m_1(u, s(c(u))) - m_2(u, s(c(u)))\big|\,du,
\end{aligned}
\]
where \(c(t)\) stands for \(c[(0,\theta_0)](t)\).
Thus, multiplying the last inequality by \(e^{-\lambda t}\) and taking the ess sup with respect to t, one gets that
\[
\|F(m_1) - F(m_2)\| \leq \frac{\beta\,\overline{\sigma_0}}{\lambda - M}\big(1 - e^{-t(\lambda - M)}\big)\,\|m_1 - m_2\| \leq \frac{\beta\,\overline{\sigma_0}}{\lambda + \underline{\sigma_0} - |1 - I_b|}\,\|m_1 - m_2\|,
\]
which implies that, for \(\lambda > \beta\,\overline{\sigma_0} + |I_b - 1| - \underline{\sigma_0}\),
\[
\|F(m_1) - F(m_2)\| \leq K\,\|m_1 - m_2\|, \qquad K < 1,
\]

which ends the proof.

4.2 The Nonlinear Case

Let us now go back to the general model (22). Below, we prove the existence and uniqueness of a solution locally in time. Then, under an assumption on the number of connections per neuron and on the delay kernel, global-in-time existence is proved.

Theorem 2 Let \(\sigma_0\) and α be two functions in \(C(0,+\infty)\), and let the initial condition \(q_0 \in C[0,2\pi]\) be periodic. Then one can find \(T > 0\) such that there exists a unique positive solution \(q \in C([0,T]\times[0,2\pi])\) to the nonlinear problem (22), periodic with respect to the second argument.

In the following, the computations will be made in the space
\[
X = \big(C([0,T]\times[0,2\pi]),\ \|\cdot\|_\infty\big),
\]
where C stands for the functions that are continuous in time and continuous and periodic in phase. Let us define on X the map G by
\[
G : X \to X, \qquad m \mapsto q,
\]
where q is the solution to the problem
\[
\begin{cases}
\partial_t q(t,\theta) + \partial_\theta\big(f(\theta)\,q(t,\theta)\big) = \sigma(t)\big[s'(\theta)\,m(t,s(\theta)) - m(t,\theta)\big], \\
\sigma(t) = \sigma_0(t) + J f(2\pi)\int_0^t \alpha(t-s)\,m(s,2\pi)\,ds, \\
q(t,0) = q(t,2\pi), \\
q(0,\theta) = q_0(\theta).
\end{cases}
\]

The proof uses the standard Banach–Picard fixed-point theorem applied to the map G with respect to the usual norm on \(L^\infty([0,T]\times[0,2\pi])\). Below, \(\|\cdot\|\) denotes the norm in \(L^\infty\).

Let ( t , θ ) [ 0 , T ] × [ 0 , 2 π ] . As before, we define a characteristic c [ ( 0 , θ 0 ) ] ( t ) as a solution to (28) and write the problem along these curves as
\[
\frac{d}{dt}q\big(t, c[(0,\theta_0)](t)\big) = -f'\big(c[(0,\theta_0)](t)\big)\,q\big(t, c[(0,\theta_0)](t)\big) + \sigma(t)\big[s'\big(c[(0,\theta_0)](t)\big)\,m\big(t, s(c[(0,\theta_0)](t))\big) - m\big(t, c[(0,\theta_0)](t)\big)\big].
\]
(34)
For any fixed bounded functions m and σ, one can find a unique solution q by integrating (34)
\[
\begin{aligned}
q(t,\theta) = q\big(t, c[(0,\theta_0)](t)\big) = {}& e^{-\int_0^t f'(c[(0,\theta_0)](s))\,ds}\, q_0(\theta_0) \\
&+ \int_0^t e^{-\int_u^t f'(c[(0,\theta_0)](s))\,ds}\, \sigma(u)\big[s'\big(c[(0,\theta_0)](u)\big)\,m\big(u, s(c[(0,\theta_0)](u))\big) - m\big(u, c[(0,\theta_0)](u)\big)\big]\,du \\
:= {}& G(m).
\end{aligned}
\]
(35)
It remains therefore to show that the application
\[
m \mapsto G(m),
\]
with G defined by (35), and σ given by
\[
\sigma(t) = \sigma_0(t) + 2J\int_0^t \alpha(t-s)\,m(s,2\pi)\,ds,
\]
(36)

has a fixed point.

To prove the invariance of a ball in X, let R be a positive real number to be fixed later, and let \(m \in X\) with \(\|m\| \leq R\). Then, for every \(T > 0\):
\[
\|\sigma\| \leq \|\sigma_0\| + 2JT\|\alpha\|R.
\]
(37)
Choosing for now \(T \leq \frac{1}{2J\|\alpha\|R}\), the last relation yields:
\[
\|\sigma\| \leq \|\sigma_0\| + 1.
\]
First note that, defining M T as
\[
M_T = \operatorname*{ess\,sup}_{0\leq u\leq t\leq T}\exp\Big\{-\int_u^t f'\big(c[\theta_0](s)\big)\,ds\Big\},
\]
we get that
\[
M_T < \exp\big(T\,|1-I_b|\big).
\]
(38)
Next, taking the absolute value in (35), we obtain by using the relations (32) and (38):
\[
\|G(m)\| \leq \exp\big(T|1-I_b|\big)\Big(\|q_0\| + T\,(\|\sigma_0\|+1)\big[\beta R + R\big]\Big) \leq \exp\big(T|1-I_b|\big)\Big(\|q_0\| + \frac{1}{2J\|\alpha\|}(\|\sigma_0\|+1)(\beta+1)\Big),
\]

where we have also used the fact that \(T \leq \frac{1}{2J\|\alpha\|R}\).

Let us assume that the time interval is chosen smaller than a given value \(T_0\), and take
\[
T \leq \min\Big(T_0,\ \frac{1}{2J\|\alpha\|R}\Big),
\]
for R defined as
\[
R = \exp\big(T_0|1-I_b|\big)\Big(\|q_0\| + \frac{1}{2J\|\alpha\|}(\|\sigma_0\|+1)(\beta+1)\Big).
\]
Then
\[
T \leq \min\Big(T_0,\ \frac{1}{\exp(T_0|1-I_b|)\big\{2J\|\alpha\|\,\|q_0\| + (\|\sigma_0\|+1)(\beta+1)\big\}}\Big),
\]

which shows that the ball-invariance property holds locally in time.

Let us now turn to the contraction property and take \(m_1, m_2 \in X\). We denote by \(\sigma_1\) and \(\sigma_2\) the quantities defined by (36) corresponding to \(m_1\) and \(m_2\). Then
writing \(c(\tau)\) for \(c[(0,\theta_0)](\tau)\), we have
\[
\begin{aligned}
\big|(G(m_1) - G(m_2))(t, c(t))\big| \leq{}& e^{T_0|1-I_b|}\bigg\{\int_0^t \Big[|\sigma_1(\tau)|\,|s'(c(\tau))|\,\big|m_1(\tau, s(c(\tau))) - m_2(\tau, s(c(\tau)))\big| \\
&\qquad\qquad + |s'(c(\tau))|\,\big|m_2(\tau, s(c(\tau)))\big|\,|\sigma_1(\tau) - \sigma_2(\tau)|\Big]\,d\tau \\
&\; + \int_0^t \Big[|\sigma_1(\tau)|\,\big|m_1(\tau, c(\tau)) - m_2(\tau, c(\tau))\big| + \big|m_2(\tau, c(\tau))\big|\,|\sigma_1(\tau) - \sigma_2(\tau)|\Big]\,d\tau\bigg\}.
\end{aligned}
\]
Using the fact that the solutions are elements of X, the bound for σ given by (37), and, again, the relations (32) and (38), we obtain:
\[
\big\|G(m_1) - G(m_2)\big\| \leq e^{T_0|1-I_b|}\,T\Big\{(\beta+1)(\|\sigma_0\|+1)\,\|m_1-m_2\| + (\beta+1)\,2RJT\|\alpha\|\,\|m_1-m_2\|\Big\} \leq T\,e^{T_0|1-I_b|}\,(\beta+1)(\|\sigma_0\|+2)\,\|m_1-m_2\|,
\]
where we have also used \(2JTR\|\alpha\| < 1\). Choosing now T such that
\[
T < \min\Big\{T_0,\ \frac{1}{\exp(T_0|1-I_b|)(\beta+1)(\|\sigma_0\|+2)},\ \frac{1}{\exp(T_0|1-I_b|)\big\{2J\|\alpha\|\,\|q_0\| + (\|\sigma_0\|+1)(\beta+1)\big\}}\Big\},
\]

one gets the conclusion on the interval [ 0 , T ] .

Theorem 3 Assume the same hypotheses as in Theorem 2, and assume furthermore that \(2J\|\alpha\| < 1\). Then there exists a unique solution to problem (22) that is global in time.

In order to prove the global result, we reiterate the above procedure on a sequence of intervals \([T_i, T_{i+1}]\), \(i = 1, \ldots, n-1\), and we denote the value T found above by \(T_1\). The lengths of the intervals will be denoted by \(\{l_i\}_{i=1}^{n}\). We also use, for convenience, the notations \(\gamma = |1-I_b|\), \(k_1 = 2J\|\alpha\|\), and \(k_2 = (\|\sigma_0\|+1)(\beta+1)\). Using these notations, we have obtained that there exists a unique solution in \(B(0, R_1)\) with
\[
R_1 := \exp(\gamma l_1)\Big(\|q_0\| + \frac{k_2}{k_1}\Big)
\]
on the interval [ 0 , T 1 ] , where T 1 is chosen such that
\[
l_1 < \min\Big\{T_0,\ \frac{1}{\exp(\gamma l_1)(\beta+1)(\|\sigma_0\|+2)},\ \frac{1}{\exp(\gamma l_1)\{k_1\|q_0\| + k_2\}}\Big\}.
\]
(39)
Let us now consider the problem on the next interval, \([T_1, T_2]\), with the initial condition \(q_{01}(\theta) = q(T_1, \theta)\). We concentrate on the third term in (39), since it explicitly depends on the initial condition. Considering again the same map G, given by (35), and following the same computations, we obtain that
\[
\|G(m)\| \leq \exp\big(l_2|1-I_b|\big)\Big(\|q_{01}\| + \frac{1}{2J\|\alpha\|}(\|\sigma_0\|+1)(\beta+1)\Big).
\]
Since
\[
\|q_{01}\| \leq R_1,
\]
we get that
\[
\|G(m)\| \leq \exp\big(\gamma(l_1+l_2)\big)\Big(\|q_0\| + \frac{2k_2}{k_1}\Big),
\]
and T 2 is chosen such that
\[
l_2 \leq \frac{1}{\exp\{\gamma(l_1+l_2)\}\big[k_1^2\|q_0\| + k_1 k_2 + k_2\big]}.
\]
By induction, it follows that, for the n th interval, the following relations should hold:
\[
R_n = \exp\Big(\gamma\sum_{i=1}^{n} l_i\Big)\Big(\|q_0\| + n\,\frac{k_2}{k_1}\Big)
\]
and
\[
l_n \leq \frac{1}{\exp\{\gamma\sum_{i=1}^{n} l_i\}\Big[k_1^n\|q_0\| + k_2\sum_{i=0}^{n-1} k_1^i\Big]}.
\]
(40)
In order to get this, we shall choose the time intervals such that
\[
l_n = \frac{c}{n},
\]

with c a positive constant to be specified later. By doing so, we obtain the result on the interval \([0, T_n]\) of length \(\sum_{i=1}^{n} c/i\), and since the harmonic series diverges, letting \(n \to \infty\) yields the existence and uniqueness of the solution on \([0, \infty)\). It remains to show that inequality (40) holds.

Using
\[
\sum_{i=1}^{n}\frac{1}{i} < 1 + \log n,
\]
it follows that
\[
\exp\Big\{\gamma\sum_{i=1}^{n} l_i\Big\} \leq \exp\{c\gamma\}\,n^{c\gamma}.
\]
Also, for \(k_1 < 1\),
\[
k_1^n < 1, \qquad \sum_{i=0}^{n-1} k_1^i = \frac{1 - k_1^n}{1 - k_1}.
\]
(41)
Then we can bound
\[
\frac{1}{\exp\{\gamma\sum_{i=1}^{n} l_i\}\big[k_1^n\|q_0\| + k_2\sum_{i=0}^{n-1} k_1^i\big]} \geq \frac{1}{e^{c\gamma}\,n^{c\gamma}\big[\|q_0\| + k_2/(1-k_1)\big]},
\]
and choosing c such that \(c\gamma < 1\), it follows that, for some \(N_0\),
\[
\frac{1}{e^{c\gamma}\,n^{c\gamma}\big[\|q_0\| + k_2/(1-k_1)\big]} \geq \frac{c}{n}, \qquad \text{for } n \geq N_0,
\]

which completes our proof.

We end with a remark regarding a special case of the nonlinear problem, for which the global result obtained in the linear case holds. Suppose that we consider the case of a delayed average reception rate, i.e.,
\[
\sigma(t) = \sigma_0(t) + 2J\int_0^t \alpha(t-s)\,q(s,2\pi)\,ds,
\]
and assume that the delay function α vanishes in a neighborhood \([0, \tau]\) of the origin. When integrating along the characteristics on the interval \([0, \tau]\), the solution of our problem is given by the solution of the linear problem considered in the first subsection. Next, reiterating the procedure on the intervals \([k\tau, (k+1)\tau]\), \(k = 1, 2, \ldots\), and keeping in mind that
k τ ( k + 1 ) τ α ( ( k + 1 ) τ s ) q ( s , 2 π ) d s = 0 τ α ( s ) q ( ( k + 1 ) τ s , 2 π ) d s = 0

and that $q$ has already been computed on the interval $[0,(k-1)\tau]$, one obtains a global solution for this special case, given by the solution of the linear problem.
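As a quick sanity check of the change of variables above, the following sketch evaluates both integrals by a midpoint rule, with arbitrary test functions of our choosing standing in for $\alpha$ and $q(\cdot,2\pi)$; both vanish because $(k+1)\tau - s$ ranges over $[0,\tau]$, where $\alpha$ is zero.

```python
import numpy as np

# Quadrature check of the substitution u = (k+1)*tau - s: when the delay
# kernel alpha vanishes on [0, tau], the delayed-input integral over
# [k*tau, (k+1)*tau] is zero. alpha and q2pi are arbitrary test functions
# (our choice, for illustration only).

tau, k, n = 0.5, 3, 4000

def alpha(u):
    # exponential kernel, identically zero on [0, tau]
    return np.where(u >= tau, np.exp(-(u - tau)), 0.0)

def q2pi(s):
    # stand-in for the boundary value q(s, 2*pi)
    return 1.0 + 0.3 * np.sin(2.0 * s)

edges = np.linspace(k * tau, (k + 1) * tau, n + 1)
s_mid = 0.5 * (edges[:-1] + edges[1:])        # midpoints of [k*tau,(k+1)*tau]
dx = tau / n

lhs = np.sum(alpha((k + 1) * tau - s_mid) * q2pi(s_mid)) * dx

u_mid = (k + 1) * tau - s_mid                 # the change of variables
rhs = np.sum(alpha(u_mid) * q2pi((k + 1) * tau - u_mid)) * dx
```

Here every midpoint maps strictly inside $(0,\tau)$, where $\alpha$ vanishes, so both sums are exactly zero.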

5 Numerical Results

In this section, we shall present some numerical simulations of our model, obtained via a finite-difference scheme. In order to validate the numerical results, we compare the simulations of our model with those obtained via a Monte Carlo method applied to the theta-neuron model. To solve (22) numerically, we write the first equation of the system in conservative form,
$$\partial_t q(t,\theta) + \partial_\theta J(t,\theta) = 0,$$
with the flux $J(t,\theta)$ given by
$$J(t,\theta) = f(\theta)\, q(t,\theta) + \sigma(t)\int_{s(\theta)}^{\theta} q(t,\tilde\theta)\, d\tilde\theta := F(t,\theta) + I(t,\theta),$$

where I ( t , θ ) stands for the integral part of the flux and F ( t , θ ) for the drift part of the flux.

Denoting by $\Delta t$ the time step and by $\Delta\theta$ the phase step, we define
$$\theta_k = k\Delta\theta, \quad t_n = n\Delta t, \quad q_k^{n} = q(t_n,\theta_k), \quad F_{k+1/2}^{n} = F(t_n,\theta_{k+1/2}), \quad I_{k+1/2}^{n} = I(t_n,\theta_{k+1/2}), \quad J_{k+1/2}^{n} = J(t_n,\theta_{k+1/2}).$$
For the discretization of (22), we use a first-order, explicit-in-time scheme given by
$$q_k^{n+1} = q_k^{n} - \frac{\Delta t}{\Delta\theta}\bigl(J_{k+1/2}^{n} - J_{k-1/2}^{n}\bigr) = q_k^{n} - \frac{\Delta t}{\Delta\theta}\bigl(F_{k+1/2}^{n} - F_{k-1/2}^{n}\bigr) - \frac{\Delta t}{\Delta\theta}\bigl(I_{k+1/2}^{n} - I_{k-1/2}^{n}\bigr).$$

The drift numerical flux $F_{k+1/2}^{n}$ was reconstructed using the upwind method (see [25] for details of the upwind numerical reconstruction), and the integral part $I_{k+1/2}^{n}$ was approximated using a first-order reconstruction.
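A minimal Python sketch of one step of this scheme is given below. It is not the authors' code: the drift $f(\theta) = (1-\cos\theta) + (1+\cos\theta) I_b$ and the lower integration bound $s(\theta) = \pi + 2\arctan\bigl(\tan((\theta-\pi)/2) - h\bigr)$ (the phase whose potential $V = \tan((\theta-\pi)/2)$ reaches $\theta$ after a jump of size $h$) are our reading of model (22), and the boundary cells are left untouched for simplicity.

```python
import numpy as np

# A minimal sketch (not the authors' code) of one step of the explicit scheme
#   q_k^{n+1} = q_k^n - (dt/dtheta) * (J^n_{k+1/2} - J^n_{k-1/2}),
# with J = F + I: F the upwind drift flux f(theta)*q and I the integral flux
# sigma(t) * int_{s(theta)}^{theta} q. Assumed forms (our reading of (22)):
# f(theta) = (1 - cos th) + (1 + cos th) * I_b, and
# s(theta) = pi + 2*arctan(tan((theta - pi)/2) - h).

def step(q, th, dt, sigma, I_b=1.0, h=5.0):
    """Advance the density q (values at the nodes th) by one time step."""
    dth = th[1] - th[0]
    f = (1.0 - np.cos(th)) + (1.0 + np.cos(th)) * I_b      # drift speed
    # Upwind reconstruction of the drift flux at the interfaces k+1/2.
    F = np.where(f[:-1] > 0.0, f[:-1] * q[:-1], f[1:] * q[1:])
    # Primitive Q(theta) = int_0^theta q (trapezoid), so that the integral
    # flux is I(theta) = sigma * (Q(theta) - Q(s(theta))).
    Q = np.concatenate(([0.0], np.cumsum(0.5 * (q[:-1] + q[1:])) * dth))
    th_half = 0.5 * (th[:-1] + th[1:])
    s = np.pi + 2.0 * np.arctan(np.tan((th_half - np.pi) / 2.0) - h)
    Iflux = sigma * (np.interp(th_half, th, Q) - np.interp(s, th, Q))
    J = F + Iflux
    qn = q.copy()
    qn[1:-1] -= dt / dth * (J[1:] - J[:-1])                # interior update
    return qn

# one demonstration step on a truncated-Gaussian initial density
th = np.linspace(0.0, 2.0 * np.pi, 401)
q0 = np.exp(-10.0 * (th - 1.0) ** 2)
q0 /= np.sum(q0) * (th[1] - th[0])                         # unit mass
q1 = step(q0, th, dt=1e-4, sigma=20.0)
```

In an actual run, the time step must satisfy a CFL-type restriction $\Delta t \le \Delta\theta/\max|f|$, and the boundary treatment (re-injection at $\theta = 0$ of the flux firing through $2\pi$) must be added.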

The simulations of model (22) presented in Fig. 5 show the evolution in time of the phase distribution of the neural population. The blue curve in the plots corresponds to the Monte Carlo simulation and the black curve to the finite-difference discretization of (22). The first plot (upper left of Fig. 5) shows the initial distribution $q_0$, a truncated Gaussian. Under the influence of the external impulses $\sigma_0(t)$, the jump process present in model (22) takes place. The density appears to reach an equilibrium, shown in the last plot of Fig. 5; we have not yet proved the existence of a steady state theoretically, and this is the subject of our future research. In Fig. 6, we show the evolution in time of the firing rate $r(t)$ of the population under a constant external influence, and in Fig. 7 the firing rate under an oscillatory external influence. Again, it can be noticed that under a constant influence the firing rate appears to converge toward a steady state.
Fig. 5

Simulations of model (22). The figures give the distribution of the population at different instants in time: $t = 0$, $t = 0.1$, $t = 0.5$, $t = 0.6$. For each instant, the two plots represent the same density $q(t,\theta)$ obtained by two different methods: the Monte Carlo method (blue curve) and the finite-difference scheme for model (22) (black curve). The initial Gaussian distribution $q_0$ is shown in the upper left plot. The simulations were obtained for a constant external influence $\sigma_0 = 20$, potential jump size $h = 5$, coupling parameter $J = 3$, and basic current $I_b = 1$. The last plot shows the distribution at the final time $t = 3$, which evidences convergence toward an equilibrium distribution

Fig. 6

Simulation of the evolution in time of the firing rate $r(t)$ for a constant external influence $\sigma_0 = 20$; the potential jump size is $h = 5$, the coupling parameter $J = 3$, and the basic current $I_b = 1$. The initial condition was taken as a Gaussian distribution

Fig. 7

Simulations of the model in the case when the initial condition is a Gaussian. The figures represent the evolution in time of the firing rate $r(t)$ for a non-constant external influence $\sigma_0(t) = I_0 + I_1\sin(\omega t)$. In the first plot $I_0 = 10$, $I_1 = 10$, $\omega = 2$; in the second plot $I_0 = 10$, $I_1 = 10$, $\omega = 5$; and in the third plot $I_0 = 10$, $I_1 = 10$, $\omega = 10$. The potential jump size is $h = 5$, the coupling parameter $J = 3$, and the basic current $I_b = 1$
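The Monte Carlo side of the comparison can be sketched as follows. This is our reading of the simulation, not the authors' code: each of $M$ independent theta neurons evolves by an Euler step of the theta-model drift, receives Poisson input spikes at rate $\sigma_0$, and each input displaces the potential $V = \tan((\theta-\pi)/2)$ by $h$, i.e. the phase jump $\theta \mapsto \pi + 2\arctan\bigl(\tan((\theta-\pi)/2) + h\bigr)$. The parameter values mirror the figures ($\sigma_0 = 20$, $h = 5$, $I_b = 1$); the coupling term $J$ is omitted here.

```python
import numpy as np

# Sketch of a Monte Carlo simulation of M independent theta neurons with
# Poisson inputs (rate sigma0); an input shifts V = tan((theta - pi)/2) by h,
# i.e. theta -> pi + 2*arctan(tan((theta - pi)/2) + h). Jump map, Euler update
# and parameters are our assumptions, mirroring the figures.

rng = np.random.default_rng(0)

def simulate(M=2000, T=1.0, dt=1e-3, sigma0=20.0, h=5.0, I_b=1.0):
    """Return the final phases and the per-neuron firing rate over [0, T]."""
    theta = np.mod(rng.normal(1.0, 0.3, size=M), 2.0 * np.pi)  # Gaussian start
    spikes = 0
    for _ in range(int(T / dt)):
        # intrinsic theta-model drift
        theta += dt * ((1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I_b)
        # firing: drifting through 2*pi, re-entering at 0
        fired = theta >= 2.0 * np.pi
        spikes += int(fired.sum())
        theta[fired] -= 2.0 * np.pi
        # Poisson inputs: each neuron is kicked with probability sigma0*dt
        kick = rng.random(M) < sigma0 * dt
        theta[kick] = np.pi + 2.0 * np.arctan(
            np.tan((theta[kick] - np.pi) / 2.0) + h)
    return theta, spikes / (M * T)

theta_final, rate = simulate()
```

A histogram of `theta_final` then plays the role of the blue curves in Fig. 5, to be compared with the density produced by the finite-difference scheme.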

6 Discussion and Conclusion

Single-neuron models such as the LIF or QIF models have a weak electrophysiological basis, but thanks to their simplicity they are quite useful for simulating the behavior of populations of neurons. The population density approach, which leads to partial differential equations, is suited to very large populations of neurons; we think that mathematical studies of the qualitative behavior of population density models may guide the choice of the particular single-neuron model used to describe the internal state of the neurons of the population, and give insight into the results. In particular, the possibility of a burst of the firing rate, corresponding to a synchronization of the neurons, as opposed to regular activity, is of interest to neuroscientists.

We have highlighted a qualitative difference between the population density approach applied to a population of theta-neurons and the same approach applied to populations of LIF neurons. In [11], it was proved that, in the case $J < 1$, a global solution exists for the LIF population density equation with no delay and the firing rate remains bounded. On the other hand, in [14] it was shown that for $J$ and $\sigma_0$ large enough, there is a burst in finite time for any initial condition: the firing rate goes to infinity.

In the present study of populations of theta-neurons, we consider only the case with conduction delay. The condition for existence of a global solution (no burst) involves the product of the number of connections $J$ and the maximum of the delay distribution $\alpha$. To satisfy this condition for large $J$, the delay kernel $\alpha$ should spread over the time interval so as to decrease its maximum. It is therefore possible to exhibit populations with the same $J$ that burst in finite time with the LIF model but have regular behavior on an infinite horizon with the QIF model, for a suitable delay distribution.

As is known, the formal threshold imposed in the LIF model is defined as the value at which an action potential is initiated, and the firing rate of the corresponding population density models is defined as the flux passing through this threshold. In our case, the neurons of the population are supposed to transmit the electrical signal at the peak value, instead of the value at which the initiation of a spike occurs, which is actually the root $\theta_+$ of the model (19). Therefore, the firing rate in our model depends only on the drift flux through the phase $2\pi$. This fact allowed us to obtain a global well-posedness result in the general case of the model, but it does not allow us to use the same argument as in [14] to study the conditions for bursting. Furthermore, in all the simulations that we have performed, the synchronization phenomenon has not been observed for a population of theta-neurons.

Acknowledgements

The second and third authors were members of the project LEA Math-Mode Projet Franco-Roumain. The first author has been financially supported by Conseil Régional d’Aquitaine.

Authors’ Affiliations

(1)
Institut de Mathématiques de Bordeaux, Université Bordeaux 1
(2)
Physics Department, University of Ottawa
(3)
INRIA Bordeaux Sud Ouest
(4)
Department of Sciences, Al. I. Cuza University

References

  1. Wilbur W, Rinzel J: A theoretical basis for large coefficient of variation and bimodality in neuronal interspike interval distributions. J Theor Biol 1983, 105(2):345–368. doi:10.1016/S0022-5193(83)80013-7
  2. Kuramoto Y: Collective synchronization of pulse-coupled oscillators and excitable units. Physica D, Nonlinear Phenom 1991, 50:15–30. doi:10.1016/0167-2789(91)90075-K
  3. Abbott LF, van Vreeswijk C: Asynchronous states in networks of pulse-coupled oscillators. Phys Rev E 1993, 48:1483–1490.
  4. Knight B, Manin D, Sirovich L: Dynamical models of interacting neuron populations in visual cortex. Robot Cybern 1996, 54:4–8.
  5. Knight BW: Dynamics of encoding in neuron populations: some general mathematical features. Neural Comput 2000, 12:473–518. doi:10.1162/089976600300015673
  6. Omurtag A, Knight B, Sirovich L: On the simulation of large populations of neurons. J Comput Neurosci 2000, 8:51–63. doi:10.1023/A:1008964915724
  7. Nykamp DQ, Tranchina D: A population density approach that facilitates large-scale modeling of neural networks: analysis and an application to orientation tuning. J Comput Neurosci 2000, 8:19–50. doi:10.1023/A:1008912914816
  8. Apfaltrer F, Ly C, Tranchina D: Population density methods for stochastic neurons with realistic synaptic kinetics: firing rate dynamics and fast computational methods. Netw Comput Neural Syst 2006, 17:373–418. doi:10.1080/09548980601069787
  9. Cai D, Tao L, Rangan A, McLaughlin D: Kinetic theory for neuronal network dynamics. Commun Math Sci 2006, 4:97–127. doi:10.4310/CMS.2006.v4.n1.a4
  10. Gerstner W, Kistler W: Spiking Neuron Models. Cambridge University Press, Cambridge; 2002.
  11. Dumont G, Henry J: Population density models of integrate-and-fire neurons with jumps: well-posedness. J Math Biol 2012.
  12. Sirovich L, Omurtag A, Knight B: Dynamics of neuronal populations: the equilibrium solution. SIAM J Appl Math 2000, 60:2009–2028.
  13. Sirovich L, Omurtag A, Lubliner K: Dynamics of neural populations: stability and synchrony. Netw Comput Neural Syst 2006, 17:3–29. doi:10.1080/09548980500421154
  14. Dumont G, Henry J: Synchronization of an excitatory integrate-and-fire neural network. Bull Math Biol 2013, 75(4):629–648. doi:10.1007/s11538-013-9823-8
  15. Garenne A, Henry J, Tarniceriu O: Analysis of synchronization in a neural population by a population density approach. Math Model Nat Phenom 2010, 15:5–25.
  16. Ermentrout GB, Kopell N: Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J Appl Math 1986, 46:233–253. doi:10.1137/0146017
  17. Latham P, Richmond B, Nelson P, Nirenberg S: Intrinsic dynamics in neuronal networks. I. Theory. J Neurophysiol 2000, 83:808–827.
  18. Izhikevich EM: Dynamical Systems in Neuroscience. MIT Press, Cambridge; 2007.
  19. Ermentrout B: Ermentrout–Kopell canonical model. Scholarpedia 2008. doi:10.4249/scholarpedia.1398
  20. Eftimie R, de Vries G, Lewis MA: Weakly nonlinear analysis of a hyperbolic model for animal group formation. J Math Biol 2009, 59:37–74. doi:10.1007/s00285-008-0209-8
  21. Fourcaud N, Brunel N: Dynamics of the firing probability of noisy integrate-and-fire neurons. Neural Comput 2002, 14:2057–2110. doi:10.1162/089976602320264015
  22. Brunel N, Latham P: Firing rate of noisy quadratic integrate-and-fire neurons. Neural Comput 2003, 15:2281–2306. doi:10.1162/089976603322362365
  23. Fourcaud-Trocmé N, Hansel D, van Vreeswijk C, Brunel N: How spike generation mechanisms determine the neuronal response to fluctuating inputs. J Neurosci 2003, 23(37):11628–11640.
  24. McKennoch S, Voegtlin T, Bushnell L: Spike-timing error backpropagation in theta neuron networks. Neural Comput 2009, 21:9–45. doi:10.1162/neco.2009.09-07-610
  25. LeVeque RJ: Numerical Methods for Conservation Laws. Birkhäuser, Basel; 1992.

Copyright

© Dumont et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.