- Open Access
Multiscale analysis of slow-fast neuronal learning models with noise
© M. Galtier, G. Wainrib; licensee Springer 2012
- Received: 19 April 2012
- Accepted: 26 October 2012
- Published: 22 November 2012
This paper deals with the application of temporal averaging methods to recurrent networks of noisy neurons undergoing a slow and unsupervised modification of their connectivity matrix called learning. Three time-scales arise for these models: (i) the fast neuronal dynamics, (ii) the intermediate external input to the system, and (iii) the slow learning mechanisms. Based on this time-scale separation, we apply an extension of the mathematical theory of stochastic averaging with periodic forcing in order to derive a reduced deterministic model for the connectivity dynamics. We focus on a class of models where the activity is linear to understand the specificity of several learning rules (Hebbian, trace or anti-symmetric learning). In a weakly connected regime, we study the equilibrium connectivity which gathers the entire ‘knowledge’ of the network about the inputs. We develop an asymptotic method to approximate this equilibrium. We show that the symmetric part of the connectivity post-learning encodes the correlation structure of the inputs, whereas the anti-symmetric part corresponds to the cross correlation between the inputs and their time derivative. Moreover, the time-scales ratio appears as an important parameter revealing temporal correlations.
- slow-fast systems
- stochastic differential equations
- inhomogeneous Markov process
- model reduction
- recurrent networks
- unsupervised learning
- Hebbian learning
Complex systems are made of a large number of interacting elements leading to non-trivial behaviors. They arise in various areas of research such as biology, social sciences, physics or communication networks. In particular in neuroscience, the nervous system is composed of billions of interconnected neurons interacting with their environment. Two specific features of this class of complex systems are that (i) external inputs and (ii) internal sources of random fluctuations influence their dynamics. Their theoretical understanding is a great challenge and involves high-dimensional non-linear mathematical models integrating non-autonomous and stochastic perturbations.
Modeling these systems gives rise to many different scales both in space and in time. In particular, learning processes in the brain involve three time-scales: from neuronal activity (fast) and external stimulation (intermediate) to synaptic plasticity (slow). Here, the fast time-scale corresponds to a few milliseconds and the slow time-scale to minutes or hours, while the intermediate time-scale generally ranges between the two, although some stimuli may be faster than the neuronal activity time-scale (e.g., submillisecond auditory signals). The separation of these time-scales is an important and useful property in their study. Indeed, multiscale methods appear particularly relevant to handle and simplify such complex systems.
First, the stochastic averaging principle [2, 3] is a powerful tool to analyze the impact of noise on slow-fast dynamical systems. This method relies on approximating the fast dynamics by its quasi-stationary measure and averaging the slow evolution with respect to this measure. In the asymptotic regime of perfect time-scale separation, this leads to a slow reduced system whose analysis enables a better understanding of the original stochastic model, as illustrated by the sketch below.
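To make the mechanism concrete, here is a minimal numerical sketch on a toy slow-fast SDE; the model and all parameter values are illustrative choices of ours, not the paper's. The fast variable is an Ornstein-Uhlenbeck process, so its quasi-stationary law is Gaussian and the averaged slow drift is available in closed form.

```python
import numpy as np

# Toy slow-fast SDE (illustrative, not the paper's model):
#   dv = -(1/eps) (v - w) dt + (sigma/sqrt(eps)) dB    (fast OU process)
#   dw = (v**2 - w) dt                                 (slow variable)
# For frozen w, the fast variable has quasi-stationary law N(w, sigma^2/2),
# so averaging the slow drift yields the reduced ODE
#   dw/dt = w**2 + sigma**2/2 - w.
rng = np.random.default_rng(0)
eps, sigma, T, dt = 1e-3, 0.5, 5.0, 1e-4

v, w = 0.0, 0.0      # slow-fast system (Euler-Maruyama)
w_avg = 0.0          # averaged ODE (explicit Euler)
for _ in range(int(T / dt)):
    dB = rng.normal(0.0, np.sqrt(dt))
    v += -(v - w) / eps * dt + sigma / np.sqrt(eps) * dB
    w += (v**2 - w) * dt
    w_avg += (w_avg**2 + sigma**2 / 2 - w_avg) * dt

print(f"w_eps(T) = {w:.3f}   averaged w(T) = {w_avg:.3f}")  # close for small eps
```

Note that the averaged drift, and hence the deterministic limit, depends explicitly on the noise strength sigma; this is the feature emphasized in Section 2.2 below.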
Second, periodic averaging theory, originally developed for celestial mechanics, is particularly relevant to study the effect of fast deterministic and periodic perturbations (external input) on dynamical systems. This method also leads to a reduced model where the external perturbation is time-averaged.
where v represents the fast activity of the individual elements, W represents the connectivity weights that vary slowly due to plasticity, and u(t) represents the value of the external input at time t. Random perturbations are included in the form of a diffusion term, and B(t) is a standard Brownian motion.
where the averaged vector field is constructed as an average of G with respect to a specific probability measure, as explained in Section 2.
This paper first introduces the appropriate mathematical framework and then focuses on applying these multiscale methods to learning neural networks.
The individual elements of these networks are neurons or populations of neurons. A common assumption at the basis of mathematical neuroscience is to model their behavior by a stochastic differential equation which is made of four different contributions: (i) an intrinsic dynamics term, (ii) a communication term, (iii) a term for the external input, and (iv) a stochastic term for the intrinsic variability. Assuming that their activity is represented by the fast variable v, the first equation of system (1) is a generic representation of a neural network (the function F corresponds to the first three terms contributing to the dynamics). In the literature, the level of non-linearity of the function F ranges from a linear (or almost-linear) system to spiking neuron dynamics, yet the structure of the system is universal.
These neurons are interconnected through a connectivity matrix which represents the strength of the synapses connecting them. The slow modification of the connectivity between the neurons is commonly thought to be the essence of learning. Unsupervised learning rules update the connectivity exclusively based on the value of the activity variable. Therefore, this mechanism is represented by the slow equation above, where W is the connectivity matrix and G is the learning rule. Probably the most famous of these rules is the Hebbian learning rule introduced by Hebb. It says that if neurons A and B are active at the same time, then the synapses from A to B and from B to A should be strengthened proportionally to the product of the activities of A and B. There are many variations of this correlation-based principle, which can be found in [10, 11]. Another recent, unsupervised, biologically motivated learning rule is spike-timing-dependent plasticity (STDP). It is similar to Hebbian learning except that it focuses on causation instead of correlation and that it occurs on a faster time-scale. Both of these types of rule correspond to G being quadratic in v.
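As a concrete illustration, here is a minimal sketch of a quadratic, correlation-based update of this type; it anticipates the Hebbian rule with linear decay of Section 3.2, and the time step and decay constant are illustrative.

```python
import numpy as np

def hebbian_step(W: np.ndarray, v: np.ndarray, kappa: float, dt: float) -> np.ndarray:
    """One Euler step of the slow equation dW/dt = G(v, W) = -kappa*W + v v^T:
    synapses between co-active neurons are strengthened proportionally to the
    product of their activities, while all weights decay linearly."""
    return W + dt * (-kappa * W + np.outer(v, v))
```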
The literature on learning in dynamic networks is extensive, yet we take a significantly different approach to the problem. A historical focus was the understanding of feedforward deterministic networks [13–15]. Another approach consisted in precomputing the connectivity of a recurrent network according to the principles underlying the Hebbian rule. Most current research in the field is focused on STDP and is based on the precise times of the spikes, making them explicit in computations [17–20]. Our approach differs from the others regarding at least one of the following points: (i) we consider recurrent networks, (ii) we study the evolution of the coupled system activity/connectivity, and (iii) we consider bounded dynamical systems for the activity without requiring them to be spiking. Besides, our approach is a rigorous mathematical analysis in a field where most results rely heavily on heuristic arguments and numerical simulations. To our knowledge, this is the first time such models expressed in a slow-fast SDE formalism are analyzed using temporal averaging principles.
The purpose of this application is to understand what the network learns from exposure to time-dependent inputs. In other words, we are interested in the evolution of the connectivity variable, which evolves on a slow time-scale, under the influence of the external input and some noise added on the fast variable. More precisely, we intend to compute explicitly the equilibrium connectivities of such systems. This final matrix corresponds to the knowledge the network has extracted from the inputs. Although the derivation of the results is mathematically involved, we have tried to extract widely understandable conclusions from our mathematical results, and we believe this paper brings novel elements to the debate about the role and mechanisms of learning in large-scale networks.
Although the averaging method is a generic principle, we have made significant assumptions to keep the analysis of the averaged system mathematically tractable. In particular, we will assume that the activity evolves according to a linear stochastic differential equation. This is not very realistic when modeling individual neurons, but it is more reasonable when modeling populations of neurons; see Chapter 11 of .
The paper is organized as follows. Section 2 is devoted to introducing the temporal averaging theory, with Theorem 2.2 as the main result of this section; it provides the technical tool to tackle learning neural networks. Section 3 applies the mathematical tools developed in the previous section to models of learning neural networks. A generic model is described, and three particular models of increasing complexity are analyzed: first Hebbian learning, then trace learning, and finally STDP learning, all with linear activities. Finally, Section 4 discusses the consequences of the previous results from the viewpoint of their biological interpretation.
In this section, we present multiscale theoretical results concerning stochastic averaging of periodically forced SDEs (Section 2.3). These results combine ideas from singular perturbations, classical periodic averaging and stochastic averaging principles. Therefore, we recall briefly, in Sections 2.1 and 2.2, several basic features of these principles, providing several examples that are closely related to the application developed in Section 3.
2.1 Periodic averaging principle
We present here an example of a slow-fast ordinary differential equation perturbed by a fast external periodic input. We have chosen this example since it readily illustrates many ideas that will be developed in the following sections. In particular, this example shows how the ratio between the time-scale separation of the system and the time-scale of the input appears as a new crucial parameter.
In this system, one can distinguish various asymptotic regimes when ϵ and δ are small, according to the asymptotic value of μ = ϵ/δ:
Regime 1: Slow input (μ → 0):
Regime 2: Fast input (μ → ∞):
and in this limit, one does not recover the same asymptotic behavior as in Regime 1.
Regime 3: Time-scales matching (μ fixed in (0, ∞)):
the two limits ϵ → 0 and δ → 0 do not commute, and
the ratio μ between the internal time-scale separation and the input time-scale is a key parameter in the study of slow-fast systems subject to a time-dependent perturbation.
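The following deterministic toy computation makes this dependence on μ concrete; the equations, the low-pass gain, and all parameter values are illustrative assumptions of ours, not the example's exact formulas.

```python
import numpy as np

# Illustrative slow-fast ODE with fast periodic forcing:
#   eps * dv/dt = -v + sin(t/delta),   dw/dt = v**2 - w.
# The fast variable filters the input with gain 1/sqrt(1 + mu^2), mu = eps/delta,
# so the averaged equilibrium of the slow variable is w* = 1 / (2 (1 + mu^2)).
eps, T = 1e-3, 5.0
for mu in (0.1, 1.0, 10.0):
    delta = eps / mu
    dt = min(eps, delta) / 20.0
    v, w, t = 0.0, 0.0, 0.0
    while t < T:
        v += dt / eps * (-v + np.sin(t / delta))
        w += dt * (v**2 - w)
        t += dt
    print(f"mu = {mu:5.1f}: w(T) = {w:.4f}, predicted w* = {1/(2*(1+mu**2)):.4f}")
# mu -> 0 recovers the slow-input value 1/2; mu -> infinity averages the input out.
```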
2.2 Stochastic averaging principle
with initial conditions v₀ and w₀, where w is called the slow variable and v the fast variable, with F, G, Σ smooth functions ensuring the existence and uniqueness of the solution, and B a p-dimensional standard Brownian motion defined on a filtered probability space. Time-scale separation is encoded in the small parameter ϵ, which denotes in this section a single positive real number.
and w the solution of the associated averaged equation with the same initial condition. Under some dissipativity assumptions, the stochastic averaging principle states:
As a consequence, analyzing the behavior of the deterministic solution w can help to understand useful features of the original stochastic process.
Interestingly, the asymptotic behavior of the slow variable for small ϵ is characterized by a deterministic trajectory that depends on the strength σ of the noise applied to the system. Thus, the stochastic averaging principle appears particularly interesting for unraveling the impact of noise strength on slow-fast systems.
Many other results have been developed since, extending the set-up to the case where the slow variable has a diffusion component or to infinite-dimensional settings, for instance, and also refining the convergence study, providing homogenization results concerning the fluctuations around the limit or establishing large deviation principles (see  for a recent monograph). However, fewer results are available in the case of non-homogeneous SDEs, that is, when the system is perturbed by an external time-dependent signal. This setting is of particular interest in the framework of stochastic learning models, and we present the main relevant mathematical results in the following section.
2.3 Double averaging principle
with u a τ-periodic function. The parameter ϵ represents the internal time-scale separation and δ the input time-scale. We consider the case where both ϵ and δ are small, that is, a strong time-scale separation between the fast variable v and the slow one w, and a fast periodic modulation of the fast drift F.
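Schematically, and using the notation of the text, the setup can be written as follows (a sketch with the drift and diffusion scalings standard in stochastic averaging):

```latex
\begin{aligned}
  dv_t &= \frac{1}{\epsilon}\, F\bigl(v_t,\, w_t,\, u(t/\delta)\bigr)\, dt
        + \frac{1}{\sqrt{\epsilon}}\, \Sigma(v_t, w_t)\, dB_t ,\\
  dw_t &= G(v_t,\, w_t)\, dt ,
\end{aligned}
\qquad \mu = \epsilon/\delta .
```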
We further denote z = (v, w).
Accordingly, we consider the distinguished limit ϵ → 0, δ → 0 with the ratio μ = ϵ/δ held fixed.
The following assumption is made to ensure the existence and uniqueness of a strong solution to system (4). In the following, ⟨·,·⟩ will denote the usual scalar product for vectors.
- (i) The functions F, G, and Σ are locally Lipschitz continuous in the space variable z. More precisely, for any R > 0, there exists a constant such that
- (ii) There exists a constant such that
To control the asymptotic behavior of the fast variable, one further assumes the following.
- (i) The diffusion matrix Σ is bounded
- (ii) There exists a positive constant such that for all states and all times,
According to the value of μ, the stochastic averaging principle is based on a description of the asymptotic behavior of various rescaled fast frozen processes. More precisely, under Assumptions 2.1 and 2.2, one can deduce the following:
For any fixed value of the slow variable and any fixed time, the law of the rescaled time-homogeneous frozen process,
converges exponentially fast to a unique invariant probability measure.
For any fixed w, there exists a periodic evolution system of measures ν, different from the above, such that the law of the rescaled time-inhomogeneous frozen process (6)
converges exponentially fast towards ν, uniformly (cf. Theorem A.1 in the Appendix).
For any fixed w, the law of the rescaled time-homogeneous frozen process,
where the input is replaced by its average over one period, converges exponentially fast towards a unique invariant probability measure.
According to the value of μ, we introduce an averaged vector field which will play a role similar to the one introduced in equation (2).
Notation We may denote the periodic system of measures associated with (6) with F and Σ as superscripts, to emphasize its relationship with them; the averaged vector field may be denoted accordingly.
We are now able to present our main mathematical result. Extending Theorem 2.1, the following theorem describes the asymptotic behavior of the slow variable when ϵ → 0 with μ fixed. We refer to  for more details about the full mathematical proof of this result.
The extremal cases μ = 0 and μ = ∞ are not covered in full rigor by Theorem 2.2. However, the study of the sequential limits, ϵ → 0 followed by δ → 0 or δ → 0 followed by ϵ → 0, can be deduced from an appropriate combination of classical periodic and stochastic averaging theorems:
Slow input: If one considers the case where the limit ϵ → 0 is taken first, then from Theorem 2.1, with fast variable v and slow variables w and t (with the trivial equation ṫ = 1), the slow variable is close in probability on finite time-intervals to the solution of the following inhomogeneous ordinary differential equation:
Fast input: If the limit δ → 0 is taken first, one first has to perform a classical averaging of the periodic drift, leading to the homogeneous system of SDEs (4), but with the time-averaged input instead of u. Then, an application of Theorem 2.1 to this system gives an averaged vector field
To study the extremal cases μ = 0 and μ = ∞ in full generality, one would need to consider all the possible relationships between ϵ and δ, not only the linear one as in the present article, but also of the type δ = ϵ^α, for example. In this case, we believe that the regime α > 1 converges to the same limit as taking the limit δ → 0 first, and the regime α < 1 corresponds to taking the limit ϵ → 0 first. The intermediate regime α = 1 seems to be the only one for which the limit cannot be obtained by combining classical averaging principles. Therefore, the present article is focused on this case, in which the averaged system depends explicitly on the scaling parameter μ. Moreover, in terms of applications, this parameter has a relatively easy interpretation as the ratio of time-scales between intrinsic neuronal activity and typical stimulus time-scales in a given situation. Although the zeroth-order limit (i.e., the averaged system) seems to depend only on the position of α with respect to 1, it seems reasonable to expect that the fluctuations around the limit would depend on the precise value of α. This is a difficult question which may deserve further analysis.
- 3. By a rescaling of the frozen process (6), one deduces the following scaling relationships:
It seems reasonable to expect that the above result is still valid when considering ergodic, but not necessarily periodic, time dependency of the input. In equation (7), instead of integrating over one period, one should integrate with respect to an ergodic stationary measure. However, this extension requires non-trivial technical improvements which are beyond the scope of this paper.
2.3.1 Case of a fast linear SDE with periodic input
with initial condition v₀, where A is a matrix whose eigenvalues have positive real parts and u is a τ-periodic function.
whose stationary distribution is known to be a centered Gaussian measure with covariance matrix Q, the solution of (9); see Chapter 3.2 of . Notice that if A is self-adjoint with respect to ΣΣᵀ (i.e., AΣΣᵀ = ΣΣᵀAᵀ), then the solution is Q = ½ A⁻¹ΣΣᵀ, which will be used in Appendix B.2.
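Numerically, this covariance can be computed with a standard Lyapunov solver; the sketch below uses illustrative matrices and checks the closed form just stated.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stationary covariance of the linear SDE dv = -A v dt + Sigma dB, where the
# eigenvalues of A have positive real parts: Q solves A Q + Q A^T = Sigma Sigma^T.
A = np.array([[2.0, -0.5],
              [0.3,  1.5]])
Sigma = 0.4 * np.eye(2)
S = Sigma @ Sigma.T
Q = solve_continuous_lyapunov(A, S)        # solves A X + X A^T = S

# Closed form when A is self-adjoint w.r.t. Sigma Sigma^T (here: A symmetric,
# Sigma proportional to the identity): Q = (1/2) A^{-1} Sigma Sigma^T.
A_sym = np.array([[2.0, 0.5],
                  [0.5, 1.5]])
Q_sym = solve_continuous_lyapunov(A_sym, S)
print(np.allclose(Q_sym, 0.5 * np.linalg.inv(A_sym) @ S))   # True
```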
where the density is the probability density function of the Gaussian law with time-periodic mean and covariance Q.
Therefore, due to the linearity of the fast SDE, the periodic system of measures ν is just a constant Gaussian distribution shifted by a periodic function of time. In case G is quadratic in v, this remark implies that one can perform independently the integral over time and over v in formula (10) (noting that the crossed term has a zero average). In this case, contributions from the periodic input and from noise appear in the averaged vector field in an additive way.
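For a quadratic rule of the form G(v) = v ⊗ v, this remark amounts to the following computation (a sketch: under the periodic measure, v is Gaussian with periodic mean v̄(t) and constant covariance Q, and the cross term vanishes because v − v̄(t) is centered):

```latex
\mathbb{E}_{\nu_t}\!\left[v\,v^{T}\right] = \bar{v}(t)\,\bar{v}(t)^{T} + Q,
\qquad
\frac{1}{\tau}\int_{0}^{\tau}\mathbb{E}_{\nu_t}\!\left[v\,v^{T}\right]dt
 = \underbrace{\frac{1}{\tau}\int_{0}^{\tau}\bar{v}(t)\,\bar{v}(t)^{T}\,dt}_{\text{input contribution}}
 \;+\; \underbrace{Q}_{\text{noise contribution}} .
```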
As in Example 2.1 and as shown above, the behavior of this system when both ϵ and δ are small depends on the parameter μ defined in (5). More precisely, we have the following three regimes:
Regime 1: slow input (μ → 0):
Regime 2: fast input (μ → ∞):
Regime 3: time-scale matching (μ fixed in (0, ∞)):
2.4 Truncation and asymptotic well-posedness
In some cases, Assumptions 2.1-2.2 may not be satisfied on the entire phase space, but only on a subset. Such situations will appear in Section 3 when considering learning models. We introduce here a more refined set of assumptions ensuring that Theorem 2.2 still applies.
with , , , .
For the fast drift to be non-explosive, it is necessary that the slow variable remain below l for all time. The concern about this system comes from the fact that the slow variable w may reach l due to the fluctuations captured in the noise term, for instance, if κ is not large enough. Such a system may have exponentially growing trajectories. However, we claim that for small enough ϵ, the slow variable will remain close to its averaged limit w for a very long time, and if this limit remains below l, then the system can be considered as well-posed in the asymptotic limit ϵ → 0. To make this argument more rigorous, we suggest the following definition.
a unique solution exists until a random time
- 2. for all T > 0,
We give in the following proposition sufficient conditions for system (4) to be asymptotically well posed in probability and to satisfy the conclusions of Theorem 2.2.
Let us introduce the following set of additional assumptions.
- (i) There exists a constant such that
- (ii) For any T > 0 and any bounded subset K of the state space,
Remark 2.2 This last set of assumptions will be satisfied in all the applications of Section 3 since we consider linear models with additive noise for the equation of v, implying that this variable is Gaussian, and the function G only involves quadratic moments of v; therefore, the moment conditions (i) and (ii) will be satisfied without any difficulty. Moreover, if one considers non-linear models for the variable v, then the Gaussian property may be lost; however, adding a sigmoidal non-linearity has, in general, the effect of bounding the dynamics, thus making these moment assumptions reasonable to check in most models of interest.
The functions F, G, Σ satisfy Assumptions 2.1-2.3 restricted to ℰ.
ℰ is invariant under the flow of the averaged vector field, as defined in (7).
Then for any initial condition in ℰ, system (4) is asymptotically well posed in probability and satisfies the conclusion of Theorem 2.2.
Proof See Appendix A.2. □
that is, the subset is invariant under the flow of the averaged system.
and that the lower equilibrium is stable whereas the upper one is unstable. Thus, if the initial condition lies below the unstable equilibrium, then the solution remains below it for all time. In fact, the invariance property holds for any such initial condition.
In this section, we apply the temporal averaging methods derived in Section 2 to models of unsupervised learning neural networks. First, we design a generic learning model and show that one can formally define an averaged system with equation (7). However, going beyond the mere definition of the averaged system seems very difficult, and we only manage to get explicit results for simple systems where the fast activity dynamics is linear. In the last three subsections, we push the analysis for three examples of increasing complexity.
In the following, we always consider that the initial connectivity is 0. This is an arbitrary choice, but it has no consequences, because we focus on the regime where there is a single globally stable equilibrium point (see Section 3.2.3).
3.1 A generic learning neural network
We now introduce a large class of stochastic neuronal networks with learning models. They are defined as coupled systems describing the simultaneous evolution of the activity of neurons and the connectivity between them. We define v ∈ ℝⁿ, the activity field of the network, and W ∈ ℝⁿˣⁿ, the connectivity matrix.
where the function fᵢ characterizes the intrinsic non-linear dynamical behavior of neuron i and uᵢ is the input received by neuron i. The stochastic term is added to account for internal sources of noise. In terms of notations, B is a standard n-dimensional Brownian motion, Σ is an n×n matrix, possibly a function of v or other variables, and (Σ dB(t))ᵢ denotes the i-th component of the vector Σ dB(t).
where ℋ is a function taking the history of the activity and the connectivity and returning a real number for each time t (to take convolutions into account). In practical cases, these functions are often taken to be sigmoidal. We abusively redefine them as vector-valued operators corresponding to the element-wise application of their real counterparts, and we define the function ℱ gathering the deterministic terms of the fast equation. Together with a slow generic learning rule, this leads to defining a stochastic learning model as the following system of SDEs.
Before applying the general theory of Section 2, let us make several comments about this generic model of neural network with learning. This model is a non-autonomous, stochastic, non-linear slow-fast system.
In order to apply Theorem 2.2, one needs Assumptions 2.1, 2.2, and 2.3 to be satisfied, restricting the space of possible functions ℋ, ℱ, Σ, and G. In particular, Assumption 2.2(ii) seems rather restrictive since it excludes systems with multiple equilibria and suggests that the general theory is better suited to deal with rate-based networks. However, one should keep in mind that these assumptions are only sufficient, and that the double averaging principle may work as well in systems which do not readily satisfy them.
As we will show in Section 3.3, a particular form of history-dependence can be taken into account, to a certain extent. Indeed, if the function ℱ is actually a functional of the past trajectory of the activity which can be expressed as the solution of an additional SDE, then it may be possible to include a certain form of history-dependence. However, purely time-delayed systems do not enter the scope of this theory, although it might be possible to derive an analogous averaging method in this framework.
The noise term can be purely additive or set by a particular function as long as it satisfies Assumption 2.2(i), meaning that it must be uniformly non-degenerate.
In the following subsection, we apply the averaging theory to various combinations of neuronal network models, embodied by choices of the functions ℋ, ℱ, Σ, and various learning rules, embodied by a choice of the function G. We will also analyze the obtained averaged system, describing the slow dynamics of the connectivity matrix in the limit of perfect time-scale separation and, in particular, study the convergence of this averaged system to an equilibrium point.
3.2 Symmetric Hebbian learning
One of the simplest, yet non-trivial, stochastic learning models is obtained when considering
A linear model for neuronal activity, namely an intrinsic linear decay fᵢ(v) = −l v with l a positive constant.
A linear model for the synaptic transmission, namely identity activation and instantaneous transmission.
A constant diffusion matrix Σ (additive noise) proportional to the identity (spatially uncorrelated noise).
A Hebbian learning rule with linear decay, namely G(v, W) = −κW + v ⊗ v, where the Hebbian term corresponds to the tensor product: (v ⊗ v)ᵢⱼ = vᵢvⱼ.
where neurons are assumed to have the same decay constant l; u is a periodic continuous input (it plays the role of the fast periodic forcing of the previous section); Σ = σ Id with σ > 0, and B is an n-dimensional Brownian motion.
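Here is a minimal Euler-Maruyama sketch of system (12) as just described; the input signal, the discretization, and all parameter values are illustrative assumptions.

```python
import numpy as np

# System (12), simulated directly:
#   dv = (1/eps) ((W - l*Id) v + u(t/delta)) dt + (sigma/sqrt(eps)) dB
#   dW = (-kappa W + v v^T) dt
rng = np.random.default_rng(1)
n, l, kappa, sigma = 3, 1.0, 1.0, 0.05
eps, mu = 1e-3, 1.0
delta = eps / mu
T, dt = 2.0, eps / 50.0

def u(s):
    # toy periodic input rotating between two orthogonal patterns
    return np.array([np.cos(s), np.sin(s), 0.0])

Id = np.eye(n)
v, W, t = np.zeros(n), np.zeros((n, n)), 0.0   # initial connectivity is 0
while t < T:
    dB = rng.normal(0.0, np.sqrt(dt), size=n)
    v = v + dt / eps * ((W - l * Id) @ v + u(t / delta)) + sigma / np.sqrt(eps) * dB
    W = W + dt * (-kappa * W + np.outer(v, v))
    t += dt

print(np.round(W, 3))   # symmetric; mixes input correlations and a noise term
```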
The first question that arises is about the well-posedness of the system: What is the definition interval of the solutions of system (12)? Do they explode in finite time? At first sight, it seems there may be a runaway of the solution if the largest real part among the eigenvalues of W grows bigger than l. In fact, it turns out this scenario can be avoided if the following assumption linking the parameters of the system is satisfied.
It corresponds to making sure the external (i.e., u) or internal (i.e., σ) excitations are not too large compared to the decay mechanisms (represented by κ and l). Note that if the other parameters are fixed, it is sufficient to increase κ or l for this assumption to be satisfied.
This set is invariant under the flow of the averaged system (13), where ⪰ 0 means positive semi-definite and ≻ 0 means positive definite. Therefore, the averaged system is defined and bounded on this set. The slow/fast system being asymptotically close to the averaged system, it is therefore asymptotically well-defined in probability. This is summarized in the following theorem.
where v̄ is the periodic attractor of the frozen fast equation, the connectivity W being supposed fixed.
Proof See Theorem B.1 in Appendix B.2. □
In the following, we focus on the averaged system described by (13). Its right-hand side is made of three terms: a linear and homogeneous decay, a correlation term, and a noise term. The last two terms are made explicit in the following.
3.2.1 Noise term
As seen in Section 2, in the linear case, the noise term Q is the unique solution of the Lyapunov equation (9) with A = l Id − W and Σ = σ Id. Because the noise is spatially uncorrelated and identical for each neuron, and also because the connectivity is symmetric, observe that Q = (σ²/2)(l Id − W)⁻¹ is the unique solution of the system.
In more complicated cases, the computation of this term appears to be much more difficult as we will see in Section 3.4.
3.2.2 Correlation term
This term corresponds to the auto-correlation of neuronal activity. It is only implicitly defined; thus, this section is devoted to finding an explicit form depending only on the parameters l, μ, τ, the connectivity W, and the inputs u. Actually, one can perform an expansion of this term with respect to a small parameter, corresponding to a weakly connected expansion. Most terms vanish if the connectivity W is small compared to the strength l of the intrinsic decaying dynamics of the neurons.
With this notation, it is simple to think of v as a ‘semi-continuous matrix’ with n rows and a continuous time index for columns. Hence, the operator ‘⋅’ can be thought of as a matrix multiplication. Similarly, the transpose operator turns such an n-row semi-continuous matrix into one with n columns. See Appendix B.1 for details about the notations.
It is common knowledge, see  for instance, that this term gathers information about the correlation of the inputs. Indeed, if we assume that the input is sufficiently slow, then the activity has enough time to converge to its quasi-static equilibrium (l Id − W)⁻¹ u(t) for all t. Therefore, at first order, the correlation term is the correlation matrix of these equilibrium values. In the weakly connected regime, one can assume that W ≃ 0, leading to the auto-correlation of the inputs (scaled by 1/l²).
Actually, without the assumption of a slow input, lagged correlations of the input appear in the averaged system. Before giving the expression of these temporal correlations, we need to introduce some notations. First, define the convolution filter gμ(t) = (l/μ) e^(−(l/μ)t) H(t), where H is the Heaviside function. This family of functions is displayed for different values of μ in Figure 4(a). Note that gμ → δ₀ when μ → 0, where δ₀ is the Dirac distribution centered at the origin. In this asymptotic regime, the convolution filter and its iterates are equal to the identity.
Observe that ‖gμ‖_{L¹} = 1. Therefore, the iterated convolutions of gμ are well defined. Thanks to Young's inequality for convolutions, which says that ‖f ∗ g‖₁ ≤ ‖f‖₁‖g‖₁, it can be proved that the iterated filters also have L¹ norm at most 1.
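The following numerical check assumes the exponential form of the filter sketched above (the grid, the toy signal, and the parameter values are ours): each filter has unit L¹ norm, and convolution with it approaches the identity as μ → 0, consistently with the Dirac limit.

```python
import numpy as np

l, dt = 1.0, 1e-3
t = np.arange(0.0, 20.0, dt)
u = np.sin(t)                                  # toy input signal

for mu in (0.01, 0.5, 2.0):
    g = (l / mu) * np.exp(-(l / mu) * t)       # filter supported on t >= 0
    filtered = np.convolve(u, g)[: len(t)] * dt
    print(f"mu = {mu:4.2f}: ||g||_1 = {np.sum(g) * dt:.3f}, "
          f"max |g*u - u| = {np.max(np.abs(filtered - u)):.3f}")
```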
We intend to express the correlation term as an infinite converging sum involving these filtered correlations. In this perspective, we use a result we proved in  to write the solution of a general class of non-autonomous linear systems as an infinite sum, in the weakly connected case.
Proof See Lemma B.2 in Appendix B.2. □
This is a decomposition of the solution of a linear differential system on a basis of operators where the spatial and temporal parts are decoupled. This important step in a detailed study of the averaged equation cannot be achieved easily in models with non-linear activity. Everything is now set up to introduce the explicit expansion of the correlation term that we use in what follows. Indeed, we use the previous result to rewrite the correlation term as follows.
Proof See Theorem B.3 in Appendix B.2. □
This infinite sum of convolved filters is reminiscent of a property of Hawkes processes, which have a linear input-output gain.
The speed of the inputs, characterized by μ, only appears in the temporal profiles. In particular, if the inputs are much slower than the neuronal activity time-scale, i.e., μ → 0, then the filters converge to Dirac distributions and the lagged correlations reduce to instantaneous ones. Therefore, the sums in the formula of Property 3.3 become separable, which corresponds to the heuristic result previously explained.
3.2.3 Global stability of the equilibrium point
Now that we have found an explicit formulation for the averaged system, it is natural to study its dynamics. Actually, we prove in the following that if the connectivity W is kept small enough, i.e., Assumption 3.1 is verified, then the dynamics is trivial: the system converges to a single equilibrium point. Indeed, under the previous assumption, the right-hand side of the system defines a contraction operator on the invariant set. Therefore, one can prove the uniqueness of the fixed point with the Banach fixed-point argument and exhibit an energy function for the system; see the sketch below for the corresponding fixed-point iteration.
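The sketch below illustrates this fixed-point argument numerically, combining the slow-input approximation of the correlation term (Section 3.2.2) with the noise term of Section 3.2.1; the resulting equilibrium equation and all constants are illustrative assumptions rather than the exact averaged system.

```python
import numpy as np

# Fixed-point iteration for the equilibrium connectivity, under the
# slow-input / weakly connected approximation: at equilibrium,
#   kappa W = R C R + (sigma^2/2) R,   with R = (l*Id - W)^{-1}
# and C the correlation matrix of the inputs.
n, l, kappa, sigma = 3, 1.0, 10.0, 0.1
ts = np.linspace(0.0, 2 * np.pi, 500, endpoint=False)
U = np.stack([np.cos(ts), np.sin(ts), np.zeros_like(ts)])   # inputs, shape (n, time)
C = U @ U.T / len(ts)                                       # input correlation

Id = np.eye(n)
W = np.zeros((n, n))
for _ in range(100):
    R = np.linalg.inv(l * Id - W)
    W_new = (R @ C @ R + 0.5 * sigma**2 * R) / kappa
    if np.linalg.norm(W_new - W) < 1e-12:
        break
    W = W_new
print(np.round(W, 4))   # the unique equilibrium for weak coupling (Theorem 3.4)
```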
Theorem 3.4 If Assumption 3.1 is verified, then there is a unique equilibrium point in the invariant subset, which is globally asymptotically stable.
Proof See Theorem B.4 in Appendix B.2. □
The fact that the equilibrium point is unique means that the ‘knowledge’ of the network about its environment (corresponding by hypothesis to the connectivity) is eventually unique. For a given input and any initial condition, the network can only converge to the same ‘knowledge’ or ‘understanding’ of this input.
3.2.4 Explicit expansion of the equilibrium point
When the network is weakly connected, the high-order terms in expansion (15) may be neglected. In this section, we follow this idea and find an explicit expansion for the equilibrium connectivity where the strength of the connectivity is the small parameter enabling the expansion. The weaker the connectivity, the more terms can be neglected in the expansion.
In the appropriate asymptotic regime, we obtain a small index which quantifies the strength of the connectivity; this index is the ‘small’ parameter needed to perform the expansion. We also define the ratio of its two terms, which carries information about the way this index converges to zero.
With these, we can prove that the equilibrium connectivity has the following asymptotic expansion in this small parameter.
Proof See Theorem B.5 in Appendix B.2. □
Not only is the spatial correlation encoded in the weights, but so is some information about the temporal correlations: two successive but orthogonal events occurring in the inputs will be wired together in the connectivity although they do not appear in the spatial correlations; see Figure 3 for an example.
3.3 Trace learning: band-pass filter effect
In this section, we study an improvement of the learning model by adding a certain form of history dependence in the system and explain the way it changes the results of the previous section. Given that Theorem 2.2 only applies to an instantaneous process, we will only be able to treat the history-dependent systems which can be reformulated as instantaneous processes. Actually, this class of systems contains models which are biologically more relevant than the previous model and which will exhibit interesting additional functional behaviors. In particular, this covers the following features:
where ∗ is the convolution. Rolls and Deco numerically show that the temporal convolution, leading to a spatio-temporal learning, makes it possible to perform invariant object recognition. Besides, trace learning appears to be the symmetric part of the biological STDP rule that we detail in Section 3.4.
Damped oscillatory neurons.
The electro-chemical process of synaptic communication is very complicated and non-linear. Yet, one of the features of synaptic communication we can take into account in a linear model is the shape of the post-synaptic potentials. In this section, we consider that each synapse is a linear filter whose finite impulse response (i.e., the post-synaptic potential) has a decaying exponential shape. This is a common assumption which, for instance, is at the basis of traditional rate-based models; see Chapter 11 of .
where the notations are the same as in Section 3.2. The behavior of a single neuron will be damped oscillatory if Δ is a pure imaginary number. This is the regime on which we focus. Actually, the Hebbian linear case of Section 3.2 can be recovered as a particular case of this delayed system.
This trick makes it possible to deal with some history-based processes where the dependence on the past is exponential.
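A generic instance of the trick, with a an arbitrary positive rate: an exponentially weighted trace of the activity is itself the solution of an instantaneous equation,

```latex
y(t) = \int_0^t e^{-a (t-s)}\, v(s)\, ds
\quad\Longleftrightarrow\quad
\dot{y}(t) = -a\, y(t) + v(t), \qquad y(0) = 0,
```

so the convolution can be absorbed into an augmented state (v, y, W) to which Theorem 2.2 applies.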
Observe that applying Young's inequality to convolutions leads to an upper bound on the L¹ norm. Actually, Lemma C.3 gives an exact expression involving the Bessel function of the first kind. The value of the L¹ norm of v is computed in Appendix C.3; it depends on whether Δ is a pure imaginary number or not.
As before, the existence and uniqueness of a globally attractive equilibrium point is guaranteed if Assumption 3.1 is verified; see Theorem B.9.
3.4 Asymmetric ‘STDP’ learning with correlated noise
where the non-linear intrinsic dynamics of the neurons is represented by f. Indeed, the corresponding term is negligible when the neuron is quiet and maximal at the top of the spikes emitted by neuron i. Therefore, it records the value of the pre-synaptic membrane potential, weighted by this function, when the post-synaptic neuron spikes. This accounts for the positive part of Figure 6. Similarly, the negative part corresponds to the symmetric term with the roles of the pre- and post-synaptic neurons exchanged.
Actually, this formulation is valid for any non-linear activity with correlated noise. However, studying the role of STDP in spiking networks is beyond the scope of this paper since we are only able to obtain explicit results for models with linear activity. Therefore, we will assume that the activity is linear while keeping the learning rule as it was derived in the spiking case, i.e., we assume that the intrinsic dynamics f is linear in the system above.
In this framework, the method presented in Section 3.2 holds with small changes. First, the well-posedness assumption becomes
where the relevant constant is the maximal eigenvalue of the noise covariance matrix ΣΣᵀ.
According to Theorem B.13, the system is also globally asymptotically convergent to a single equilibrium, which we study in the following.
Therefore, the STDP learning rule simply adds an antisymmetric part to the final connectivity while keeping the symmetric part as in the Hebbian case. Besides, the antisymmetric part corresponds to the cross-correlation of the inputs with their time derivative. For higher-order terms, this remains true although the temporal profiles are different from the first order. These results are in line with previous works underlining the similarity between STDP learning and differential Hebbian learning; see .
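Schematically, the structure of the equilibrium connectivity W∞ described here can be summarized as follows (a sketch of the first-order picture, writing C(·, ·) for the cross-correlation of Appendix B.1):

```latex
W_\infty = \underbrace{W_{\mathrm{sym}}}_{\text{Hebbian part:}\ \propto\, C(u,u)\ \text{and noise}}
 + \underbrace{W_{\mathrm{skew}}}_{\propto\; C(u,\dot{u})},
\qquad
W_{\mathrm{sym}} = \tfrac12\,(W_\infty + W_\infty^{T}),
\quad
W_{\mathrm{skew}} = \tfrac12\,(W_\infty - W_\infty^{T}).
```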
We have applied temporal averaging methods to slow/fast systems modeling the learning mechanisms occurring in linear stochastic neural networks. When we make sure the connectivity remains small, the dynamics of the averaged system appears to be simple: the connectivity always converges to a unique equilibrium point. Then, we performed a weakly connected expansion of this final connectivity whose terms are combinations of the noise covariance and the lagged correlations of the inputs: the first-order term is simply the sum of the noise covariance and the correlation of the inputs.
As opposed to the former input/output vision of the neurons, we have considered the membrane potential v to be the solution of a dynamical system. The consequence of this modeling choice is that not only the spatial correlations but also the temporal correlations are learned. Because we take the transients into account, the activity never converges: it lives between the representations of the inputs. Therefore, it links concepts together.
The parameter μ is the ratio of the time-scales between the inputs and the activity variable. If μ → 0, the inputs are infinitely slow and the activity variable has enough time to converge towards its equilibrium point. When μ grows, the dynamics becomes more and more transient: it has no time to converge. Therefore, if the inputs are extremely slow, the network only learns the spatial correlations of the inputs. If the inputs are fast, it also learns the temporal correlations. This is illustrated in Figure 3.
This suggests that learning associations between concepts, for instance, learning words in two different languages, may be optimized by presenting the two words to be associated cyclically with a certain frequency. Indeed, increasing the frequency (with a fixed duration of exposure to each word) amounts to increasing μ. Therefore, the network learns the temporal correlations of the inputs better and thus strengthens the link between these two concepts.
According to the resonator neuron model, Section 3.3 suggests that neurons and synapses with a preferred frequency of oscillation will preferentially extract the correlation of the inputs filtered by a band-pass filter centered on the intrinsic frequency of the neurons.
Actually, it has been observed that the auditory cortex is tonotopically organized, i.e., the neurons are arranged by frequency. It is traditionally thought that this is achieved thanks to a particular connectivity between the neurons. We exhibit here another mechanism to select this frequency which is solely based on the parameters of the neurons: a network with many different neurons whose intrinsic frequencies are uniformly spread is likely to perform a Fourier-like operation, decomposing the signal by frequency.
In particular, this emphasizes the fact that the network does not treat space and time similarly. Roughly speaking, associating several pictures and associating several sounds are therefore two different tasks which involve different mechanisms.
In this paper, the original hierarchy of the network has been neglected: the network is made of neurons which receive external inputs. A natural way to include a hierarchical structure (with layers, for instance), without changing the setup of the paper, is therefore to remove the external input to some neurons. However, according to Theorem 3.5 (and its extensions, Theorems B.10 and B.14), one can see that these neurons will be disconnected from the others at first order (if the noise is spatially uncorrelated). Linear activities imply that the high-level neurons disconnect from the others, which is a problem. However, one can observe that the second-order term in Theorem 3.5 is not null if the noise matrix Σ is not diagonal: it is the noise shared between neurons which recruits the high-level neurons and builds connections from and to them.
It is likely that a significant part of noise in the brain is locally induced, e.g., local perturbations due to blood vessels or local chemical signals. In a way, the neurons close to each other share their noise and it seems reasonable to choose the matrix Σ so that it reflects the biological proximity between neurons. In fact, Σ specifies the original structure of the network and makes it possible for close-by neurons to recruit each other.
Another idea to address hierarchy in networks would be to replace the synaptic decay term by another homeostatic term which would enforce the emergence of a strong hierarchical structure.
It is also interesting to observe that most of the noise contribution to the equilibrium connectivity for STDP learning (see Theorem B.14) vanishes if the learning is purely skew-symmetric, i.e., if its symmetric part is zero. In fact, it is only the symmetric part of learning, i.e., the Hebbian mechanism, that writes the noise into the connectivity.
We have shown that there is a natural analogous STDP learning for spiking neurons in our case of linear neurons. This asymmetric rule converges to a final connectivity which can be decomposed into symmetric and skew-symmetric parts. The first one is similar to the symmetric Hebbian learning case, emphasizing that the STDP is nothing more than an asymmetric Hebbian-like learning rule. The skew-symmetric part of the final connectivity is the cross-correlation between the inputs and their derivatives.
In a way, the noise terms generate random patterns which tend to be forgotten by the network due to the leak term. The only drift is due to the term corresponding to the expectation of the vector field defining the dynamics of the inputs, weighted by the scalar product between the activity variable and the inputs. In other words, if the activity is close to the inputs at a given time, i.e., this scalar product is large, then the activity will evolve in the same direction as the input would have done. The network has modeled the temporal structure of the inputs. The spontaneous activity predicts and replays the inputs the network has learned.
There are still numerous challenges to pursue in this direction.
First, it seems natural to look for an application of these mathematical methods to more realistic models. The two main limitations of the class of models we study in Section 3 are that (i) the activity variable is governed by a linear equation and (ii) all the neurons are assumed to be identical. The mathematical analysis in this paper was made possible by the assumption that the neural network has linear dynamics, which does not reflect the intrinsic non-linear behavior of the neurons. However, the cornerstone of the application of temporal averaging methods to a learning neural network, namely Property 3.3, is similar to the behavior of Poisson processes, which has useful applications for learning neural networks [19, 20]. This suggests that the dynamics studied in this paper might be quite similar to that of some non-linear network models. Studying more rigorously the extension of the present theory to non-linear and heterogeneous models is the next step toward a better modeling of biologically plausible neural networks.
Second, we have shown that the equilibrium connectivity was made of a symmetric and antisymmetric term. In terms of statistical analysis of data sets, the symmetric part corresponds to classical correlation matrices. However, the antisymmetric part suggests a way to improve the purely correlation-based approach used in many statistical analyses (e.g., PCA) toward a causality-oriented framework which might be better suited to deal with dynamical data.
A.1 Long-time behavior of inhomogeneous Markov processes
where the coefficients are τ-periodic.
The first point of the following theorem gives the definition of evolution systems of measures, which generalize the notion of invariant measures in the case of inhomogeneous Markov processes. The exponential estimate in point 2 of the following theorem is a key step in proving the averaging principle of Theorem 2.2.
Theorem A.1
- 1. There exists a unique τ-periodic family of probability measures such that for all functions ϕ continuous and bounded,
- 2. Furthermore, under a stronger dissipativity condition, the convergence of the law of X to μ is exponentially fast. More precisely, for any p, there exist constants such that for all ϕ in the space of p-integrable functions with respect to μ,
A.2 Proof of Property 2.3
The functions F, G, Σ satisfy Assumptions 2.1-2.3 restricted to ℰ.
ℰ is invariant under the flow of the averaged vector field, as defined in (7).
Then for any initial condition in ℰ, system (4) is asymptotically well posed in probability and satisfies the conclusion of Theorem 2.2.
with the same initial condition as the original process.
Finally, we define and .
Let us remark at this point that in order to prove our claim, it is sufficient to work on a bounded stopping time. In other words, the realizations which stay longer than T inside ℰ are not problematic. Therefore, we introduce the stopping time capped at T.
- 1. For any β > 0, one controls the difference between the two processes before the stopping time since one controls the difference between the drifts. By an application of Lemma A.3 below (we need here the moment Assumption 2.3(i)), there exists a constant C such that (21)
- 2. One now needs to control the situation after the stopping time. The idea is that while one no longer controls the difference between G and its average, one can still choose β sufficiently small such that the stopped process becomes arbitrarily close to ℰ, hence implying that the two processes are arbitrarily close with high probability, namely (23)
where we have used the Cauchy-Schwarz inequality and the moment Assumption 2.3(ii) in the second line.
So, we deduce by the Markov inequality that the difference is arbitrarily small in probability.
In the following lemma, we show that the solutions of two SDEs, whose drifts are close on a subset of the state space, remain close on a finite time interval. The difficulty here lies in the fact that we deal with only locally Lipschitz coefficients.
- 1. Approximation assumption:
- 2. Local Lipschitz assumption: for all R > 0, there exists a constant such that
- 3. Boundedness assumption: there exist constants such that
and if , then there exists such that .
Proof Although the Lipschitz constant is not bounded on ℋ, we can use the boundedness assumption to show that the probability of reaching a level R before time T will be very small for large R, and then use the classical strategy inside the ball of radius R, where everything works as if the coefficients were globally Lipschitz. A similar strategy is used in  to prove a strong convergence theorem for the Euler scheme without the global Lipschitz assumption. We adapt here the ideas of their proof to our setting.
We also denote .
B.1 Notations and definitions
Throughout the paper, lower-case normal letters are constants, lower-case bold letters are vectors or vector-valued functions, and upper-case bold letters are matrices.
l, κ, σ, μ, and τ are parameters of the network. We also define an additional parameter for Section 3.3 and a fixed noise matrix for Section 3.4.
n is the number of neurons in the network.
v ∈ ℝⁿ is the field of membrane potentials in the network.
u is the field of inputs to the network. We write
u ⊗ v is the tensor product between u and v, which simply means (u ⊗ v)ᵢⱼ = uᵢvⱼ.
W ∈ ℝⁿˣⁿ is the connectivity of the network. Throughout the paper, we assume the initial connectivity is zero.
⟨u, v⟩ is the scalar product between two vectors u, v ∈ ℝⁿ.
‖v‖ for v ∈ ℝⁿ is the Euclidean norm of v, i.e., ‖v‖² = Σᵢ vᵢ². And similarly for the connectivity matrices of ℝⁿˣⁿ, with a double sum.
Wᵀ is the transpose of the matrix W.
C(u, v) is the cross-correlation matrix of two compactly supported and differentiable functions from ℝ to ℝⁿ, i.e.,
H is the Heaviside function, i.e., H(t) = 1 if t ≥ 0 and H(t) = 0 otherwise.
The real functions (30) are integrable on ℝ.
B.1.1 Notations for the Appendix
This suggests one should think of v as a semi-continuous matrix with n rows and a continuous column index, and of its transpose as a continuous matrix with n columns. Indeed, in this framework the convolution with g is nothing but the continuous matrix multiplication between v and a continuous Toeplitz matrix generated row by row by g. Hence, the operator ‘⋅’ can be thought of as a matrix multiplication.
where the bold curved letters are the associated continuous matrices. More generally, the bold curved letters represent these continuous Toeplitz matrices, which are well defined through their action as convolution operators with g, v, and w. The previous formulation naturally expresses the symmetry of relation (14).
B.2 Hebbian learning with linear activity
In this part, we consider system (12).
B.2.1 Application of temporal averaging theory
where v̄ is the periodic attractor of the frozen fast equation, the connectivity W being supposed fixed.
Computation of the averaged vector field:
where the density is the probability density function of the Gaussian law with mean v̄ and covariance Q, and Q is the unique solution of (9) with A = l Id − W. This leads to the expression of the averaged vector field.
Invariance of the subset under the flow of (13):
The argument here is that of an inward-pointing flow. We intend to find a condition under which the flow points inward on the boundary of the subset. Roughly speaking, this will be done by defining a real-valued function g, strictly negative on the subspace and positive outside, and then showing that on the border its gradient (or differential) and the flow point in opposite directions.