Multiscale analysis of slow-fast neuronal learning models with noise
 Mathieu Galtier^{1, 2} and
 Gilles Wainrib^{3}
DOI: 10.1186/2190-8567-2-13
© M. Galtier, G. Wainrib; licensee Springer 2012
Received: 19 April 2012
Accepted: 26 October 2012
Published: 22 November 2012
Abstract
This paper deals with the application of temporal averaging methods to recurrent networks of noisy neurons undergoing a slow and unsupervised modification of their connectivity matrix called learning. Three timescales arise for these models: (i) the fast neuronal dynamics, (ii) the intermediate external input to the system, and (iii) the slow learning mechanisms. Based on this timescale separation, we apply an extension of the mathematical theory of stochastic averaging with periodic forcing in order to derive a reduced deterministic model for the connectivity dynamics. We focus on a class of models where the activity is linear to understand the specificity of several learning rules (Hebbian, trace or antisymmetric learning). In a weakly connected regime, we study the equilibrium connectivity which gathers the entire ‘knowledge’ of the network about the inputs. We develop an asymptotic method to approximate this equilibrium. We show that the symmetric part of the connectivity post-learning encodes the correlation structure of the inputs, whereas the antisymmetric part corresponds to the cross-correlation between the inputs and their time derivative. Moreover, the ratio of timescales appears as an important parameter revealing temporal correlations.
Keywords
slow-fast systems, stochastic differential equations, inhomogeneous Markov process, averaging, model reduction, recurrent networks, unsupervised learning, Hebbian learning, STDP
1 Introduction
Complex systems are made of a large number of interacting elements leading to non-trivial behaviors. They arise in various areas of research such as biology, social sciences, physics or communication networks. In particular in neuroscience, the nervous system is composed of billions of interconnected neurons interacting with their environment. Two specific features of this class of complex systems are that (i) external inputs and (ii) internal sources of random fluctuations influence their dynamics. Their theoretical understanding is a great challenge and involves high-dimensional nonlinear mathematical models integrating non-autonomous and stochastic perturbations.
Modeling these systems gives rise to many different scales both in space and in time. In particular, learning processes in the brain involve three timescales: from neuronal activity (fast) and external stimulation (intermediate) to synaptic plasticity (slow). Here, the fast timescale corresponds to a few milliseconds and the slow timescale to minutes/hours, and the intermediate timescale generally ranges between the fast and slow scales, although some stimuli may be faster than the neuronal activity timescale (e.g., sub-millisecond auditory signals [1]). The separation of these timescales is an important and useful property in their study. Indeed, multiscale methods appear particularly relevant to handle and simplify such complex systems.
First, the stochastic averaging principle [2, 3] is a powerful tool to analyze the impact of noise on slow-fast dynamical systems. This method relies on approximating the fast dynamics by its quasi-stationary measure and averaging the slow evolution with respect to this measure. In the asymptotic regime of perfect timescale separation, this leads to a reduced slow system whose analysis enables a better understanding of the original stochastic model.
Second, periodic averaging theory [4], originally developed for celestial mechanics, is particularly relevant to study the effect of fast deterministic and periodic perturbations (external input) on dynamical systems. This method also leads to a reduced model where the external perturbation is time-averaged.
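To make the periodic averaging idea concrete, here is a minimal numerical sketch (ours, not from the paper) on a toy scalar equation $\dot{x}=\epsilon\sin^{2}(t)\,x$: since the mean of $\sin^{2}$ over one period is $1/2$, the solution stays close, on time horizons of order $1/\epsilon$, to the solution of the time-averaged equation $\dot{x}=(\epsilon/2)x$. All parameter values are illustrative.

```python
import math

def simulate(eps=0.01, dt=1e-3, x0=1.0):
    """Euler integration of dx/dt = eps * sin(t)^2 * x up to T = 1/eps."""
    T = 1.0 / eps                     # averaging is relevant on O(1/eps) horizons
    x, t = x0, 0.0
    for _ in range(int(round(T / dt))):
        x += dt * eps * math.sin(t) ** 2 * x
        t += dt
    return x, T

x_full, T = simulate()
x_avg = math.exp(0.01 * T / 2)        # solution of the averaged equation dx/dt = (eps/2) x
print(x_full, x_avg)                  # the two stay O(eps)-close on [0, T]
```

On longer horizons (much larger than $1/\epsilon$) the two trajectories may drift apart; the averaging approximation is only guaranteed on times of order $1/\epsilon$.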
where $\mathbf{v}^{\epsilon}\in\mathbb{R}^{p}$ represents the fast activity of the individual elements, $\mathbf{w}^{\epsilon}\in\mathbb{R}^{q}$ represents the connectivity weights that vary slowly due to plasticity, and $\mathbf{u}(t)\in\mathbb{R}^{p}$ represents the value of the external input at time t. Random perturbations are included in the form of a diffusion term, and $(B(t))$ is a standard Brownian motion.
where $\overline{G}$ is constructed as an average of G with respect to a specific probability measure, as explained in Section 2.
This paper first introduces the appropriate mathematical framework and then focuses on applying these multiscale methods to learning neural networks.
The individual elements of these networks are neurons or populations of neurons. A common assumption at the basis of mathematical neuroscience [7] is to model their behavior by a stochastic differential equation which is made of four different contributions: (i) an intrinsic dynamics term, (ii) a communication term, (iii) a term for the external input, and (iv) a stochastic term for the intrinsic variability. Assuming that their activity is represented by the fast variable $\mathbf{v}\in\mathbb{R}^{n}$, the first equation of system (1) is a generic representation of a neural network (function F corresponds to the first three terms contributing to the dynamics). In the literature, the level of nonlinearity of the function F ranges from a linear (or almost-linear) system to spiking neuron dynamics [8], yet the structure of the system is universal.
These neurons are interconnected through a connectivity matrix which represents the strength of the synapses connecting the real neurons together. The slow modification of the connectivity between the neurons is commonly thought to be the essence of learning. Unsupervised learning rules update the connectivity exclusively based on the value of the activity variable. Therefore, this mechanism is represented by the slow equation above, where $\mathbf{w}\in\mathbb{R}^{n\times n}$ is the connectivity matrix and G is the learning rule. Probably the most famous of these rules is the Hebbian learning rule introduced in [9]. It says that if both neurons A and B are active at the same time, then the synapses from A to B and from B to A should be strengthened proportionally to the product of the activity of A and B. There are many different variations of this correlation-based principle, which can be found in [10, 11]. Another recent, unsupervised, biologically motivated learning rule is spike-timing-dependent plasticity (STDP), reviewed in [12]. It is similar to Hebbian learning except that it focuses on causation instead of correlation and that it occurs on a faster timescale. Both of these types of rules correspond to G being quadratic in v.
Previous literature about dynamic learning networks is extensive, yet we take a significantly different approach to the problem. A historical focus was the understanding of feedforward deterministic networks [13–15]. Another approach consisted in precomputing the connectivity of a recurrent network according to the principles underlying the Hebbian rule [16]. Actually, most current research in the field is focused on STDP and is based on the precise times of the spikes, making them explicit in computations [17–20]. Our approach differs from the others regarding at least one of the following points: (i) we consider recurrent networks, (ii) we study the evolution of the coupled system activity/connectivity, and (iii) we consider bounded dynamical systems for the activity without requiring them to be spiking. Besides, our approach is a rigorous mathematical analysis in a field where most results rely heavily on heuristic arguments and numerical simulations. To our knowledge, this is the first time such models expressed in a slow-fast SDE formalism are analyzed using temporal averaging principles.
The purpose of this application is to understand what the network learns from the exposure to time-dependent inputs. In other words, we are interested in the evolution of the connectivity variable, which evolves on a slow timescale, under the influence of the external input and some noise added on the fast variable. More precisely, we intend to explicitly compute the equilibrium connectivities of such systems. This final matrix corresponds to the knowledge the network has extracted from the inputs. Although the derivation of the results is mathematically demanding for untrained readers, we have tried to extract widely understandable conclusions from our mathematical results, and we believe this paper brings novel elements to the debate about the role and mechanisms of learning in large-scale networks.
Although the averaging method is a generic principle, we have made significant assumptions to keep the analysis of the averaged system mathematically tractable. In particular, we will assume that the activity evolves according to a linear stochastic differential equation. This is not very realistic when modeling individual neurons, but it is more reasonable as a model of populations of neurons; see Chapter 11 of [7].
The paper is organized as follows. Section 2 is devoted to introducing the temporal averaging theory. Theorem 2.2 is the main result of this section. It provides the technical tool to tackle learning neural networks. Section 3 applies the mathematical tools developed in the previous section to models of learning neural networks. A generic model is described and three particular models of increasing complexity are analyzed: first Hebbian learning, then trace learning, and finally STDP learning, in each case for linear activities. Finally, Section 4 discusses the consequences of the previous results from the viewpoint of their biological interpretation.
2 Averaging principles: theory
In this section, we present multiscale theoretical results concerning stochastic averaging of periodically forced SDEs (Section 2.3). These results combine ideas from singular perturbations, classical periodic averaging and stochastic averaging principles. Therefore, we recall briefly, in Sections 2.1 and 2.2, several basic features of these principles, providing several examples that are closely related to the application developed in Section 3.
2.1 Periodic averaging principle
We present here an example of a slow-fast ordinary differential equation perturbed by a fast external periodic input. We have chosen this example since it readily illustrates many ideas that will be developed in the following sections. In particular, this example shows how the ratio between the timescale separation of the system and the timescale of the input appears as a new crucial parameter.
In this system, one can distinguish various asymptotic regimes when $\epsilon_{1}$ and $\epsilon_{2}$ are small according to the asymptotic value of μ:

Regime 1: Slow input $\mu =0$:
since $\frac{1}{2\pi}\int_{0}^{2\pi}\sin^{2}(s)\,ds=\frac{1}{2}$.

Regime 2: Fast input $\mu =\mathrm{\infty}$:
and when $\epsilon_{1}\to 0$, one does not recover the same asymptotic behavior as in Regime 1.

Regime 3: Timescales matching $0<\mu <\mathrm{\infty}$:
since $\frac{1}{2\pi}\int_{0}^{2\pi}\overline{v}_{\mu}(t/\mu)^{2}\,dt=\frac{1}{2(1+\mu^{2})}$.
 1.
the two limits $\epsilon_{1}\to 0$ and $\epsilon_{2}\to 0$ do not commute,
 2.
the ratio μ between the internal timescale separation $\epsilon_{1}$ and the input timescale $\epsilon_{2}$ is a key parameter in the study of slow-fast systems subject to a time-dependent perturbation.
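The displayed equations of this example did not survive extraction. Assuming Example 2.1 is the scalar slow-fast system $\epsilon_{1}\dot{v}=-v+\sin(t/\epsilon_{2})$, $\dot{w}=-w+v^{2}$ (an assumption consistent with the averages quoted above), the regime-3 constant can be recovered in two lines:

```latex
% In the fast time s = t/\epsilon_1, the fast equation reads v' = -v + \sin(\mu s).
% Its attracting periodic solution, and the average of its square over one period, are
\begin{align}
  \overline{v}_{\mu}(s)
    &= \frac{\sin(\mu s)-\mu\cos(\mu s)}{1+\mu^{2}}, \\
  \frac{\mu}{2\pi}\int_{0}^{2\pi/\mu} \overline{v}_{\mu}(s)^{2}\,ds
    &= \frac{\tfrac{1}{2}+\tfrac{1}{2}\mu^{2}}{(1+\mu^{2})^{2}}
     = \frac{1}{2(1+\mu^{2})}.
\end{align}
```

This value interpolates between $1/2$ (Regime 1, $\mu\to 0$) and $0$ (Regime 2, $\mu\to\infty$), showing how μ controls how much of the input the fast variable can track.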
2.2 Stochastic averaging principle
with initial conditions $\mathbf{v}^{\epsilon}(0)=\mathbf{v}_{0}$, $\mathbf{w}^{\epsilon}(0)=\mathbf{w}_{0}$, and where $\mathbf{w}^{\epsilon}\in\mathbb{R}^{q}$ is called the slow variable, $\mathbf{v}^{\epsilon}\in\mathbb{R}^{p}$ the fast variable, with F, G, Σ smooth functions ensuring existence and uniqueness of the solution $(\mathbf{v}^{\epsilon},\mathbf{w}^{\epsilon})$, and $B(t)$ a p-dimensional standard Brownian motion, defined on a filtered probability space $(\Omega,\mathcal{F},\mathbb{P})$. Timescale separation is encoded in the small parameter ϵ, which denotes in this section a single positive real number.
and w the solution of $\frac{d\mathbf{w}}{dt}=\overline{G}(\mathbf{w})$ with the initial condition $\mathbf{w}(0)=\mathbf{w}_{0}$. Under some dissipativity assumptions, the stochastic averaging principle [2] states:
As a consequence, analyzing the behavior of the deterministic solution w can help to understand useful features of the stochastic process $(\mathbf{v}^{\epsilon},\mathbf{w}^{\epsilon})$.
Interestingly, the asymptotic behavior of $\mathbf{w}^{\epsilon}$ for small ϵ is characterized by a deterministic trajectory that depends on the strength σ of the noise applied to the system. Thus, the stochastic averaging principle appears particularly interesting when unraveling the impact of noise strength on slow-fast systems.
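A minimal Euler–Maruyama sketch (ours; the toy system and all parameter values are illustrative, not the paper's) of this noise dependence: for $dv=-\frac{1}{\epsilon}v\,dt+\frac{\sigma}{\sqrt{\epsilon}}\,dB$, $dw=(-w+v^{2})\,dt$, the averaged equation is $\dot{w}=-w+\sigma^{2}/2$, so the slow variable relaxes towards $\sigma^{2}/2$: the noise strength survives in the deterministic limit.

```python
import math, random

def slow_fast(eps=0.002, sigma=1.0, T=8.0, dt=2e-5, seed=0):
    """Euler-Maruyama for dv = -(v/eps) dt + (sigma/sqrt(eps)) dB,
    dw = (-w + v^2) dt.  Returns w(T)."""
    rng = random.Random(seed)
    v, w = 0.0, 0.0
    sq = sigma * math.sqrt(dt / eps)      # noise increment std per step
    for _ in range(int(T / dt)):
        v += -(v / eps) * dt + sq * rng.gauss(0.0, 1.0)
        w += (-w + v * v) * dt
    return w

w_T = slow_fast()
print(w_T)   # close to the averaged equilibrium sigma^2/2 = 0.5
```

The fast variable is an Ornstein–Uhlenbeck process with stationary variance $\sigma^{2}/2$; the slow variable effectively averages $v^{2}$ against that quasi-stationary measure, as the stochastic averaging principle predicts.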
Many other results have been developed since, extending the setup to the case where the slow variable has a diffusion component or to infinite-dimensional settings for instance, and also refining the convergence study, providing homogenization results concerning the limit of $\epsilon^{-1/2}(\mathbf{w}^{\epsilon}-\mathbf{w})$ or establishing large deviation principles (see [23] for a recent monograph). However, fewer results are available in the case of non-homogeneous SDEs, that is, when the system is perturbed by an external time-dependent signal. This setting is of particular interest in the framework of stochastic learning models, and we present the main relevant mathematical results in the following section.
2.3 Double averaging principle
with $t\to F(\mathbf{v},\mathbf{w},t)\in\mathbb{R}^{p}$ a τ-periodic function and $\epsilon=(\epsilon_{1},\epsilon_{2})\in\mathbb{R}_{+}^{2}$. Parameter $\epsilon_{1}$ represents the internal timescale separation and $\epsilon_{2}$ the input timescale. We consider the case where both $\epsilon_{1}$ and $\epsilon_{2}$ are small, that is, a strong timescale separation between the fast variable $\mathbf{v}^{\epsilon}\in\mathbb{R}^{p}$ and the slow one $\mathbf{w}^{\epsilon}\in\mathbb{R}^{q}$, and a fast periodic modulation of the fast drift $F(\mathbf{v},\mathbf{w},\cdot)$.
We further denote $\mathbf{z}=(\mathbf{v},\mathbf{w})$.
Accordingly, we denote $\lim_{\epsilon\to 0}^{\mu}$ the distinguished limit when $\epsilon_{1}\to 0$, $\epsilon_{2}\to 0$ with $\epsilon_{1}/\epsilon_{2}\to\mu$.
The following assumption is made to ensure existence and uniqueness of a strong solution to system (4). In the following, $\langle\mathbf{z}_{1},\mathbf{z}_{2}\rangle$ will denote the usual scalar product for vectors.
 (i) The functions F, G, and Σ are locally Lipschitz continuous in the space variable z. More precisely, for any $R>0$, there exists a constant $\alpha_{R}$ such that $\|F(\mathbf{z})-F(\mathbf{z}')\|\le\alpha_{R}\|\mathbf{z}-\mathbf{z}'\|$ for any $\mathbf{z},\mathbf{z}'\in\mathbb{R}^{p+q}$ with $\|\mathbf{z}\|\le R$ and $\|\mathbf{z}'\|\le R$.
 (ii) There exists a constant $R>0$ such that $\sup_{\|\mathbf{z}\|>R,t>0}\frac{\langle(F(\mathbf{z},t),G(\mathbf{z})),\mathbf{z}\rangle}{\|\mathbf{z}\|^{2}}<0$.
To control the asymptotic behavior of the fast variable, one further assumes the following.
 (i) The diffusion matrix Σ is bounded: $\exists M_{\mathbf{\Sigma}}>0$ s.t. $\forall\mathbf{z},\ \|\mathbf{\Sigma}(\mathbf{z})\|\le M_{\mathbf{\Sigma}}$.
 (ii) There exists $r_{0}<0$ such that for all $t\ge 0$ and for all $\mathbf{z},\mathbf{x}\in\mathbb{R}^{p+q}$, $\langle\nabla_{\mathbf{z}}F(\mathbf{z},t)\cdot\mathbf{x},\mathbf{x}\rangle\le r_{0}\|\mathbf{x}\|^{2}$.
According to the value of $\mu\in\{0\}\cup\mathbb{R}_{+}^{\ast}\cup\{\infty\}$, the stochastic averaging principle is based on a description of the asymptotic behavior of various rescaled fast frozen processes. More precisely, under Assumptions 2.1 and 2.2, one can deduce that:

For any fixed $\mathbf{w}_{0}\in\mathbb{R}^{q}$ and $t_{0}>0$, the law of the rescaled time-homogeneous frozen process, $d\mathbf{v}=F(\mathbf{v},\mathbf{w}_{0},t_{0})\,dt+\mathbf{\Sigma}(\mathbf{v},\mathbf{w}_{0})\,dB(t)$,
converges exponentially fast to a unique invariant probability measure denoted by ${\rho}^{{\mathbf{w}}_{0},{t}_{0}}(d\mathbf{v})$.

For any fixed $\mathbf{w}_{0}\in\mathbb{R}^{q}$, there exists a $\frac{\tau}{\mu}$-periodic evolution system of measures $\nu_{\mu}^{\mathbf{w}_{0}}(t,d\mathbf{v})$, different from $\rho^{\mathbf{w}_{0},t}(d\mathbf{v})$ above, such that the law of the rescaled time-inhomogeneous frozen process, $d\mathbf{v}=F(\mathbf{v},\mathbf{w}_{0},\mu t)\,dt+\mathbf{\Sigma}(\mathbf{v},\mathbf{w}_{0})\,dB(t)$, (6)
converges exponentially fast towards ${\nu}_{\mu}^{{\mathbf{w}}_{0}}(t,\cdot )$, uniformly with respect to ${\mathbf{w}}_{0}$ (cf. the Appendix Theorem A.1).

For any fixed $\mathbf{w}_{0}\in\mathbb{R}^{q}$, the law of the rescaled time-homogeneous frozen process, $d\mathbf{v}=\overline{F}(\mathbf{v},\mathbf{w}_{0})\,dt+\mathbf{\Sigma}(\mathbf{v},\mathbf{w}_{0})\,dB(t)$,
where $\overline{F}(\mathbf{v},\mathbf{w}_{0}):=\tau^{-1}\int_{0}^{\tau}F(\mathbf{v},\mathbf{w}_{0},t)\,dt$, converges exponentially fast towards a unique invariant probability measure denoted by $\overline{\rho}^{\mathbf{w}_{0}}(d\mathbf{v})$.
According to the value of μ, we introduce a vector field ${\overline{G}}_{\mu}$ which will play a role similar to $\overline{G}$ introduced in equation (2).
Notation We may denote the periodic system of measures ${\nu}_{\mu}^{\mathbf{w}}(t,d\mathbf{v})$ associated with (6) by ${\nu}_{\mu}^{\mathbf{w}}[F,\mathbf{\Sigma}](t,d\mathbf{v})$ to emphasize its relationship with F and Σ. Accordingly, we may denote ${\overline{G}}_{\mu}(\mathbf{w})$ by ${\overline{G}}_{\mu}^{[F,\mathbf{\Sigma}]}(\mathbf{w})$.
We are now able to present our main mathematical result. Extending Theorem 2.1, the following theorem describes the asymptotic behavior of the slow variable $\mathbf{w}^{\epsilon}$ when $\epsilon\to 0$ with $\epsilon_{1}/\epsilon_{2}\to\mu$. We refer to [6] for more details about the full mathematical proof of this result.
 1.
The extremal cases $\mu=0$ and $\mu=\infty$ are not covered in full rigor by Theorem 2.2. However, the study of the sequential limits $\epsilon_{1}\to 0$ followed by $\epsilon_{2}\to 0$, or $\epsilon_{2}\to 0$ followed by $\epsilon_{1}\to 0$, can be deduced from an appropriate combination of classical periodic and stochastic averaging theorems:

Slow input: If the limit $\epsilon_{1}\to 0$ is taken first, then Theorem 2.1, applied with fast variable $\mathbf{v}^{\epsilon}$ and slow variables $\mathbf{w}^{\epsilon}$ and t (with the trivial equation $\dot{t}=1$), shows that $\mathbf{w}^{\epsilon}$ is close in probability on finite time-intervals to the solution of the following inhomogeneous ordinary differential equation: $\frac{d\tilde{\mathbf{w}}}{dt}=\int_{\mathbf{v}\in\mathbb{R}^{p}}G(\mathbf{v},\tilde{\mathbf{w}})\rho^{\tilde{\mathbf{w}},t/\epsilon_{2}}(d\mathbf{v}):=\tilde{G}(\tilde{\mathbf{w}},t/\epsilon_{2})$.

Fast input: If the limit $\epsilon_{2}\to 0$ is taken first, one first has to perform a classical averaging of the periodic drift $F(\mathbf{v},\mathbf{w},t/\epsilon_{2})$, leading to the homogeneous system of SDEs (4), but with $\overline{F}(\mathbf{v},\mathbf{w})$ instead of $F(\mathbf{v},\mathbf{w},t/\epsilon_{2})$. Then, an application of Theorem 2.1 to this system gives an averaged vector field $\overline{G}_{\infty}(\mathbf{w}):=\int_{\mathbf{v}\in\mathbb{R}^{p}}G(\mathbf{v},\mathbf{w})\overline{\rho}^{\mathbf{w}}(d\mathbf{v})$.
 2.
To study the extremal cases $\mu=0$ and $\mu=\infty$ in full generality, one would need to consider all the possible relationships between $\epsilon_{1}$ and $\epsilon_{2}$, not only the linear one as in the present article, but also, for example, of the type $\epsilon_{1}=\epsilon_{2}^{\alpha}$. In this case, we believe that the regime $\alpha<1$ converges to the same limit as taking the limit $\epsilon_{2}\to 0$ first, and the regime $\alpha>1$ corresponds to taking the limit $\epsilon_{1}\to 0$ first. The intermediate regime $\alpha=1$ seems to be the only one for which the limit cannot be obtained by combining classical averaging principles. Therefore, the present article is focused on this case, in which the averaged system depends explicitly on the scaling parameter μ. Moreover, in terms of applications, this parameter has a relatively easy interpretation as the ratio of timescales between intrinsic neuronal activity and typical stimulus timescales in a given situation. Although the zeroth order limit (i.e., the averaged system) seems to depend only on the position of α with respect to 1, it seems reasonable to expect that the fluctuations around the limit would depend on the precise value of α. This is a difficult question which may deserve further analysis.
 3. By a rescaling of the frozen process (6), one deduces the following scaling relationship: $\nu_{\mu}^{\mathbf{w}}[F,\mathbf{\Sigma}](t,d\mathbf{v})=\nu_{1}^{\mathbf{w}}[\frac{F}{\mu},\frac{\mathbf{\Sigma}}{\sqrt{\mu}}](\mu t,d\mathbf{v})$
 4.
It seems reasonable to expect that the above result is still valid when considering ergodic, but not necessarily periodic, time dependency of the function $F(\mathbf{v},\mathbf{w},\cdot )$. In equation (7), instead of integrating ${\nu}_{\mu}^{\mathbf{w}}(t,d\mathbf{v})$ over one period, one should integrate it with respect to an ergodic stationary measure. However, this extension requires nontrivial technical improvements of [5] which are beyond the scope of this paper.
2.3.1 Case of a fast linear SDE with periodic input
with initial condition $\mathbf{v}(0)={\mathbf{v}}_{0}\in {\mathbb{R}}^{p}$, and where $\mathbf{A}\in {\mathbb{R}}^{p\times p}$ is a matrix whose eigenvalues have positive real parts and $\mathbf{u}(\cdot )$ is a τperiodic function.
whose stationary distribution is known to be a centered Gaussian measure with covariance matrix Q solution of (9); see Chapter 3.2 of [24]. Notice that if A is self-adjoint with respect to $(\mathbf{\Sigma}\cdot\mathbf{\Sigma}^{\prime})^{-1}$ (i.e., $\mathbf{A}\cdot(\mathbf{\Sigma}\cdot\mathbf{\Sigma}^{\prime})=(\mathbf{\Sigma}\cdot\mathbf{\Sigma}^{\prime})\cdot\mathbf{A}^{\prime}$), then the solution is $\mathbf{Q}=\frac{\mathbf{A}^{-1}\cdot(\mathbf{\Sigma}\cdot\mathbf{\Sigma}^{\prime})}{2}=\frac{(\mathbf{\Sigma}\cdot\mathbf{\Sigma}^{\prime})\cdot{\mathbf{A}^{\prime}}^{-1}}{2}$, which will be used in Appendix B.2.
where $\mathcal{N}_{\mathbf{x},\mathbf{Q}}$ is the probability density function of the Gaussian law with mean $\mathbf{x}\in\mathbb{R}^{p}$ and covariance $\mathbf{Q}\in\mathbb{R}^{p\times p}$.
Therefore, due to the linearity of the fast SDE, the periodic system of measures ν is just a constant Gaussian distribution shifted by a periodic function of time $\mathbf{v}(t)$. In case G is quadratic in v, this remark implies that one can perform the integrals over time and over $\mathbb{R}^{p}$ in formula (10) independently (noting that the crossed term has a zero average). In this case, contributions from the periodic input and from the noise appear in the averaged vector field in an additive way.
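A small numerical check (ours) of the closed form above, assuming equation (9) is the Lyapunov equation $\mathbf{A}\mathbf{Q}+\mathbf{Q}\mathbf{A}'=\mathbf{\Sigma}\mathbf{\Sigma}'$ for the stationary covariance of $d\mathbf{v}=-\mathbf{A}\mathbf{v}\,dt+\mathbf{\Sigma}\,dB$: with a symmetric A and $\mathbf{\Sigma}=\sigma\,Id$ the self-adjointness condition holds, so Q should equal $\mathbf{A}^{-1}(\mathbf{\Sigma}\mathbf{\Sigma}')/2$. The matrices below are arbitrary illustrative choices.

```python
import numpy as np

def stationary_cov(A, S):
    """Solve A Q + Q A^T = S S^T for Q via Kronecker vectorization."""
    p = A.shape[0]
    I = np.eye(p)
    # vec(A Q) = (I kron A) vec(Q), vec(Q A^T) = (A kron I) vec(Q),
    # with column-major (Fortran-order) vectorization.
    M = np.kron(I, A) + np.kron(A, I)
    q = np.linalg.solve(M, (S @ S.T).flatten(order="F"))
    return q.reshape((p, p), order="F")

# Symmetric positive-definite A and Sigma = sigma * Id: the self-adjointness
# condition holds, so Q must equal A^{-1} (Sigma Sigma^T) / 2.
A = np.array([[2.0, 0.5], [0.5, 1.5]])
sigma = 0.3
S = sigma * np.eye(2)
Q = stationary_cov(A, S)
print(np.allclose(Q, np.linalg.inv(A) @ (S @ S.T) / 2))  # True
```

For non-self-adjoint A the closed form no longer applies, but the Kronecker solve above still yields the stationary covariance.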
As in Example 2.1 and as shown above, the behavior of this system when both $\epsilon_{1}$ and $\epsilon_{2}$ are small depends on the parameter μ defined in (5). More precisely, we have the following three regimes:

Regime 1: slow input: $\overline{G}_{0}(w)=-w+\frac{\sigma^{2}}{2}+\frac{1}{2}$.

Regime 2: fast input: $\overline{G}_{\infty}(w)=-w+\frac{\sigma^{2}}{2}$.

Regime 3: timescale matching: $\overline{G}_{\mu}(w)=-w+\frac{\sigma^{2}}{2}+\frac{1}{2(1+\mu^{2})}$.
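Assuming the example of this subsection is the scalar system $dv=\frac{1}{\epsilon_{1}}(-v+\sin(t/\epsilon_{2}))\,dt+\frac{\sigma}{\sqrt{\epsilon_{1}}}\,dB$, $dw=(-w+v^{2})\,dt$ (the displayed equations were lost in extraction), a direct simulation (ours, with illustrative parameters) can be compared against the regime-3 equilibrium value $\sigma^{2}/2+\frac{1}{2(1+\mu^{2})}$ at $\mu=\epsilon_{1}/\epsilon_{2}=1$, where the input and noise contributions add up:

```python
import math, random

def coupled(eps1=0.002, eps2=0.002, sigma=0.5, T=8.0, dt=2e-5, seed=1):
    """Euler-Maruyama for dv = (1/eps1)(-v + sin(t/eps2)) dt
    + (sigma/sqrt(eps1)) dB and dw = (-w + v^2) dt.  Returns w(T)."""
    rng = random.Random(seed)
    v, w, t = 0.0, 0.0, 0.0
    sq = sigma * math.sqrt(dt / eps1)
    for _ in range(int(T / dt)):
        v += (dt / eps1) * (-v + math.sin(t / eps2)) + sq * rng.gauss(0.0, 1.0)
        w += (-w + v * v) * dt
        t += dt
    return w

mu = 1.0                                       # eps1 / eps2
w_star = 0.5 ** 2 / 2 + 1 / (2 * (1 + mu ** 2))  # sigma^2/2 + 1/(2(1+mu^2)) = 0.375
w_T = coupled()
print(w_T, w_star)   # the simulated slow variable settles near w_star
```

Rerunning with $\epsilon_{2}$ much smaller or much larger than $\epsilon_{1}$ pushes the result towards the regime-2 and regime-1 values, respectively, illustrating the role of μ.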
2.4 Truncation and asymptotic wellposedness
In some cases, Assumptions 2.1–2.2 may not be satisfied on the entire phase space $\mathbb{R}^{p}\times\mathbb{R}^{q}$, but only on a subset. Such situations will appear in Section 3 when considering learning models. We introduce here a more refined set of assumptions ensuring that Theorem 2.2 still applies.
with $\epsilon>0$, $\sigma>0$, $l>0$, $\mu>0$.
For the fast drift $-(l-w)v$ to be non-explosive, it is necessary to have $w<l-\alpha$ with $\alpha>0$ for all time. The concern about this system comes from the fact that the slow variable w may reach l due to the fluctuations captured in the term $v^{2}$, for instance, if κ is not large enough. Such a system may have exponentially growing trajectories. However, we claim that for small enough ϵ, $w^{\epsilon}$ will remain close to its averaged limit w for a very long time, and if this limit remains below $l-\alpha$, then $w^{\epsilon}$ can be considered as well-posed in the asymptotic limit $\epsilon\to 0$. To make this argument more rigorous, we suggest the following definition.
 1.
a unique solution exists until a random time $\tau_{\epsilon}$,
 2. for all $T>0$, $\lim_{\epsilon\to 0}\mathbb{P}[\tau_{\epsilon}\ge T]=1$.
We give in the following proposition sufficient conditions for system (4) to be asymptotically well posed in probability and to satisfy conclusions of Theorem 2.2.
Let us introduce the following set of additional assumptions.
 (i) There exists $p>2$ such that for any $T>0$, $\sup_{\epsilon}\mathbb{E}[\sup_{0\le t\le T}\|\mathbf{v}_{t}^{\epsilon}\|^{p}+\|\mathbf{w}_{t}^{\epsilon}\|^{p}]<\infty$.
 (ii) For any $T>0$ and any bounded subset K of $\mathbb{R}^{q}$, $\sup_{\epsilon_{1}>0,\epsilon_{2}>0,\mathbf{w}\in K}\mathbb{E}[\sup_{0\le t\le T}\|G(\mathbf{v}_{t}^{\epsilon},\mathbf{w})\|^{2}]<\infty$.
Remark 2.2 This last set of assumptions will be satisfied in all the applications of Section 3, since we consider linear models with additive noise for the equation of v, implying that this variable is Gaussian, and the function G only involves quadratic moments of v; therefore, the moment conditions (i) and (ii) will be satisfied without any difficulty. Moreover, if one considers nonlinear models for the variable v, then the Gaussian property may be lost; however, adding a sigmoidal nonlinearity has, in general, the effect of bounding the dynamics, thus making these moment assumptions reasonable to check in most models of interest.
 1.
The functions F, G, Σ satisfy Assumptions 2.1–2.3 restricted to $\mathbb{R}^{p}\times\mathcal{E}$.
 2.
ℰ is invariant under the flow of ${\overline{G}}_{\mu}$, as defined in (7).
Then for any initial condition $\mathbf{w}_{0}\in\mathcal{E}$, system (4) is asymptotically well posed in probability and $\mathbf{w}^{\epsilon}$ satisfies the conclusion of Theorem 2.2.
Proof See Appendix A.2. □
that is, the subset ${\mathcal{E}}_{\alpha}$ is invariant under the flow of $\overline{G}$.
and that $w_{-}$ is stable whereas $w_{+}$ is unstable. Thus, if $w(0)<l-\alpha$ with $\alpha=l-w_{+}>0$, then $w(t)<l-\alpha$ for all $t>0$. In fact, the invariance property is true for all $\alpha\in\,]l-w_{+},l-w_{-}[$.
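The display defining $w_{\pm}$ was lost to extraction. Assuming the averaged slow equation of this example is $\dot{w}=\overline{G}(w)=-\kappa w+\frac{\sigma^{2}}{2(l-w)}$ for $w<l$ (the stationary variance of the frozen fast variable being $\sigma^{2}/(2(l-w))$), the equilibria are the roots of $2\kappa w^{2}-2\kappa lw+\sigma^{2}=0$, and their stability can be checked numerically (parameter values illustrative):

```python
import math

def equilibria(l=1.0, kappa=4.0, sigma=0.5):
    """Roots of -kappa*w + sigma^2/(2(l - w)) = 0,
    i.e. of 2*kappa*w^2 - 2*kappa*l*w + sigma^2 = 0."""
    r = math.sqrt((2 * kappa * l) ** 2 - 8 * kappa * sigma ** 2)
    return (2 * kappa * l - r) / (4 * kappa), (2 * kappa * l + r) / (4 * kappa)

def Gbar(w, l=1.0, kappa=4.0, sigma=0.5):
    """Assumed averaged vector field for w < l."""
    return -kappa * w + sigma ** 2 / (2 * (l - w))

w_minus, w_plus = equilibria()
eps = 1e-4
# Gbar changes sign from + to - through w_minus (stable attractor),
# and from - to + through w_plus (unstable threshold below l).
print(Gbar(w_minus - eps) > 0 > Gbar(w_minus + eps))  # True
print(Gbar(w_plus - eps) < 0 < Gbar(w_plus + eps))    # True
```

This matches the statement above: trajectories started below $w_{+}$ stay trapped and converge to $w_{-}$, which is why the truncated system is asymptotically well posed.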
3 Averaging learning neural networks
In this section, we apply the temporal averaging methods derived in Section 2 to models of unsupervised learning neural networks. First, we design a generic learning model and show that one can define formally an averaged system with equation (7). However, going beyond the mere definition of the averaged system seems very difficult, and we only manage to get explicit results for simple systems where the fast activity dynamics is linear. In the three last subsections, we push the analysis through for three examples of increasing complexity.
In the following, we always consider that the initial connectivity is 0. This is an arbitrary choice but without consequences, because we focus on the regime where there is a single globally stable equilibrium point (see Section 3.2.3).
3.1 A generic learning neural network
We now introduce a large class of stochastic neuronal networks with learning models. They are defined as coupled systems describing the simultaneous evolution of the activity of $n\in \mathbb{N}$ neurons and the connectivity between them. We define $\mathbf{v}\in {\mathbb{R}}^{n}$, the activity field of the network, and $\mathbf{W}\in {\mathbb{R}}^{n\times n}$, the connectivity matrix.
where the function ${f}_{i}$ characterizes the intrinsic nonlinear dynamical behavior of neuron i and ${\mathbf{u}}_{i}$ is the input received by neuron i. The stochastic term $\mathbf{\Sigma}\cdot dB_{i}(t)$ is added to account for internal sources of noise. In terms of notations, $(B(t))_{t\ge 0}$ is a standard n-dimensional Brownian motion, Σ is an $n\times n$ matrix, possibly function of v or other variables, and $\mathbf{\Sigma}\cdot dB_{i}(t)$ denotes the i-th component of the vector $\mathbf{\Sigma}\cdot dB(t)$.
where $\mathcal{S}:\mathbb{R}\to \mathbb{R}$ and ℋ is a function taking the history of ${\mathbf{v}}_{i}$ and ${\mathbf{v}}_{j}$ and returning a real for each time t (to take convolutions into account). In practical cases, they are often taken to be sigmoidal functions. We abusively redefine $\mathcal{S}$ and ℋ as vector valued operators corresponding to the elementwise application of their real counterparts. We also define the function $\mathcal{F}:{\mathbb{R}}^{n}\to {\mathbb{R}}^{n}$ such that $\mathcal{F}{(\mathbf{v})}_{i}={f}_{i}({\mathbf{v}}_{i})$. Together with a slow generic learning rule, this leads to defining a stochastic learning model as the following system of SDEs.
Before applying the general theory of Section 2, let us make several comments about this generic model of neural network with learning. This model is a nonautonomous, stochastic, nonlinear slowfast system.
In order to apply Theorem 2.2, one needs Assumptions 2.1, 2.2, and 2.3 to be satisfied, restricting the space of possible functions $\mathcal{S}$, ℋ, ℱ, Σ, and G. In particular, Assumption 2.2(ii) seems rather restrictive since it excludes systems with multiple equilibria and suggests that the general theory is more suited to deal with rate-based networks. However, one should keep in mind that these assumptions are only sufficient, and that the double averaging principle may work as well in systems which do not satisfy readily those assumptions.
As we will show in Section 3.3, a particular form of history-dependence can be taken into account, to a certain extent. Indeed, for instance, if the function ℱ is actually a functional of the past trajectory of variable $\mathbf{v}^{\epsilon}$ which can be expressed as the solution of an additional SDE, then it may be possible to include a certain form of history-dependence. However, purely time-delayed systems do not enter the scope of this theory, although it might be possible to derive an analogous averaging method in this framework.
The noise term can be purely additive or set by a particular function $\mathbf{\Sigma}(\mathbf{v},\mathbf{W})$ as long as it satisfies Assumption 2.2(i), meaning that it must be uniformly non-degenerate.
In the following subsection, we apply the averaging theory to various combinations of neuronal network models, embodied by choices of functions $\mathcal{S}$, ℋ, ℱ, Σ, and various learning rules, embodied by a choice of the function G. We will also analyze the obtained averaged system, describing the slow dynamics of the connectivity matrix in the limit of perfect timescale separation and, in particular, study the convergence of this averaged system to an equilibrium point.
3.2 Symmetric Hebbian learning
One of the simplest, yet nontrivial, stochastic learning models is obtained when considering

A linear model for neuronal activity, namely ${f}_{i}({\mathbf{v}}_{i})=-l{\mathbf{v}}_{i}$ with l a positive constant.

A linear model for the synaptic transmission, namely $\mathcal{S}({\mathbf{v}}_{i})={\mathbf{v}}_{i}$ and $\mathcal{H}({\mathbf{v}}_{i},{\mathbf{v}}_{j})={\mathbf{v}}_{j}$.

A constant diffusion matrix Σ (additive noise) proportional to the identity $\mathbf{\Sigma}=\sigma Id$ (spatially uncorrelated noise).

A Hebbian learning rule with linear decay, namely ${G}_{ij}(\mathbf{W},\mathbf{v})=-\kappa {\mathbf{W}}_{ij}+{\mathbf{v}}_{i}{\mathbf{v}}_{j}$. The second term corresponds to the tensor product: ${\{\mathbf{v}\otimes \mathbf{v}\}}_{ij}={\mathbf{v}}_{i}{\mathbf{v}}_{j}$.
where neurons are assumed to have the same decay constant, $\mathbf{L}=l{I}_{d}$; u is a continuous periodic input (it replaces ${\mathbf{u}}^{\mathrm{ext}}$ of the previous section); $\sigma ,{ϵ}_{1},{ϵ}_{2},\kappa \in {\mathbb{R}}_{+}$ with ${ϵ}_{1},{ϵ}_{2}\ll 1$; and $B(t)$ is an n-dimensional Brownian motion.
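The model just assembled can be sketched numerically. Below is a minimal Euler–Maruyama simulation of the linear Hebbian system; the parameter values, the input u, and the exact placement of the timescale factors ϵ₁, ϵ₂ are illustrative assumptions, not the paper's system (12) verbatim.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
l, kappa, sigma = 1.0, 0.5, 0.1     # decay, forgetting rate, noise level (illustrative)
eps1, eps2 = 0.05, 0.005            # fast activity vs. slow learning (illustrative)

def u(t):
    # continuous periodic input (hypothetical example)
    return np.array([np.sin(t), np.sin(t + 1.0), 0.3 * np.cos(t)])

dt, T = 1e-3, 20.0
v = np.zeros(n)
W = np.zeros((n, n))
for k in range(int(T / dt)):
    t = k * dt
    # fast linear activity: dv = ((W - L) v + u) dt/eps1 + sigma dB/sqrt(eps1)
    v += ((W - l * np.eye(n)) @ v + u(t)) * dt / eps1 \
         + sigma * np.sqrt(dt / eps1) * rng.standard_normal(n)
    # slow Hebbian learning with linear decay: dW = eps2 (-kappa W + v v') dt
    W += eps2 * (-kappa * W + np.outer(v, v)) * dt
```

Since both v⊗v and the decay term are symmetric, W remains exactly symmetric along the trajectory, which is the structural property exploited below for the noise term.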
The first question that arises concerns the well-posedness of the system: On what interval are the solutions of system (12) defined? Do they explode in finite time? At first sight, a runaway of the solution seems possible if the largest real part among the eigenvalues of W grows beyond l. It turns out this scenario can be avoided if the following assumption linking the parameters of the system is satisfied.
where ${u}_{m}={sup}_{t\in {\mathbb{R}}_{+}}{\parallel \mathbf{u}(t)\parallel}_{2}$.
It amounts to making sure that the external (i.e., ${u}_{m}$) and internal (i.e., σ) excitations are not too large compared to the decay mechanisms (represented by κ and l). Note that if $p\in \phantom{\rule{0.2em}{0ex}}]0,1[$, ${u}_{m}$ and d are fixed, it suffices to increase κ or l for this assumption to be satisfied.
is invariant under the flow of the averaged system $\overline{G}$, where $\mathbf{W}\ge 0$ means that W is positive semidefinite and $\mathbf{W}<p\mathbf{L}$ means that $p\mathbf{L}-\mathbf{W}$ is positive definite. Therefore, the averaged system is defined and bounded on ${\mathbb{R}}_{+}$. The slow/fast system being asymptotically close to the averaged system, it is therefore asymptotically well defined in probability. This is summarized in the following theorem.
where $\overline{\mathbf{v}}(t)$ is the $\frac{\tau}{\mu}$-periodic attractor of $\frac{d\overline{\mathbf{v}}}{dt}=(\mathbf{W}-\mathbf{L})\cdot \overline{\mathbf{v}}+\mathbf{u}(\mu t)$, where $\mathbf{W}\in {\mathbb{R}}^{n\times n}$ is held fixed.
Proof See Theorem B.1 in Appendix B.2. □
In the following, we focus on the averaged system described by (13). Its right-hand side is made of three terms: a linear homogeneous decay, a correlation term, and a noise term. The last two terms are made explicit in the following.
3.2.1 Noise term
As seen in Section 2, in the linear case, the noise term Q is the unique solution of the Lyapunov equation (9) with $\mathbf{A}=\mathbf{W}-\mathbf{L}$ and $\mathrm{\Sigma}=\sigma Id$. Because the noise is spatially uncorrelated and identical for each neuron, and because the connectivity is symmetric, one can check that $\mathbf{Q}=\frac{{\sigma}^{2}}{2}{(\mathbf{L}-\mathbf{W})}^{-1}$ is the unique solution of this equation.
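The closed form for Q can be checked directly against the Lyapunov equation. The sketch below, with arbitrary illustrative parameters, verifies that $\frac{{\sigma}^{2}}{2}{(\mathbf{L}-\mathbf{W})}^{-1}$ gives a zero residual in $\mathbf{A}\mathbf{Q}+\mathbf{Q}{\mathbf{A}}^{\prime}+{\sigma}^{2}Id=0$ when W is symmetric with spectral radius below l.

```python
import numpy as np

rng = np.random.default_rng(1)
n, l, sigma = 4, 2.0, 0.3
M = rng.standard_normal((n, n))
W = 0.1 * (M + M.T)              # small symmetric connectivity, spectral radius < l
A = W - l * np.eye(n)            # stable drift matrix of the fast dynamics

# closed-form solution claimed in the text
Q = 0.5 * sigma**2 * np.linalg.inv(l * np.eye(n) - W)

# residual of the Lyapunov equation  A Q + Q A' + sigma^2 Id = 0
residual = A @ Q + Q @ A.T + sigma**2 * np.eye(n)
```

The cancellation relies on A being symmetric, hence commuting with its inverse; for non-symmetric connectivity the Lyapunov equation must be solved numerically, as discussed in Section 3.4.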
In more complicated cases, the computation of this term appears to be much more difficult as we will see in Section 3.4.
3.2.2 Correlation term
This term corresponds to the autocorrelation of the neuronal activity. It is only implicitly defined; this section is therefore devoted to finding an explicit form depending only on the parameters l, μ, τ, the connectivity W, and the inputs u. One can in fact expand this term with respect to a small parameter, in a weakly connected expansion: most terms vanish if the connectivity W is small compared to the strength l of the intrinsic decaying dynamics of the neurons.
With this notation, it is natural to think of v as a ‘semi-continuous matrix’ in ${\mathbb{R}}^{n\times [0,\frac{\tau}{\mu}[}$. Hence, the operator ‘⋅’ can be thought of as a matrix multiplication. Similarly, the transpose operator turns a matrix $\overline{\mathbf{v}}\in {\mathbb{R}}^{n\times [0,\frac{\tau}{\mu}[}$ into a matrix ${\overline{\mathbf{v}}}^{\prime}\in {\mathbb{R}}^{[0,\frac{\tau}{\mu}[\times n}$. See Appendix B.1 for details about these notations.
It is common knowledge, see [17] for instance, that this term gathers information about the correlation of the inputs. Indeed, if we assume that the input is sufficiently slow, then $\overline{\mathbf{v}}$ has enough time to converge to its equilibrium for each $t\in [0,+\mathrm{\infty}[$, so that at first order $\overline{\mathbf{v}}(t)\simeq -{(\mathbf{W}-\mathbf{L})}^{-1}\cdot \mathbf{u}(t)$. This leads to $\overline{\mathbf{v}}\cdot {\overline{\mathbf{v}}}^{\prime}\simeq {(\mathbf{W}-\mathbf{L})}^{-1}\cdot \mathbf{u}\cdot {\mathbf{u}}^{\prime}\cdot {({\mathbf{W}}^{\prime}-\mathbf{L})}^{-1}$. In the weakly connected regime, one can assume that $\mathbf{W}-\mathbf{L}\simeq -\mathbf{L}$, leading to $\overline{\mathbf{v}}\cdot {\overline{\mathbf{v}}}^{\prime}\simeq \frac{1}{{l}^{2}}\mathbf{u}\cdot {\mathbf{u}}^{\prime}$, which is the autocorrelation of the inputs.
Actually, without the assumption of a slow input, lagged correlations of the input appear in the averaged system. Before giving the expression of these temporal correlations, we need to introduce some notations. First, define the convolution filter ${g}_{l/\mu}:t\mapsto \frac{l}{\mu}{e}^{-\frac{l}{\mu}t}H(t)$, where H is the Heaviside function. This family of functions is displayed for different values of $\frac{l}{\mu}$ in Figure 4(a). Note that ${g}_{l/\mu}\to {\delta}_{0}$ when $\frac{l}{\mu}\to +\mathrm{\infty}$, where ${\delta}_{0}$ is the Dirac distribution centered at the origin. In this asymptotic regime, the convolution filter and its iterates ${g}_{l/\mu}\ast \cdots \ast {g}_{l/\mu}$ are equal to the identity.
Observe that ${g}_{l/\mu}^{(k+1)}(t)=\frac{{l}^{k+1}}{{\mu}^{k+1}k!}{t}^{k}{e}^{-\frac{l}{\mu}t}H(t)$. Therefore, ${\parallel {g}_{l/\mu}^{(k+1)}\parallel}_{1}=\frac{\mathrm{\Gamma}(k+1)}{k!}=1$. Thanks to Young’s inequality for convolutions, which says that ${\parallel \mathbf{u}\ast {g}_{l/\mu}^{(k)}\parallel}_{2}\le {\parallel \mathbf{u}\parallel}_{2}{\parallel {g}_{l/\mu}^{(k)}\parallel}_{1}$, it can be proved that ${\parallel {\mathbf{C}}^{k,q}\parallel}_{2}\le 1$.
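These properties of the iterated filters can be checked numerically. The sketch below, with an arbitrary illustrative value for the ratio l/μ, compares a discretized k-fold convolution with the closed form above and confirms the unit L1 norms.

```python
import numpy as np
from math import factorial

a = 2.0                               # stands for the ratio l/mu (illustrative value)
t = np.arange(0, 16, 2e-3)
dt = t[1] - t[0]

g = a * np.exp(-a * t)                # g_{l/mu} restricted to t >= 0
conv = g.copy()
for k in range(1, 4):
    # numerical (k+1)-fold convolution vs. closed form a^{k+1} t^k e^{-a t} / k!
    conv = np.convolve(conv, g)[: t.size] * dt
    closed = a ** (k + 1) * t ** k * np.exp(-a * t) / factorial(k)
    assert np.allclose(conv, closed, atol=2e-2)
    assert abs(conv.sum() * dt - 1.0) < 2e-2   # ||g^{(k+1)}||_1 = 1
```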
We intend to express the correlation term as an infinite converging sum involving these filtered correlations. In this perspective, we use a result we have proved in [25] to write the solution of a general class of nonautonomous linear systems (e.g., $\frac{d\overline{\mathbf{v}}}{dt}=(\mathbf{W}-\mathbf{L})\cdot \overline{\mathbf{v}}+\mathbf{u}(t)$) as an infinite sum, in the case $\mu =1$.
where ${g}_{l}:t\mapsto l{e}^{-lt}H(t)$.
Proof See Lemma B.2 in Appendix B.2. □
This is a decomposition of the solution of a linear differential system on a basis of operators in which the spatial and temporal parts are decoupled. This step, important for a detailed study of the averaged equation, cannot be achieved easily in models with nonlinear activity. Everything is now set up to introduce the explicit expansion of the correlation term used in what follows. Indeed, we use the previous result to rewrite the correlation term as follows.
Proof See Theorem B.3 in Appendix B.2. □
This infinite sum of convolved filters is reminiscent of a property of Hawkes processes that have a linear input/output gain [26].
The speed of the inputs, characterized by μ, only appears in the temporal profiles ${g}_{l/\mu}^{(k)}\ast {{g}_{l/\mu}^{\prime}}^{(q)}$. In particular, if the inputs are much slower than the neuronal activity timescale, i.e., $\mu =0$, then ${g}_{+\mathrm{\infty}}={\delta}_{0}$ and $\mathbf{u}\ast {g}_{+\mathrm{\infty}}=\mathbf{u}$. Therefore, ${\mathbf{C}}^{k,q}={\mathbf{C}}^{0,0}$ and the sums in the formula of Property 3.3 are separable, leading to $\overline{\mathbf{v}}\cdot {\overline{\mathbf{v}}}^{\prime}={(\mathbf{L}-\mathbf{W})}^{-1}\cdot \mathbf{u}\cdot {\mathbf{u}}^{\prime}\cdot {(\mathbf{L}-{\mathbf{W}}^{\prime})}^{-1}$, which corresponds to the heuristic result previously explained.
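The way μ reveals temporal correlations can be illustrated with a small numerical sketch. Below, a hypothetical two-channel input is used in which the first channel leads the second by a quarter period; a lagged correlation of the type appearing in the ${\mathbf{C}}^{k,q}$ blocks, namely $\u3008(\mathbf{u}\ast {g}_{l/\mu})\cdot {\mathbf{u}}^{\prime}\u3009$, is computed by FFT filtering. It vanishes as μ → 0 and becomes visible for μ of order one.

```python
import numpy as np

l = 1.0
t = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
u = np.stack([np.sin(t), np.cos(t)])      # second channel lags the first by pi/2

def lagged_corr(mu):
    # <(u * g_{l/mu}) u'>: exponential filter applied via its Fourier transform
    w = np.fft.fftfreq(t.size, d=t[1] - t[0]) * 2 * np.pi
    H = 1.0 / (1.0 + 1j * w * mu / l)      # Fourier transform of g_{l/mu}
    u_f = np.fft.ifft(np.fft.fft(u, axis=1) * H, axis=1).real
    return u_f @ u.T / t.size

slow, fast = lagged_corr(1e-3), lagged_corr(1.0)
# slow inputs: only spatial correlations (here the channels are orthogonal);
# faster inputs: the temporal lag between the channels shows up off-diagonal
```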
3.2.3 Global stability of the equilibrium point
Now that we have found an explicit formulation for the averaged system, it is natural to study its dynamics. Actually, we prove in the following that if the connectivity W is kept smaller than $\frac{l}{3}$, i.e., Assumption 3.1 is verified with $p\le \frac{1}{3}$, then the dynamics is trivial: the system converges to a single equilibrium point. Indeed, under the previous assumption, the system can be written $\overline{G}(\mathbf{W})=-\kappa \mathbf{W}+F(\mathbf{W})$, where F is a contraction operator on ${E}_{\frac{1}{3}}$. Therefore, one can prove the uniqueness of the fixed point with the Banach fixed-point argument and exhibit an energy function for the system.
Theorem 3.4 If Assumption 3.1 is verified for $p\le \frac{1}{3}$, then there is a unique equilibrium point in the invariant subset ${E}_{p}$ which is globally asymptotically stable.
Proof See Theorem B.4 in Appendix B.2. □
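The fixed point can be located by exactly this Banach–Picard iteration. The sketch below uses the slow-input (μ → 0) approximation of the averaged right-hand side with illustrative parameters in the contraction regime; the precise averaged vector field (13) is richer than this first-order surrogate.

```python
import numpy as np

n, l, kappa, sigma = 3, 4.0, 2.0, 0.2
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
u = np.stack([np.sin(t), np.sin(t + 1.0), np.cos(2 * t)])   # periodic input (n, T)
C_u = u @ u.T / t.size                                       # input autocorrelation

def G_bar(W):
    # slow-input approximation of the averaged right-hand side: decay,
    # correlation term (L-W)^{-1} <u u'> (L-W)^{-1}, and noise term sigma^2/2 (L-W)^{-1}
    R = np.linalg.inv(l * np.eye(n) - W)
    return -kappa * W + R @ C_u @ R.T + 0.5 * sigma**2 * R

# Banach-Picard iteration on the equilibrium condition G_bar(W*) = 0
W = np.zeros((n, n))
for _ in range(200):
    W = (G_bar(W) + kappa * W) / kappa    # W <- F(W) / kappa
```

With these parameters the map is a strong contraction, so the iterates converge to the unique equilibrium, which is symmetric and positive definite as predicted.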
The uniqueness of the equilibrium point means that the ‘knowledge’ of the network about its environment (corresponding, by hypothesis, to the connectivity) is eventually unique: for a given input and any initial condition, the network always converges to the same ‘knowledge’ or ‘understanding’ of this input.
3.2.4 Explicit expansion of the equilibrium point
When the network is weakly connected, the higher-order terms in expansion (15) may be neglected. In this section, we follow this idea and find an explicit expansion for the equilibrium connectivity where the strength of the connectivity is the small parameter enabling the expansion. The weaker the connectivity, the more terms can be neglected in the expansion.
In the asymptotic regime $\tilde{p}\to 0$, we have $\frac{\mathbf{W}}{\tilde{p}l}=\mathcal{O}(1)$, so $\tilde{p}$ is the ‘small’ parameter needed to perform the expansion. We also define $\lambda =\frac{{\sigma}^{2}l}{2{u}_{m}^{2}}$, which encodes the way $\tilde{p}$ converges to zero; in fact, it is the ratio of the two terms of $\tilde{p}$.
With these, we can prove that the equilibrium connectivity ${\mathbf{W}}^{\ast}$ has the following asymptotic expansion in $\tilde{p}$.
Proof See Theorem B.5 in Appendix B.2. □
Not only is the spatial correlation encoded in the weights; there is also some information about the temporal correlation: two successive but spatially orthogonal events occurring in the inputs will be wired into the connectivity although they do not appear in the spatial correlations; see Figure 3 for an example.
3.3 Trace learning: band-pass filter effect
In this section, we study an improvement of the learning model obtained by adding a certain form of history dependence to the system, and we explain how it changes the results of the previous section. Given that Theorem 2.2 only applies to an instantaneous process, we can only treat history-dependent systems which can be reformulated as instantaneous processes. This class of systems nevertheless contains models which are biologically more relevant than the previous one and which exhibit interesting additional functional behaviors. In particular, it covers the following features:

Trace learning.
where ∗ denotes convolution and ${g}_{1}:t\in \mathbb{R}\mapsto {\beta}_{1}{e}^{-{\beta}_{1}t}H(t)$. Rolls and Deco show numerically in [15] that the temporal convolution, leading to spatio-temporal learning, makes it possible to perform invariant object recognition. Besides, trace learning appears to be the symmetric part of the biological STDP rule that we detail in Section 3.4.

Damped oscillatory neurons.

Dynamic synapses.
The electrochemical process of synaptic communication is very complicated and nonlinear. Yet, one of the features of synaptic communication we can take into account in a linear model is the shape of the postsynaptic potentials. In this section, we consider that each synapse is a linear filter whose finite impulse response (i.e., the postsynaptic potential) has the shape ${g}_{3}(t)={\beta}_{3}{e}^{-{\beta}_{3}t}H(t)$. This is a common assumption which, for instance, is at the basis of traditional rate-based models; see Chapter 11 of [7].
where the notations are the same as in Section 3.2. The behavior of a single neuron is oscillatory damped if $\mathrm{\Delta}=\sqrt{1-4\frac{l}{\beta}}$ is a pure imaginary number, i.e., $4l>\beta$. This is the regime on which we focus. The Hebbian linear case of Section 3.2 corresponds to $\beta =+\mathrm{\infty}$ in this delayed system.
This trick makes it possible to deal with some history-dependent processes where the dependence on the past is exponential.
where $v:t\mapsto \frac{l}{\mu \mathrm{\Delta}}({e}^{-\frac{\beta}{2\mu}(1-\mathrm{\Delta})t}-{e}^{-\frac{\beta}{2\mu}(1+\mathrm{\Delta})t})H(t)$. Observe that applying Young’s inequality for convolutions leads to ${\parallel {\tilde{\mathbf{C}}}^{k,q}\parallel}_{2}\le 1$. Lemma C.3 shows that ${v}^{(k)}={v}_{k}:t\mapsto \frac{\sqrt{\pi \beta}}{k!}{e}^{-\frac{\beta}{2}t}{(\frac{t}{\mathrm{\Delta}})}^{k+\frac{1}{2}}{J}_{k+\frac{1}{2}}(\frac{\beta \mathrm{\Delta}}{2}t)H(t)$, where ${J}_{n}(z)$ is the Bessel function of the first kind. The value of the L1 norm of v is computed in Appendix C.3: ${\parallel v\parallel}_{1}=coth(\frac{\pi}{2|\mathrm{\Delta}|})$ if Δ is a pure imaginary number, and ${\parallel v\parallel}_{1}=1$ otherwise.
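The shape of v and the value of its L1 norm can be checked numerically. The sketch below uses β = l = μ = 1 (illustrative choices, with 4l > β so that Δ is pure imaginary): the impulse response oscillates while decaying, and its numerical L1 norm matches $coth(\frac{\pi}{2|\mathrm{\Delta}|})$.

```python
import numpy as np

l, beta, mu = 1.0, 1.0, 1.0                    # 4l > beta: damped oscillatory regime
Delta = np.sqrt(complex(1 - 4 * l / beta))     # = i*sqrt(3), pure imaginary

t = np.arange(0, 40, 1e-3)
dt = t[1] - t[0]
v = (l / (mu * Delta)) * (np.exp(-beta / (2 * mu) * (1 - Delta) * t)
                          - np.exp(-beta / (2 * mu) * (1 + Delta) * t))
v = v.real                 # the two complex exponentials combine into a real signal

n_sign_changes = int((np.diff(np.sign(v[1:])) != 0).sum())   # oscillation check
norm1 = np.abs(v).sum() * dt                                 # numerical L1 norm
```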
As before, the existence and uniqueness of a globally attractive equilibrium point is guaranteed if Assumption 3.1 is verified for $p\le \frac{1}{2{\parallel v\parallel}_{1}^{3}+1}$; see Theorem B.9.
3.4 Asymmetric ‘STDP’ learning with correlated noise
where the nonlinear intrinsic dynamics of the neurons is represented by f. Indeed, the term ${\{{a}_{+}{\mathbf{v}}^{ϵ}(t)\otimes ({\mathbf{v}}^{ϵ}\ast {g}_{\gamma})(t)\}}_{ij}={a}_{+}{\mathbf{v}}_{i}^{ϵ}(t){({\mathbf{v}}^{ϵ}\ast {g}_{\gamma})}_{j}(t)$ is negligible when the neuron is quiet and maximal at the top of the spikes emitted by neuron i. Therefore, it records the value of the presynaptic membrane potential, weighted by the function ${g}_{\gamma}$, when the postsynaptic neuron spikes. This accounts for the positive part of Figure 6. Similarly, the negative part corresponds to $-{a}_{-}({\mathbf{v}}^{ϵ}\ast {g}_{\gamma})\otimes {\mathbf{v}}^{ϵ}$.
Actually, this formulation is valid for any nonlinear activity with correlated noise. However, studying the role of STDP in spiking networks is beyond the scope of this paper since we are only able to obtain explicit results for models with linear activity. Therefore, we will assume that the activity is linear while keeping the learning rule as derived in the spiking case, i.e., we assume $f(\mathbf{v})=-l\mathbf{v}=-\mathbf{L}\cdot \mathbf{v}$ in the system above.
In this framework, the method exposed in Section 3.2 holds with small changes. First, the wellposedness assumption becomes
where ${s}^{2}$ is the maximal eigenvalue of $\mathbf{\Sigma}\cdot {\mathbf{\Sigma}}^{\prime}$.
According to Theorem B.13, the system is also globally asymptotically convergent to a single equilibrium, which we study in the following.
Therefore, the STDP learning rule simply adds an antisymmetric part to the final connectivity while keeping the symmetric part as in the Hebbian case. Besides, the antisymmetric part corresponds to computing the cross-correlation of the inputs with their derivative. For higher-order terms, this remains true although the temporal profiles differ from those at first order. These results are in line with previous works underlining the similarity between STDP learning and differential Hebbian learning, where $G(\mathbf{v})\sim \dot{\mathbf{v}}\otimes \mathbf{v}$; see [29].
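This decomposition can be illustrated at first order: for a periodic input, the autocorrelation $\u3008\mathbf{u}\otimes \mathbf{u}\u3009$ is symmetric, while the cross-correlation with the derivative $\u3008\mathbf{u}\otimes \dot{\mathbf{u}}\u3009$ is antisymmetric (by integration by parts over one period). A minimal check with a hypothetical three-channel input:

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
u = np.stack([np.sin(t), np.sin(t + 0.7), np.cos(2 * t)])  # periodic inputs (n, T)
du = np.gradient(u, t, axis=1)                              # time derivative

C_sym = u @ u.T / t.size     # input autocorrelation  -> symmetric part of W*
C_skew = u @ du.T / t.size   # cross-correlation with derivative -> antisymmetric part
```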
4 Discussion
We have applied temporal averaging methods on slow/fast systems modeling the learning mechanisms occurring in linear stochastic neural networks. When we make sure the connectivity remains small, the dynamics of the averaged system appears to be simple: the connectivity always converges to a unique equilibrium point. Then, we performed a weakly connected expansion of this final connectivity, whose terms are combinations of the noise covariance and the lagged correlations of the inputs: the first-order term is simply the sum of the noise covariance and the correlation of the inputs.

As opposed to the former input/output vision of neurons, we have considered the membrane potential v to be the solution of a dynamical system. The consequence of this modeling choice is that not only the spatial correlations but also the temporal correlations are learned. Because we take the transients into account, the activity never converges; rather, it lives between the representations of the inputs and therefore links concepts together.
The parameter μ is the ratio of the timescales of the inputs and the activity variable. If $\mu =0$, the inputs are infinitely slow and the activity variable has enough time to converge towards its equilibrium point. When μ grows, the dynamics becomes more and more transient: it has no time to converge. Therefore, if the inputs are extremely slow, the network only learns the spatial correlations of the inputs; if the inputs are fast, it also learns the temporal correlations. This is illustrated in Figure 3.
This suggests that learning associations between concepts, for instance, learning words in two different languages, may be optimized by presenting the two words to be associated cyclically with a certain frequency. Indeed, increasing the frequency (with a fixed duration of exposure to each word) amounts to increasing μ. The network then learns the temporal correlations of the inputs better and thus strengthens the link between the two concepts.

According to the resonator neuron model [30], Section 3.3 suggests that neurons and synapses with a preferred oscillation frequency will preferentially extract the correlation of the inputs filtered by a band-pass filter centered on the intrinsic frequency of the neurons.
It has been observed that the auditory cortex is tonotopically organized, i.e., the neurons are arranged by frequency [31]. It is traditionally thought that this is achieved thanks to a particular connectivity between the neurons. We exhibit here another mechanism to select this frequency which is based solely on the parameters of the neurons: a network of many different neurons whose intrinsic frequencies are uniformly spread is likely to perform a Fourier-like operation, decomposing the signal by frequency.
In particular, this emphasizes the fact that the network does not treat space and time similarly. Roughly speaking, associating several pictures and associating several sounds are therefore two different tasks which involve different mechanisms.

In this paper, the original hierarchy of the network has been neglected: the network is simply made of neurons which receive external inputs. A natural way to include a hierarchical structure (with layers, for instance) without changing the setup of the paper is therefore to remove the external input to some neurons. However, according to Theorem 3.5 (and its extensions, Theorems B.10 and B.14), these neurons will then be disconnected from the others at first order (if the noise is spatially uncorrelated): linear activity implies that the high-level neurons disconnect from the others, which is a problem. However, one can observe that the second-order term in Theorem 3.5 is not null if the noise matrix Σ is not diagonal. It is thus the noise shared between neurons which recruits the high-level neurons to build connections from and to them.
It is likely that a significant part of the noise in the brain is locally induced, e.g., local perturbations due to blood vessels or local chemical signals. In a way, neurons close to each other share their noise, and it seems reasonable to choose the matrix Σ so that it reflects the biological proximity between neurons. In fact, Σ specifies the original structure of the network and makes it possible for nearby neurons to recruit each other.
Another idea to address hierarchy in networks would be to replace the synaptic decay term $\kappa \mathbf{W}$ by another homeostatic term [32] which would enforce the emergence of a strong hierarchical structure.

It is also interesting to observe that most of the noise contribution to the equilibrium connectivity for STDP learning (see Theorem B.14) vanishes if the learning is purely skew-symmetric, i.e., ${a}_{+}={a}_{-}$. In fact, it is only the symmetric part of the learning, i.e., the Hebbian mechanism, that writes the noise into the connectivity.

We have shown that the STDP rule for spiking neurons has a natural analog in our case of linear neurons. This asymmetric rule converges to a final connectivity which can be decomposed into symmetric and skew-symmetric parts. The former is similar to the symmetric Hebbian learning case, emphasizing that STDP is nothing more than an asymmetric Hebbian-like learning rule. The skew-symmetric part of the final connectivity is the cross-correlation between the inputs and their derivatives.
In a way, the noise terms generate random patterns which tend to be forgotten by the network due to the leak term $-l\mathbf{v}$. The only drift is due to $\zeta (\mathbf{u})\cdot {\mathbf{u}}^{\prime}\cdot \mathbf{v}\simeq {\mathbf{E}}_{\u3008\mathbf{v},\mathbf{u}\u3009}(\zeta (\mathbf{u}))$, which is the expectation of the vector field defining the dynamics of the inputs, with respect to a measure given by the scalar product between the activity variable and the inputs. In other words, if the activity is close to the inputs at a given time ${t}^{\ast}\in {\mathbb{R}}_{+}$, i.e., $\u3008\mathbf{v},\mathbf{u}({t}^{\ast})\u3009$ is large, then the activity will evolve in the same direction as this input would have done. The network has modeled the temporal structure of the inputs: the spontaneous activity predicts and replays the inputs the network has learned.
There are still numerous challenges to pursue in this direction.
First, it seems natural to look for an application of these mathematical methods to more realistic models. The two main limitations of the class of models we study in Section 3 are (i) the activity variable is governed by a linear equation and (ii) all the neurons are assumed to be identical. The mathematical analysis in this paper was made possible by the assumption that the neural network has a linear dynamics, which does not reflect the intrinsic nonlinear behavior of the neurons. However, the cornerstone of the application of temporal averaging methods to a learning neural network, namely Property 3.3, is similar to the behavior of Poisson processes [26] which has useful applications for learning neural networks [19, 20]. This suggests that the dynamics studied in this paper might be quite similar to some nonlinear network models. Studying more rigorously the extension of the present theory to nonlinear and heterogeneous models is the next step toward a better modeling of biologically plausible neural networks.
Second, we have shown that the equilibrium connectivity is made of a symmetric and an antisymmetric term. In terms of statistical analysis of data sets, the symmetric part corresponds to classical correlation matrices, whereas the antisymmetric part suggests a way to improve the purely correlation-based approach used in many statistical analyses (e.g., PCA) toward a causality-oriented framework which might be better suited to dynamical data.
Appendix A: Stochastic and periodic averaging
A.1 Long-time behavior of inhomogeneous Markov processes
where $t\to b(x,t)$ and $t\to \sigma (x,t)$ are τ-periodic.
The first point of the following theorem gives the definition of evolution systems of measures, which generalize the notion of invariant measures to inhomogeneous Markov processes. The exponential estimate in point 2 of the theorem is a key ingredient in the proof of the averaging principle of Theorem 2.2.
Theorem A.1 ([5])
 1. There exists a unique τ-periodic family of probability measures $\{\mu (s,\cdot ),s\in \mathbf{R}\}$ such that for all continuous and bounded functions ϕ,${\int}_{x\in {\mathbf{R}}^{p}}\mathbf{E}[\varphi ({X}_{t}^{s,x})]\mu (s,dx)={\int}_{x\in {\mathbf{R}}^{p}}\varphi (x)\mu (t,dx).$
 2. Furthermore, under a stronger dissipativity condition, the convergence of the law of X to μ is exponentially fast. More precisely, for any $r\in (1,+\mathrm{\infty})$, there exist $M>0$ and $\omega <0$ such that for all ϕ in the space of r-integrable functions with respect to $\mu (t,\cdot )$, ${L}^{r}({\mathbf{R}}^{p},\mu (t,\cdot ))$,$\begin{array}{r}{\int}_{x\in {\mathbf{R}}^{p}}{\parallel \mathbf{E}\left[\varphi \left({X}_{t}^{s,x}\right)\right]-{\int}_{{x}^{\prime}\in {\mathbf{R}}^{p}}\varphi \left({x}^{\prime}\right)\mu (t,d{x}^{\prime})\parallel}^{r}\mu (s,dx)\\ \phantom{\rule{1em}{0ex}}\le M{e}^{\omega (t-s)}{\int}_{x\in {\mathbf{R}}^{p}}{\parallel \varphi (x)\parallel}^{r}\mu (t,dx).\end{array}$
A.2 Proof of Property 2.3
 1.
The functions F, G, Σ satisfy Assumptions 2.12.3 restricted on ${\mathbb{R}}^{p}\times \mathcal{E}$.
 2.
ℰ is invariant under the flow of ${\overline{G}}_{\mu}$, as defined in (7).
Then for any initial condition ${\mathbf{w}}_{0}\in \mathcal{E}$, system (4) is asymptotically well posed in probability and ${\mathbf{w}}^{\u03f5}$ satisfies the conclusion of Theorem 2.2.
with the same initial condition as $({\mathbf{v}}^{\u03f5},{\mathbf{w}}^{\u03f5})$.
Finally, we define ${T}_{\u03f5}:=min(T,{\tau}_{\u03f5},{\tilde{\tau}}_{\u03f5})$ and ${T}_{\u03f5}^{\beta}:=min(T,{\tau}_{\u03f5}^{\beta},{\tilde{\tau}}_{\u03f5}^{\beta})$.
Let us remark at this point that in order to prove that $\mathbb{P}[{\tau}_{\u03f5}\ge T]\to 1$ (which is our aim), it is sufficient to work on the bounded stopping time $min(T,{\tau}_{\u03f5})$, since $\mathbb{P}[{\tau}_{\u03f5}\ge T]=\mathbb{P}[min(T,{\tau}_{\u03f5})\ge T]$. In other words, the realizations of ${\mathbf{w}}^{\u03f5}$ which stay longer than T inside ℰ are not problematic. Therefore, we introduce $\stackrel{\u02c6}{{\tau}_{\u03f5}}:=min(T,{\tau}_{\u03f5})$.
 1. For any $\beta >0$, one controls the difference between ${\mathbf{w}}^{ϵ}$ and ${\tilde{\mathbf{w}}}^{ϵ,\beta}$ on ${\mathcal{E}}^{\beta}$ since one controls the difference between the drifts. By an application of Lemma A.3 below (we need here the moment Assumption 2.3(i)), there exists a constant C (which may depend on $T,\beta ,\dots$) such that$\mathbb{E}\left[\underset{0\le t\le {T}_{ϵ}^{\beta}}{sup}{\parallel {\mathbf{w}}_{t}^{ϵ}-{\tilde{\mathbf{w}}}_{t}^{ϵ,\beta}\parallel}^{2}\right]\le C\eta {\delta}^{2}.$(21)
 2. One now needs to control the situation outside ${\mathcal{E}}^{\beta}$, that is, on $\mathcal{E}\setminus {\mathcal{E}}^{\beta}$. The idea is that while one no longer controls the difference between G and ${\tilde{G}}_{\beta}$, one can still choose β sufficiently small so that ${\mathcal{E}}^{\beta}$ becomes arbitrarily close to ℰ, hence implying that ${\hat{\tau}}_{ϵ}$ and ${T}_{ϵ}^{\beta}$ are arbitrarily close with high probability, namely$\mathrm{\forall}\theta ,\lambda >0,\mathrm{\exists}\beta >0,\phantom{\rule{1em}{0ex}}\mathbb{P}[{\tau}_{ϵ}-{T}_{ϵ}^{\beta}>\lambda ]<\theta .$(23)
where we have used the Cauchy–Schwarz inequality and the moment Assumption 2.3(ii) (yielding the constant ${K}_{G}$) in the second line.
So, we deduce by the Markov inequality that ${sup}_{{T}_{ϵ}^{\beta}\le t\le {\hat{\tau}}_{ϵ}}\parallel {\mathbf{w}}_{t}^{ϵ}-{\tilde{\mathbf{w}}}_{t}^{ϵ,\beta}\parallel$ is arbitrarily small in probability.
□
In the following lemma, we show that the solutions of two SDEs, whose drifts are close on a subset of the state space, remain close on a finite time interval. The difficulty here lies in the fact that we deal with only locally Lipschitz coefficients.