Fig. 3

From: Multiscale analysis of slow-fast neuronal learning models with noise

(a) shows the temporal evolution of the input to a network of n=8 neurons. The input is made of two spatially random patterns that are presented alternately. (b) shows the correlation matrix of the inputs; its off-diagonal terms are null because the two patterns are spatially orthogonal. (c), (d), and (e) show the first-order term of the expansion of Theorem 3.5 for different values of μ. This approximation is quite good: the percentage of error between the averaged system and the first order, computed as error = ‖W_order 1 − W‖₁ / ‖W‖₁, has an order of magnitude of 10⁻⁴% for all three panels. These panels make it possible to observe the role of μ. If μ is small, i.e., the inputs are slow, then the transient can be neglected and the learned connectivity is roughly the correlation of the inputs; compare with (b). If μ increases, i.e., the inputs are faster, then the connectivity starts to encode a link between the two patterns, which were flashed cyclically and elicited responses that had not yet faded away when the other pattern appeared. In other words, the temporal structure of the inputs is also learned when μ is large. The parameters used in this figure are ϵ=0.001, l=12, κ=100, σ=0.02.
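To make the input construction and the error metric concrete, here is a minimal Python sketch, not taken from the paper's code: it assumes Gaussian random patterns orthogonalized by Gram-Schmidt, an illustrative 50-step presentation period, and hypothetical names (u, C, relative_l1_error) for the input sequence, its correlation matrix, and the entry-wise L1 error of the caption.

    import numpy as np

    rng = np.random.default_rng(0)

    n = 8      # network size, as in the figure
    T = 1000   # number of time steps (illustrative)

    # Two spatially random patterns, made orthogonal by Gram-Schmidt
    # so that their scalar product is null, as assumed in panel (b).
    p1 = rng.standard_normal(n)
    p2 = rng.standard_normal(n)
    p2 -= (p1 @ p2) / (p1 @ p1) * p1
    assert abs(p1 @ p2) < 1e-10

    # Present the two patterns alternately in time, 50 steps each
    # (the period is an assumption), as in panel (a).
    u = np.array([p1 if (t // 50) % 2 == 0 else p2 for t in range(T)])

    # Empirical correlation matrix of the inputs; it equals
    # (p1 p1' + p2 p2') / 2 because each step shows a single pattern.
    C = u.T @ u / T

    # Percentage of error between the averaged connectivity W and its
    # first-order approximation W_order1, in the entry-wise L1 norm.
    def relative_l1_error(W_order1, W):
        return 100 * np.abs(W_order1 - W).sum() / np.abs(W).sum()

With slow inputs (small μ), the learned connectivity reported in the figure is close to a correlation matrix of this form; the sketch only illustrates the quantities named in the caption, not the learning dynamics themselves.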
