
Inhomogeneous Sparseness Leads to Dynamic Instability During Sequence Memory Recall in a Recurrent Neural Network Model

Abstract

Theoretical models of associative memory generally assume most of their parameters to be homogeneous across the network. Conversely, biological neural networks exhibit high variability of structural as well as activity parameters. In this paper, we extend the classical clipped learning rule by Willshaw to networks with inhomogeneous sparseness, i.e., in which the number of active neurons may vary across memory items. We evaluate this learning rule for sequence memory networks with instantaneous feedback inhibition and show that, not surprisingly, memory capacity degrades with increased variability in sparseness. The loss of capacity, however, is very small for short sequences of less than about 10 associations. Most interestingly, we further show that, owing to feedback inhibition, too large patterns are much less detrimental to memory capacity than too small patterns.

1 Introduction

Many brain areas exhibit extensive recurrent connectivity. Over the decades, such neuronal feedback has attracted a large amount of theoretical modeling [1–3], and one of the most prominent functions proposed for the recurrent synaptic connections is that of associative memory. In most such theories, all memory items are treated as equal, particularly in terms of the sparseness with which they are neurally represented, i.e., in terms of how many neurons are active during recall. In this paper, we extend a particular class of such auto-association networks, viz., sequence memory networks, to include variable sparseness and thereby add one aspect of variability that is to be expected in biological neural networks.

Memory sequences have been shown to occur in the rodent brain during hippocampal sharp-wave ripple events [4, 5]. The major hypothesis of the present paper is that these sequences are stored in the recurrent connections of the hippocampal network, which is supported by findings of fast coordinated excitatory synaptic currents during sharp-wave ripple events in slices [6].

Here, we build on a previous model of memory sequences [7], which enhances memory capacity by instantaneous feedback inhibition. Both our mean field analysis and our simulations show that, in this model, inhomogeneity in pattern sizes reduces memory capacity, but it does so in an asymmetric way: whereas too small patterns significantly compromise the stability of sequence recall, too large patterns can be compensated for quite robustly.

2 Model

We use the standard formalism of auto-associative networks: a discrete-time dynamical system. The individual time steps may be interpreted as the cycles of the hippocampal ripple oscillations. The states $x_i(t)$ of all neurons $1 \le i \le N$ at time step $t$ determine the states at time $t+1$ by the thresholded function

$$x_i(t+1) = \Theta\Bigl(\sum_{j=1}^{N} J_{ij}\, x_j(t) - \theta\Bigr). \tag{1}$$

Here, as in many other approaches, we define $\Theta$ as the Heaviside function, which is equivalent to restricting the neuron states to binary variables, with $x_i = 1$ if neuron $i$ fires and $x_i = 0$ if it is silent. The state of the network at time $t$ is thus denoted by $x(t) \in \{0,1\}^N$. The other parameters are the firing threshold $\theta$ and the synaptic weights $J_{ij}$.
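To make the update rule (1) concrete, here is a minimal NumPy sketch of a single time step; the function name is ours, and the treatment of ties at exactly $\theta$ is a free convention:

```python
import numpy as np

def update_state(x, J, theta):
    """One discrete time step of Eq. (1): x_i(t+1) = Theta(sum_j J_ij * x_j(t) - theta)."""
    h = J @ x                        # postsynaptic potentials h_i(t)
    return (h >= theta).astype(int)  # Heaviside threshold; ties at theta count as firing
```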

In the framework of the dynamical system of (1), associative memory is considered to be the (approximate) recall of a network state $x(t+n) = \xi_{t+n}$ at time $t+n$ after the network has been initialized with some appropriate cue $x(t) = \xi_t$ at time $t$. Recall can either occur as convergence to a dynamical attractor ($n \to \infty$) [2], or as a one-step association ($n = 1$) [8, 9]. The specific choice of the synaptic matrix $J_{ij}$ determines which patterns can be recalled, or are stored in the network.

In this paper, we will deal with sequences of one-step associations with binary synapses [8, 10, 11] and instantaneous feedback inhibition [7, 9, 12]. Here, memory sequences are described as sequences of random activity patterns $\xi$ that are binary vectors of dimension $N$, $\xi \in \{0,1\}^N$. A memory sequence of length $Q$ is an ordered occurrence of activity patterns $\xi_1 \to \xi_2 \to \cdots \to \xi_Q$. The number $M_t$ of active neurons in each pattern $\xi_t$ is called pattern size, and will in general be different for each pattern.

As proposed in [7, 9, 11], we model the weights $J_{ij}$ of the binary synapses as products of two independent binary stochastic processes, $J_{ij} = w_{ij} s_{ij}$. The first stochastic variable $w_{ij}$ indicates the presence ($w_{ij} = 1$) or absence ($w_{ij} = 0$) of a morphological connection, with probability $\Pr(w_{ij} = 1) = c_m$, called morphological connectivity. The second process $s_{ij}$ is called synaptic state and will be used to store memories. In the potentiated state ($s_{ij} = 1$), a presynaptic spike increments the postsynaptic potential by 1, whereas in the silent state ($s_{ij} = 0$), the postsynaptic potential remains unaffected. According to (1), neuron $i$ fires a spike at cycle $t+1$ if its postsynaptic potential $h_i(t) = \sum_{j=1}^{N} w_{ij} s_{ij} x_j(t)$ at time $t$ exceeds the threshold $\theta$.

Willshaw’s [10] clipped Hebbian rule is used to set the synaptic states s i j such that the network is able to recall the memory sequences: a synapse is in the potentiated state only if it connects two neurons that are activated in sequence at least once.
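A possible implementation of this clipped learning rule is the following NumPy sketch (helper names and data layout are ours, not from the original study); it takes the patterns of one sequence as arrays of active-neuron indices and returns the resulting weight matrix $J_{ij} = w_{ij} s_{ij}$:

```python
import numpy as np

def store_sequence(patterns, N, c_m=0.1, seed=0):
    """Willshaw-style clipped learning for one memory sequence.

    `patterns` is a list of index arrays, one per pattern xi_0 ... xi_P; a synapse
    j -> i becomes potentiated (s_ij = 1) if j is active in some pattern xi_k and
    i is active in the next pattern xi_{k+1}.  Returns J_ij = w_ij * s_ij.
    """
    rng = np.random.default_rng(seed)
    w = rng.random((N, N)) < c_m          # morphological connections w_ij
    s = np.zeros((N, N), dtype=bool)      # synaptic states, initially silent
    for pre, post in zip(patterns[:-1], patterns[1:]):
        s[np.ix_(post, pre)] = True       # clipped Hebbian rule: set once, never decrement
    return (w & s).astype(int)
```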

In the case of homogeneous sparseness, where all patterns have the same number M of active neurons, Willshaw’s rule connects the fraction c/ c m of potentiated synapses in the network with the number P of stored associations by

$$P = \frac{\log(1 - c/c_m)}{\log(1 - f^2)} \tag{2}$$

where $f = M/N$ is the coding ratio. The effective connectivity $c$ defines the noise level during recall, i.e., how many spurious inputs a neuron receives that are not part of the memory pattern to be recalled. If $c$ is too large, the network will exhibit many spurious activations and the memory can no longer be recalled. Equation (2) thus provides a capacity estimate of the network in that it states how many associations can be stored at the maximal tolerable noise level $c$.
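As a quick illustration of (2), the following snippet plugs in the parameter values used later in the paper ($c = 0.05$, $c_m = 0.1$, $f = \phi_0 = 0.01$); it is merely a worked example, not code from the original study:

```python
import numpy as np

# Worked example of Eq. (2) with the parameter values quoted later in the paper.
c, c_m, f = 0.05, 0.1, 0.01
P = np.log(1 - c / c_m) / np.log(1 - f**2)
print(round(P))   # ~6931 associations, consistent with "P ~ 7000" in the caption of Fig. 4
```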

In the case of inhomogeneous sparseness, this formula is no longer valid, and we therefore asked what its generalization would be. To this end, we introduce the coding ratio vector

$$\phi = \{f_0, f_1, f_2, \ldots, f_P\} \tag{3}$$

where $M_k = f_k N$ is the number of active neurons in the stored pattern $\xi_k$. The elements of $\phi$ are considered to be random variables distributed according to the coding ratio distribution $p_\phi(\phi)$, with mean coding ratio $\phi_0$ and standard deviation $\sigma_\phi$. For now, we use a Gamma distribution (Fig. 1) for $p_\phi(\phi)$ with mean $\phi_0 = 0.01$. By varying the standard deviation $\sigma_\phi$ we control how much the elements of vector $\phi$ deviate from the mean $\phi_0$.

Fig. 1

Coding ratio distribution p ϕ (ϕ). We use the Gamma distribution in our simulations, shown here for a mean coding ratio ϕ 0 =0.01 and three different values of the variation coefficient σ ϕ / ϕ 0

In analogy to (2), the probability of synaptic potentiation ς=c/ c m can be computed analytically for any given coding ratio vector ϕ (see the Appendix) as follows:

$$\varsigma = 1 - \prod_{k=1}^{P} (1 - f_k f_{k-1}). \tag{4}$$

This expression, however, only provides an implicit dependence on the number $P$ of patterns, and moreover, it of course depends on the specific choice of $\phi$. We therefore asked how much $\varsigma$ varies over the statistical ensemble of $\phi$. In addition to addressing this question numerically, we could also find analytical expressions for the mean $\langle\varsigma\rangle$ and variance $\sigma_\varsigma^2$ of $\varsigma$ over all possible realizations of $\phi$, which depend on the first- to fourth-order moments of the coding ratio distribution $p_\phi(\phi)$ (see the Appendix). For sufficiently small variances $\sigma_\phi^2$, the Gaussian approximations resulting from $\langle\varsigma\rangle$ and $\sigma_\varsigma^2$ fit the empirical distributions of $\varsigma$ very well; see Fig. 2 for $10^6$ random samples of $\phi$.
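Such empirical distributions can be generated directly from (4); the sketch below (our own function names, and far fewer draws than the $10^6$ used for Fig. 2) samples coding ratio vectors from a Gamma distribution and evaluates the resulting potentiation probability for each draw:

```python
import numpy as np

def sample_phi(P, phi0, sigma_phi, rng):
    """Draw a coding ratio vector (f_0, ..., f_P) from a Gamma distribution
    with mean phi0 and standard deviation sigma_phi."""
    shape = (phi0 / sigma_phi) ** 2
    scale = sigma_phi ** 2 / phi0
    return rng.gamma(shape, scale, size=P + 1)

def potentiation_probability(phi):
    """Eq. (4): fraction of potentiated synapses, 1 - prod_k (1 - f_k * f_{k-1})."""
    return 1.0 - np.prod(1.0 - phi[1:] * phi[:-1])

rng = np.random.default_rng(1)
samples = [potentiation_probability(sample_phi(7000, 0.01, 0.001, rng))
           for _ in range(10_000)]
print(np.mean(samples), np.std(samples))   # empirical mean and spread of varsigma
```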

Fig. 2

Distribution of the synaptic potentiation probability $\varsigma$ over all possible realizations of the coding ratio vector $\phi$ for three different values of the number of patterns $P$ and the variation coefficient $\sigma_\phi/\phi_0$. The histograms (blue) correspond to one million random samples of the coding ratio vector $\phi$. The elements of $\phi$ are drawn randomly from the Gamma distribution with $\phi_0 = 0.01$, shown in Fig. 1. The normal distribution (red) with mean $\langle\varsigma\rangle$ and variance $\sigma_\varsigma^2$ (see the Appendix) is a good approximation to the true distribution of $\varsigma$ for low values of $\sigma_\phi/\phi_0$

The results show that the variability in $\varsigma$ is actually relatively large (about 10 % for $\sigma_\phi = 0.1\,\phi_0$) and even increases with increasing number of associations $P$. We therefore decided not to use the expectation value $\langle\varsigma\rangle$ for further discussion, but to show empirical distributions of many realizations of $\phi$ whenever possible.

In order to evaluate the dynamics of sequence retrieval, we mostly do not simulate the full neural network but use a mean field approximation [7, 11] based on two macroscopic dynamic variables: the number $m_t \in [0, M_t]$ of correctly activated neurons (hits) and the number $n_t \in [0, N - M_t]$ of incorrectly activated neurons (false alarms). For large network sizes $N$ and large pattern sizes $M_t$, the central limit theorem predicts the distribution of the total number of synaptic inputs $h(t)$ to be Gaussian, and the variables $m_t$ and $n_t$ can be reinterpreted as expectation values over many realizations of the connectivity matrix. Denoting the mean of the synaptic input by $\mu \equiv \langle h(t)\rangle$ and its variance by $\sigma^2 \equiv \langle h(t)^2\rangle - \langle h(t)\rangle^2$, we obtain (see the Appendix) for the On population (should fire),

$$\mu_{\mathrm{On}} = c_m m_t + c_m \varsigma\, n_t, \tag{5}$$
$$\sigma_{\mathrm{On}}^2 = c_m m_t (1 - c_m) + c_m \varsigma\, n_t \bigl(1 - c_m \varsigma + V_\varsigma^2\, c_m \varsigma\, (n_t - 1)\bigr) \tag{6}$$

and for the Off population (should not fire),

$$\mu_{\mathrm{Off}} = c_m \varsigma\, (m_t + n_t), \tag{7}$$
$$\sigma_{\mathrm{Off}}^2 = c_m \varsigma\, (m_t + n_t)\bigl(1 - c_m \varsigma + V_\varsigma^2\, c_m \varsigma\, (m_t + n_t - 1)\bigr). \tag{8}$$

Willshaw’s learning rule yields correlations in the synaptic states that are captured by the terms proportional to V ς 2 (see the Appendix).

The discrete-time network dynamics in (1) maps to the mean field model such that

$$(m_{t+1}, n_{t+1}) = \bigl(T_{\mathrm{On}}(m_t, n_t),\, T_{\mathrm{Off}}(m_t, n_t)\bigr) \tag{9}$$

with

$$T_{\mathrm{On}}(m_t, n_t) = M_{t+1}\, \Phi\!\left(\frac{\mu_{\mathrm{On}} - \theta}{\sigma_{\mathrm{On}}}\right), \tag{10}$$
$$T_{\mathrm{Off}}(m_t, n_t) = (N - M_{t+1})\, \Phi\!\left(\frac{\mu_{\mathrm{Off}} - \theta}{\sigma_{\mathrm{Off}}}\right) \tag{11}$$

where $\Phi(z) \equiv [1 + \operatorname{erf}(z/\sqrt{2})]/2$ denotes the cumulative distribution function (cdf) of the normal distribution.

In the framework of this model and following [7, 12], inhibition is introduced as instantaneous negative feedback proportional to the total number $m_t + n_t$ of active neurons at time $t$. Formally, this is achieved by substituting $\theta \to \theta + h_t$ in (10) and (11), where $h_t = b\,(m_t + n_t)$. The inhibitory weight is taken as $b = c_m \varsigma$ throughout the paper (for discussion see [7]).
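To make the mean field iteration explicit, here is a minimal sketch, under our own naming conventions, of one step of Eqs. (5)–(11) with the inhibitory threshold shift $\theta \to \theta + b(m_t + n_t)$; the potentiation probability $\varsigma$ and its squared variation coefficient $V_\varsigma^2$ (Eq. (65) in the Appendix) are assumed to be precomputed for the stored sequence:

```python
import numpy as np
from scipy.stats import norm

def mean_field_step(m, n, M_next, N, c_m, varsigma, V2, theta, b):
    """One iteration of Eqs. (5)-(11) with instantaneous feedback inhibition,
    i.e. the substitution theta -> theta + b*(m + n)."""
    theta_eff = theta + b * (m + n)
    mu_on = c_m * m + c_m * varsigma * n
    var_on = c_m * m * (1 - c_m) + c_m * varsigma * n * (
        1 - c_m * varsigma + V2 * c_m * varsigma * (n - 1))
    mu_off = c_m * varsigma * (m + n)
    var_off = c_m * varsigma * (m + n) * (
        1 - c_m * varsigma + V2 * c_m * varsigma * (m + n - 1))
    m_next = M_next * norm.cdf((mu_on - theta_eff) / np.sqrt(var_on))
    n_next = (N - M_next) * norm.cdf((mu_off - theta_eff) / np.sqrt(var_off))
    return m_next, n_next
```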

3 Results

3.1 Inhomogeneous Sparseness Reduces Dynamic Stability

Some exemplary numerical evaluations of the mean field dynamics from equations (9) and following for different firing thresholds $\theta$ and different pattern size inhomogeneities $\sigma_\phi$ are shown in Fig. 3. Despite being just random samples, these plots already reveal the impact of increasing inhomogeneity in the coding ratio vector $\phi$ on the network's ability to successfully retrieve the stored patterns. As the standard deviation $\sigma_\phi$ grows, the activity fluctuations during replay become more and more pronounced and, at some point, lead to dynamic instability during recall. This is clearly visible for a firing threshold $\theta = 28$, where perfect pattern retrieval ($m_t/M_t = 1$ and $n_t/(N - M_t) = 0$) is interrupted more and more frequently as $\sigma_\phi$ increases, eventually preventing the retrieval of the full sequence at $\sigma_\phi/\phi_0 = 20\%$. There, the network falls silent because a small pattern in the sequence no longer generates the synaptic drive required to retrieve the following pattern. For lower thresholds ($\theta = 24, 26$), the network activity may instead explode prematurely due to a big pattern that generates too much synaptic drive and sets the network into a permanently active (epileptic) state. As we simulate a network with instantaneous feedback inhibition, the epileptic state is characterized by approximately half of the neurons being active at any time ($m_t/M_t \approx n_t/(N - M_t) \approx 1/2$), where the subset of active neurons changes from time step to time step.

Fig. 3

Inhomogeneous pattern sizes lead to dynamic instability during sequence replay. In all graphs, we show the fraction $m_t/M_t$ of hits (blue) at time step $t$ and the fraction $n_t/(N - M_t)$ of false alarms (red) during the replay of a sequence of length $Q = 100$, using the mean field model of equations (9) and following. Left to right: increasing inhomogeneity $\sigma_\phi$. Bottom to top: increasing firing thresholds $\theta$. Other parameters were $N = 10^5$, $c_m = 0.1$, $c = 0.05$ and $\phi_0 = 0.01$

In summary, the range of thresholds under which the network successfully replays the full sequence is severely reduced as the pattern sizes become more and more inhomogeneous.

3.1.1 Replay Success Rate

In order to analyze the numerical results more quantitatively, we introduce a criterion for what we consider to be a successful replay. Following [11], we define the retrieval quality

$$\Gamma_t \equiv \frac{m_t}{M_t} - \frac{n_t}{N - M_t} \tag{12}$$

as the relative difference between hit ratio and false alarm ratio, and consider a pattern at time t to be retrieved successfully if Γ t >0.5. By running the mean field equations many times with different random realizations of vector ϕ, we obtain an empirical replay success rate ϱ t as the fraction of runs with successful retrieval Γ t >0.5.
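For concreteness, the following small sketch (function names ours) computes the retrieval quality of Eq. (12) and turns a collection of mean field runs into an empirical success rate:

```python
import numpy as np

def retrieval_quality(m, n, M, N):
    """Eq. (12): hit ratio minus false alarm ratio."""
    return m / M - n / (N - M)

def replay_success_rate(gammas, criterion=0.5):
    """Empirical success rate: `gammas` has shape (n_runs, Q) and holds Gamma_t for
    each run; the result is the fraction of runs with Gamma_t > criterion at every
    time step t."""
    return np.mean(np.asarray(gammas) > criterion, axis=0)
```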

Figure 4 shows replay success rates for a sequence of length $Q = 100$ for $\phi_0 = 0.01$ and varying inhomogeneities $\sigma_\phi$. When the $P$ stored associations are still approximately homogeneous (top left), the full sequence can be retrieved with probability one for a large range of firing thresholds $\theta$. As we let inhomogeneity increase, this range becomes narrower (top right), and eventually collapses (bottom), so that only the first items can be retrieved with high probability. Hence, inhomogeneous sparseness strongly affects the replay of long sequences, but does relatively little harm to short sequences ($Q \lesssim 5$).

Fig. 4

Replay success rate $\varrho$ over time $t$ for different firing thresholds $\theta$ and a sequence of length $Q = 100$. Panels show different levels of inhomogeneity ($\sigma_\phi$). Parameters were $N = 10^5$, $c_m = 0.1$, $c = 0.05$ and $\phi_0 = 0.01$, leading to $P \approx 7000$ stored patterns

3.1.2 Region of Stable Replay

Sequence retrieval not only critically depends on the firing threshold θ, but also on the mean coding ratio ϕ 0 . We therefore searched for regions of stable replay in ( ϕ 0 ,θ) space (Fig. 5) at time step t=100. The region where sequence replay for homogeneous patterns ( σ ϕ =0) is unfeasible is shown in white for comparison. Replay regions exhibit the typical wedge shape [7, 11]. If the firing threshold θ is too low or the mean coding ratio ϕ 0 is too large, all neurons immediately start to fire and the network falls into an all-active state. If the firing threshold is too high or the mean coding ratio is too small, the network immediately falls into an all-silent state. As patterns become less homogeneous, the wedge-shaped region of replay becomes narrower and narrower, and eventually vanishes for highly inhomogeneous patterns (bottom right). The region of retrieval obtained from the mean field equations was validated with some computer simulations of the corresponding networks of binary neurons (white discs, corresponding to 95 % replay success rate).

Fig. 5

Regions of stable sequence replay in ( ϕ 0 ,θ) space. Colors indicate the replay success rate at time step t=100. The white area corresponds to the region of no retrieval for homogeneous pattern sizes ( σ ϕ =0%). Panels show different levels of inhomogeneity ( σ ϕ ). The region of retrieval obtained from the mean field equations was validated with computer simulations of the corresponding networks of binary neurons (white discs, 95 % success rate). Parameters were N= 10 5 , c m =0.1, and c=0.05

3.2 Storage Capacity

As in the classical Willshaw net ([10], and (2)), the mean synaptic connectivity $c$ of a network with inhomogeneous pattern sizes also depends on the number $P$ of stored associations; see (4). This allowed us to adjust the mean connectivity to a fixed value $c = 0.05$ by changing the number $P$ of stored associations in the network. So far, we have kept $c$ constant and varied the width parameter $\sigma_\phi$ of the size distribution. We saw that for large inhomogeneities ($\sigma_\phi/\phi_0 \gtrsim 20\%$) replay of long sequences is hardly possible. But what if we reduce $c$? Intuitively, replay should become more stable if we reduce the "noise" connectivity $c$. As shown in Fig. 6, this is indeed the case: if the number of stored associations, and thus $c$, is decreased, sequence retrieval is robust under high inhomogeneity, and may even allow for replay of the full sequence for a whole range of firing thresholds.

Fig. 6

Replay success rate for a variation coefficient of σ ϕ / ϕ 0 =25%; cf. Fig. 4. Panels show results for different mean connectivities c, and hence different numbers P of stored associations

However, reducing $c$ comes at the cost of a reduced memory capacity $P$, and thus we have to find a way to quantify the trade-off between replay stability and capacity for a network with inhomogeneous pattern sizes. To this end, we define the maximum retrievable sequence length

$$T(\phi, c_m) = \max_{\theta}\, T_{90}(\phi, c_m, \theta) \tag{13}$$

as the maximum number $T_{90}$ of time steps for which the replay success rate $\varrho_t$ remains above 90 % for a given pattern size vector $\phi$ and morphological connectivity $c_m$. Since replay stability strongly depends on the firing threshold, the maximum is taken over all possible firing thresholds $\theta$.
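One way to extract this quantity from simulated success-rate curves is sketched below; it assumes, as our reading of the definition, that $T_{90}$ is the length of the initial stretch of time steps during which $\varrho_t$ stays above the 90 % level:

```python
import numpy as np

def max_retrievable_length(success_rates, level=0.9):
    """Eq. (13): success_rates[i, t] is the replay success rate at time step t for
    the i-th firing threshold; T_90 is taken here as the number of leading time
    steps with rate >= level, and T is the maximum of T_90 over thresholds."""
    above = np.asarray(success_rates) >= level
    t90 = [row.size if row.all() else int(np.argmin(row)) for row in above]
    return max(t90)
```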

Figure 7 shows $T$ as a function of the number $P$ of stored associations in the network (as well as the corresponding mean connectivity $c$). When the total number of stored associations is small, even a sequence consisting of all stored associations may be retrieved under high inhomogeneity ($\sigma_\phi/\phi_0 = 25\%$). As $P$ grows, the curve reaches a maximum and then decreases very quickly. The breakdown point and slope depend critically on the degree of inhomogeneity. For the homogeneous case $\sigma_\phi \to 0$, $T$ decreases infinitely steeply, and the number $P$ of patterns at the breakdown determines the "classical" storage capacity. For finite inhomogeneities $\sigma_\phi$, $T$ decreases according to a power law $T \propto P^{-\alpha}$ that trades stability $T$ against capacity $P$. Since the exponent $\alpha$ is much smaller for high inhomogeneities $\sigma_\phi/\phi_0$, the net decrease of capacity for short sequences ($T \approx 10$) is relatively small compared to a network with homogeneous sparseness (a decrease of about 1.8 for $\sigma_\phi/\phi_0 = 25\%$).

Fig. 7

Inhomogeneity reduces memory capacity. a The maximum retrievable sequence length $T$ is shown (colored discs) as a function of the number $P$ of associations stored in the network and the corresponding mean connectivity $c$. Different colors indicate inhomogeneities ($\sigma_\phi/\phi_0$). b Power law fits to the decreasing parts of the graphs in a give rise to a cutoff capacity $P_c$ (top) at which the curves start to fall, and an exponent $\alpha$ (negative slopes in a) of the power law decrease. Parameters were $N = 10^5$, $c_m = 0.1$, and $\phi_0 = 0.01$

3.3 Asymmetry of the Size Distribution

From our observations of single runs in Fig. 3, we already derived some anecdotal insight into the mechanisms underlying the breakdown of sequence replay: network activity may cease after a small pattern, whereas large patterns may lead to epilepsy. However, it is unclear which of these two ways of terminating replay is more problematic, or whether both occur equally often.

In order to tackle this question about the mechanisms of sequence termination, we investigate the effect of skewness (or asymmetry) in the pattern size distribution, i.e., an imbalance between bigger and smaller than average patterns. So far, we have used the Gamma distribution shown in Fig. 1, which is relatively symmetric for small variation coefficients. To have a better handle on skewness, we now switch to triangular distributions (Fig. 8).
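For reference, triangular coding ratio distributions of this kind are easy to sample; the sketch below is our own helper (not from the paper), with the support widths derived from the standard deviation of a triangular distribution, and produces negatively skewed, symmetric, and positively skewed variants around a peak value $\phi_{\max}$:

```python
import numpy as np

def sample_triangular_phi(P, phi_max, sigma_phi, skew, rng):
    """Draw a coding ratio vector from a triangular distribution with peak phi_max.

    skew = -1: negatively skewed (support [phi_max - 3*sqrt(2)*sigma_phi, phi_max]),
    skew = +1: positively skewed (support [phi_max, phi_max + 3*sqrt(2)*sigma_phi]),
    skew =  0: symmetric (support phi_max +/- sqrt(6)*sigma_phi).
    The half-widths follow from the variance formula of a triangular distribution."""
    if skew < 0:
        left, mode, right = phi_max - 3 * np.sqrt(2) * sigma_phi, phi_max, phi_max
    elif skew > 0:
        left, mode, right = phi_max, phi_max, phi_max + 3 * np.sqrt(2) * sigma_phi
    else:
        half = np.sqrt(6) * sigma_phi
        left, mode, right = phi_max - half, phi_max, phi_max + half
    return rng.triangular(left, mode, right, size=P + 1)
```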

Fig. 8

Mechanisms of sequence termination. Top: three triangular coding ratio distributions $p_\phi(\phi)$ with maximum at $\phi_{\max} = 0.01$, mean coding ratios $\phi_0 = \phi_{\max}$ ($\pm\sqrt{2}\,\sigma_\phi$), and fixed standard deviation $\sigma_\phi = 0.1\,\phi_{\max}$: a negatively skewed distribution, b symmetric distribution, c positively skewed distribution. Panels (cf. Fig. 5): Replay success rates in the ($\phi_{\max}$, $\theta$) plane. The region of retrieval obtained from the mean field equations was validated with computer simulations of some of the corresponding networks of binary neurons (white discs, 95 % success rate). Parameters were $N = 10^5$, $c_m = 0.1$, and $c = 0.05$

A symmetric triangular distribution (Fig. 8b) is used for comparison. In order to study the effect of an excess of small patterns, we constructed a negatively skewed distribution (Fig. 8a) by cutting away all patterns above the line of symmetry ($\phi_{\max}$) and adding smaller patterns instead. Since this distribution has a lower mean coding ratio $\phi_0 = \phi_{\max} - \sqrt{2}\,\sigma_\phi$, we increased the number $P$ of patterns to account for the same "noise" connectivity $c$ as in the symmetric case. The region of stable replay is clearly reduced in the negatively skewed distribution as compared to the symmetric one. This reduction could either be because small pattern sizes are intrinsically bad, or because the asymmetry of the distribution is a limiting factor. We therefore also considered the case of an excess of large patterns (Fig. 8c). For such a positively skewed distribution, the asymmetry is the same as for the negatively skewed distribution; however, and interestingly, the region of stable replay is larger than for the symmetric distribution. Again, the connectivity was adjusted to the same value by reducing the number $P$ of associations to compensate for the higher mean coding ratio $\phi_0 = \phi_{\max} + \sqrt{2}\,\sigma_\phi$.

From these observations, we conclude that small patterns are indeed much more problematic for replay with inhomogeneous pattern sizes than large patterns. To understand why, we compared the shape of the replay regions for the three distributions (a, b, and c), and observe that the slope of the lower side of the wedge is relatively insensitive to skewness, whereas the slope of the upper side of the wedge is very different in each case. Failures owing to activity explosion (the lower side of the wedge) are almost independent of the skewness, due to the instantaneous feedback inhibition in the mean field equations. On the other hand, the upper side of the wedge is determined by the network's falling into a silent state. Thus, the positively skewed distribution (c) is the more robust one. Note that this was also apparent in Fig. 5, where the reduction of the region of stability with increasing inhomogeneity was much more pronounced on the upper side of the wedges than on the lower side, despite the relatively symmetric Gamma distribution used there.

Finally, in order to more directly illustrate how replay terminates after small patterns, we show a scatter plot of $10^4$ sample pairs ($M_\tau$, $M_{\tau+1}$), where $\tau$ is the last time step at which the pattern was replayed with sufficient quality $\Gamma_\tau > 0.5$ (Fig. 9; note that here we again used a Gamma distribution to achieve comparability with Fig. 4). Points below the red line ($M_\tau > M_{\tau+1}$) represent sequences for which the last correctly replayed pattern $\xi_\tau$ was bigger than the following pattern $\xi_{\tau+1}$, whereas points above the red line ($M_\tau < M_{\tau+1}$) represent sequences for which $\xi_\tau$ was smaller than $\xi_{\tau+1}$. On the low-threshold edge of the stability region, which is prone to over-excitement ($\theta = 25$, cf. Fig. 5), a big-to-small pattern transition is as likely to lead to a failure in sequence replay as a small-to-big transition. However, on the high-threshold edge of the stability region ($\theta = 30$), where replay eventually dies out, most failures are caused by small-to-big pattern transitions (80 % of points above the red line).

Fig. 9

Scatter plot of pairs ($M_\tau$, $M_{\tau+1}$), where $\xi_\tau$ is the last correctly replayed pattern ($\Gamma_\tau > 0.5$) in each sequence, for $10^4$ random samples of $\phi$ from the Gamma distribution and two different firing thresholds $\theta$ as indicated. Parameters were as in the third column of Fig. 4: $N = 10^5$, $c_m = 0.1$, $c = 0.05$, $\phi_0 = 0.01$, and $\sigma_\phi/\phi_0 = 15\%$

Again, these results show that the small patterns are more detrimental for sequence replay than the large patterns, since in the latter case fluctuations can be compensated for by feedback inhibition, whereas the former have no compensatory mechanism.

3.4 Nonlinear Inhibition

So far, we have assumed a linear dependence of instantaneous feedback inhibition on the total network activity, i.e., h t =b( m t + n t ), since it was shown to optimize replay quality [7, 12]. In this final section, we investigate how a particular nonlinear form of inhibition could improve the network’s resilience to inhomogeneity, because (a) physiological data from cortical inhibitory networks suggest supralinear dependence on input [13, 14] and (b) supralinear inhibition effectively provides a positive feedback (with respect to linear inhibition) in cases of too low activity.

To be able to best compare the nonlinear with the linear case, we constructed a nonlinearity that only implements such positive boost for too low activities m t + n t and remains linear for too large activities (see upper panel of Fig. 10b). Formally, it is obtained by replacing the linear term b( m t + n t ) by the function h t =h( m t + n t ), where

$$h(x) = \begin{cases} \dfrac{\kappa}{1 + e^{-\lambda(x - \nu)}}, & x \le \phi_0 N, \\[1ex] b\,x, & x > \phi_0 N \end{cases} \tag{14}$$

with parameters

$$\kappa = \frac{b\,\lambda\,(\phi_0 N)^2}{\lambda\,\phi_0 N - 1}, \tag{15}$$
$$\nu = \phi_0 N - \frac{1}{\lambda}\log(\lambda\,\phi_0 N - 1), \tag{16}$$
$$\lambda = 10^{-4}/\phi_0 \tag{17}$$

chosen such that the slope of h(x) at the operation point x= m t + n t = ϕ 0 N is equal to b in both the linear and nonlinear parts.
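Reading off Eqs. (14)–(17), a compact sketch of this nonlinearity (function name ours) is:

```python
import numpy as np

def nonlinear_inhibition(x, phi0, N, b):
    """Eqs. (14)-(17): sigmoidal feedback below the operating point phi0*N and
    linear feedback b*x above it; kappa and nu are chosen such that value and
    slope match b*x at x = phi0*N."""
    lam = 1e-4 / phi0                                   # Eq. (17)
    x0 = phi0 * N                                       # operating point
    kappa = b * lam * x0**2 / (lam * x0 - 1)            # Eq. (15)
    nu = x0 - np.log(lam * x0 - 1) / lam                # Eq. (16)
    return np.where(x <= x0, kappa / (1 + np.exp(-lam * (x - nu))), b * x)
```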

Fig. 10

Regions of stable sequence replay in the ($\phi_0$, $\theta$) plane with linear (a) and nonlinear (b) feedback inhibition. Colors indicate the mean retrieval quality $\bar{\Gamma}_t$ at time step $t = 100$. Panels show different levels of inhomogeneity ($\sigma_\phi$). The region of retrieval obtained from the mean field equations was validated with computer simulations of the corresponding networks of binary neurons (white discs, 95 % success rate). Parameters were $N = 10^5$, $c_m = 0.1$, and $c = 0.05$

Figure 10 shows the mean retrieval quality $\bar{\Gamma}_t$ at time step $t = 100$ (averaged over $10^2$ random realizations of vector $\phi$) in the ($\phi_0$, $\theta$) plane, for linear (a) and nonlinear (b) inhibition and two levels of inhomogeneity $\sigma_\phi/\phi_0$.

For low inhomogeneity ($\sigma_\phi/\phi_0 = 5\%$), although the region of stability is wider in the nonlinear case, the retrieval quality in the gained region is not as good as in the region shared by both feedback strategies (see the lighter red stripe in the middle panel of Fig. 10b). This finding fits well with previous reports that linear inhibition maximizes replay quality for homogeneous pattern sizes [7, 12]: the gain in robustness is mostly paid for by a reduced replay quality. For a large inhomogeneity ($\sigma_\phi/\phi_0 = 20\%$), linear feedback almost completely extinguishes replay, whereas nonlinear inhibition recovers a considerable stable replay region with high retrieval quality.

Supralinear inhibitory feedback at low activity levels thus significantly widens the replay region, making the network resilient to higher levels of inhomogeneity than would be possible with linear feedback. The underlying mechanism by which this is achieved can be explained as follows: smaller-than-average patterns generate only little negative feedback and thereby keep up the activity in the network, whereas bigger-than-average patterns are compensated for optimally by linear negative feedback.

4 Discussion

This paper extends previous models of sequence memory [7, 9, 11, 12] that were based on Willshaw’s learning rule [10] to inhomogeneous pattern sizes, i.e., patterns of variable sparseness. Our work reveals that inhomogeneity in the sparseness of stored patterns is detrimental to a recurrent network’s dynamic stability during sequence retrieval. Bigger than average patterns tend to lead the network into an all-active epileptic state as a result of an excessively high synaptic drive, whereas smaller than average patterns tend to lead to an all-silent state as a result of an insufficient synaptic drive. In either case, sequence retrieval is terminated prematurely due to dynamic instability. As expected, the higher the variability in pattern sizes, the higher is the probability of premature sequence termination. Our results thus suggest that a plasticity mechanism that ensures a certain degree of homogeneity in the sparseness of hippocampal representations would be useful for the reliable retrieval of long sequences.

Instantaneous linear feedback inhibition is able to compensate to a certain degree for bigger-than-average patterns, but it does nothing to prevent the network from falling silent, since it does not compensate for an insufficient synaptic drive. This asymmetry is reflected in the relative impact of differently skewed pattern size distributions. Compared to a symmetric distribution, negative skewness leads to a smaller region of stable replay, whereas positive skewness leads to a larger region. Positively skewed pattern size distributions are thus more resilient to premature sequence termination under linear feedback. The higher vulnerability to smaller-than-average patterns can be corrected for by introducing a nonlinear negative feedback which is close to zero for lower-than-average network activity. Such supralinear inhibition can make the network resilient to higher levels of inhomogeneity than linear feedback inhibition.

Memory networks with variable sparseness have been studied by Amit and Huang [15, 16] under a different learning paradigm in which old memories are gradually overwritten by new memories, and for several more involved synaptic (meta-)plasticity rules. There, inhomogeneity in the pattern sizes was shown to decrease the signal-to-noise ratio during recall as well.

In contrast to palimpsest models [17–22] in which old memories are overwritten, our model assumes that all memories are equally well preserved in the synaptic states of the network, which argues for additional plasticity rules that continuously readjust the synaptic matrix to keep old memories fresh. Such ongoing plasticity may then easily be linked to some sort of pattern size homeostasis that tries to keep the sparseness homogeneous. Such persistent network remodeling fits experimental findings that, at least for a few weeks after memory acquisition, existing memories can be extinguished by blocking protein synthesis together with memory reactivation [23], hinting at the presence of plasticity mechanisms during early retrieval.

Appendix

In this appendix, we give the mathematical details of how we derive the expectation values necessary for our dynamical model from the underlying stochasticity of the recurrent synaptic matrix. The effect of learning on the synaptic connections is modeled by binary random variables that take a value $s_{ij} = 1$ if the putative synapse from neuron $j$ to neuron $i$ is in a potentiated state (is able to transmit signals), whereas $s_{ij} = 0$ if it is inactive and cannot contribute to the postsynaptic depolarization. Since not all neurons are considered to be synaptically connected, the real synaptic weight is a product $w_{ij} s_{ij}$ [9], where $w_{ij} = 1$ or 0 according to a Bernoulli process with probability $c_m$ (the morphological connectivity) that is supposed to model the existence of a physical synapse. The two random variables $s_{ij}$ and $w_{ij}$ are considered to be independent.

The vector of pattern sizes $\phi = (f_0, \ldots, f_P)$ defines how many neurons $M_t = f_t N$ fire in each pattern $\xi_t$. According to Willshaw's rule, a given sequence of stored patterns, with sizes specified by $\phi$, uniquely defines the matrix of synaptic states $s_{ij}$: only those synapses are potentiated for which the presynaptic neuron is active in pattern $\xi_k$ and the postsynaptic neuron is active in pattern $\xi_{k+1}$ for at least one value $k = 0, \ldots, P-1$. In order to translate this learning rule into formulas, we have to introduce the theoretical concept of an activation schedule.

A.1 Activation Schedule

Different neurons participate differently in the replay of memory sequences. Formally, each neuron is described by its activation schedule $A = \{a_1, a_2, \ldots, a_P\}$, $a_k \in \{0,1\}$, which indicates in which patterns the neuron fires ($a_k = 1$) or not ($a_k = 0$). Since we assume the participation in a pattern to be random, the probability of a specific activation schedule is computed as

$$P_A = \Bigl(\prod_{k=1}^{|A|} f_{\alpha(k)} + \delta_{|A|=0}\Bigr) \prod_{k=|A|+1}^{P} (1 - f_{\alpha(k)}) \tag{18}$$

where $|A|$ is the number of patterns in which the neuron is active and $\alpha: \{1,\ldots,P\} \to \{1,\ldots,P\}$ is a reordering of the patterns such that those in which the neuron is active have the $|A|$ lowest indices. If $|A| = 0$, the first factor equals 1, as indicated by the Kronecker symbol $\delta_{|A|=0}$.

The activation schedule of a neuron allows us to compute the fraction $\varsigma_A$ of potentiated putative synapses at a postsynaptic neuron with activation schedule $A$, in analogy to the classical Willshaw idea,

$$\varsigma_A = 1 - \Bigl(\prod_{k=1}^{|A|} \bigl(1 - f_{\alpha(k)-1}\bigr) + \delta_{|A|=0}\Bigr) \tag{19}$$

where the product on the right-hand side is the fraction of synapses remaining inactive after storing the $P$ sequential activations, which correspond to $|A|$ learning steps for the neuron under consideration.
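For illustration, the following sketch (our own helper name and data layout, not from the paper) evaluates $P_A$ and $\varsigma_A$ for a single neuron, given the coding ratio vector and the set of patterns in which that neuron participates:

```python
import numpy as np

def schedule_stats(phi, active):
    """P_A and varsigma_A (Eqs. (18) and (19)) for a single postsynaptic neuron.

    `phi` holds the coding ratios f_0 ... f_P; `active` lists the pattern indices
    k in {1, ..., P} in which the neuron fires (its activation schedule A)."""
    f = np.asarray(phi, dtype=float)
    P = len(f) - 1
    active = set(active)
    inactive = [k for k in range(1, P + 1) if k not in active]
    p_inactive = np.prod([1 - f[k] for k in inactive]) if inactive else 1.0
    if not active:                       # |A| = 0: neuron never fires during learning
        return p_inactive, 0.0
    P_A = np.prod([f[k] for k in active]) * p_inactive
    varsigma_A = 1.0 - np.prod([1 - f[k - 1] for k in active])
    return P_A, varsigma_A
```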

In order to compute the mean connectivity in the network, we average over all postsynaptic neurons (i.e., activation schedules) and obtain

$$\varsigma = \sum_{A} P_A\, \varsigma_A \tag{20}$$
$$= \sum_{A, |A|>0} P_A \Bigl(1 - \prod_{k=1}^{|A|} (1 - f_{\alpha(k)-1})\Bigr) \tag{21}$$
$$= 1 - P_{|A|=0} - \sum_{A, |A|>0} P_A \prod_{k=1}^{|A|} (1 - f_{\alpha(k)-1}) \tag{22}$$
$$= 1 - \prod_{k=1}^{P}(1 - f_k) - \sum_{A, |A|>0} \prod_{k=1}^{|A|} f_{\alpha(k)} \prod_{k=|A|+1}^{P} (1 - f_{\alpha(k)}) \prod_{k=1}^{|A|} (1 - f_{\alpha(k)-1}) \tag{23}$$
$$= 1 - \prod_{k=1}^{P}(1 - f_k) - \sum_{A, |A|>0} \prod_{k=|A|+1}^{P} (1 - f_{\alpha(k)}) \prod_{k=1}^{|A|} f_{\alpha(k)}\,(1 - f_{\alpha(k)-1}) \tag{24}$$
$$= 1 - \prod_{k=1}^{P}(1 - f_k) - \sum_{A, |A|>0} \prod_{k=1}^{P} (1 - f_{\alpha(k)}) \prod_{k=1}^{|A|} \frac{f_{\alpha(k)}\,(1 - f_{\alpha(k)-1})}{1 - f_{\alpha(k)}} \tag{25}$$
$$= 1 - \prod_{k=1}^{P}(1 - f_k) - \prod_{k=1}^{P}(1 - f_k) \sum_{A, |A|>0} \prod_{k=1}^{|A|} \frac{f_{\alpha(k)}\,(1 - f_{\alpha(k)-1})}{1 - f_{\alpha(k)}} \tag{26}$$
$$= 1 - \prod_{k=1}^{P}(1 - f_k)\Bigl(1 + \sum_{A, |A|>0} \prod_{k=1}^{|A|} \chi_{\alpha(k)}\Bigr) \tag{27}$$
$$= 1 - \prod_{k=1}^{P}(1 - f_k) \prod_{k=1}^{P}(1 + \chi_k) \tag{28}$$
$$= 1 - \prod_{k=1}^{P}(1 - f_k)\Bigl(1 + \frac{f_k (1 - f_{k-1})}{1 - f_k}\Bigr) \tag{29}$$
$$= 1 - \prod_{k=1}^{P}(1 - f_k f_{k-1}) \tag{30}$$

where we have introduced the abbreviation $\chi_k = \frac{f_k (1 - f_{k-1})}{1 - f_k}$ and used the algebraic identity

$$1 + \sum_{A, |A|>0} \prod_{k=1}^{|A|} \chi_{\alpha(k)} = \prod_{k=1}^{P}(1 + \chi_k). \tag{31}$$

Similarly, the second moment $\mathrm{E}[\varsigma_A^2]$ is computed as

$$\mathrm{E}\bigl[\varsigma_A^2\bigr] = \sum_{A} P_A\, \varsigma_A^2 \tag{32}$$
$$= \sum_{A, |A|>0} P_A \Bigl(1 - \prod_{k=1}^{|A|} (1 - f_{\alpha(k)-1})\Bigr)^2 \tag{33}$$
$$= \sum_{A, |A|>0} P_A \Bigl(1 - 2\prod_{k=1}^{|A|} (1 - f_{\alpha(k)-1}) + \prod_{k=1}^{|A|} (1 - f_{\alpha(k)-1})^2\Bigr) \tag{34}$$
$$= 2\varsigma - 1 + \prod_{k=1}^{P}(1 - f_k) + \sum_{A, |A|>0} P_A \prod_{k=1}^{|A|} (1 - f_{\alpha(k)-1})^2 \tag{35}$$
$$= 2\varsigma - 1 + \prod_{k=1}^{P}(1 - f_k) + \sum_{A, |A|>0} \prod_{k=1}^{|A|} f_{\alpha(k)} \prod_{k=|A|+1}^{P}(1 - f_{\alpha(k)}) \prod_{k=1}^{|A|} (1 - f_{\alpha(k)-1})^2 \tag{36}$$
$$= 2\varsigma - 1 + \prod_{k=1}^{P}(1 - f_k) + \sum_{A, |A|>0} \prod_{k=|A|+1}^{P}(1 - f_{\alpha(k)}) \prod_{k=1}^{|A|} f_{\alpha(k)}\,(1 - f_{\alpha(k)-1})^2 \tag{37}$$
$$= 2\varsigma - 1 + \prod_{k=1}^{P}(1 - f_k) + \sum_{A, |A|>0} \prod_{k=1}^{P}(1 - f_{\alpha(k)}) \prod_{k=1}^{|A|} \frac{f_{\alpha(k)}\,(1 - f_{\alpha(k)-1})^2}{1 - f_{\alpha(k)}} \tag{38}$$
$$= 2\varsigma - 1 + \prod_{k=1}^{P}(1 - f_k) + \prod_{k=1}^{P}(1 - f_k) \sum_{A, |A|>0} \prod_{k=1}^{|A|} \frac{f_{\alpha(k)}\,(1 - f_{\alpha(k)-1})^2}{1 - f_{\alpha(k)}} \tag{39}$$
$$= 2\varsigma - 1 + \prod_{k=1}^{P}(1 - f_k) \prod_{k=1}^{P}\Bigl(1 + \frac{f_k (1 - f_{k-1})^2}{1 - f_k}\Bigr) \tag{40}$$
$$= 2\varsigma - 1 + \prod_{k=1}^{P}\bigl(1 - f_k + f_k (1 - f_{k-1})^2\bigr) \tag{41}$$
$$= 2\varsigma - 1 + \prod_{k=1}^{P}\bigl(1 - f_k (2 f_{k-1} - f_{k-1}^2)\bigr). \tag{42}$$

A.2 Mean and Variance of Total Synaptic Input

With the above two moments, we can find means and variances for the synaptic inputs. We start with the probability of total synaptic input to a postsynaptic cell with activation schedule A, which is binomially distributed according to

$$P(h \mid A) = \binom{m+n}{h}\, (c_m \varsigma_A)^h\, (1 - c_m \varsigma_A)^{m+n-h}. \tag{43}$$

The probability of total synaptic input to an average postsynaptic cell can then be obtained as

$$P(h) = \sum_{A} P_A\, P(h \mid A). \tag{44}$$

The mean value of h depends on whether the postsynaptic cell belongs to the On population (should fire) or the Off population (should not fire). For the Off population, we have

$$\mu_{\mathrm{Off}} = \sum_{h=0}^{m+n} h\, P(h) \tag{45}$$
$$= \sum_{h=0}^{m+n} h \sum_{A} P_A\, P(h \mid A) \tag{46}$$
$$= \sum_{A} P_A \sum_{h=0}^{m+n} h\, P(h \mid A) \tag{47}$$
$$= \sum_{A} P_A \sum_{h=0}^{m+n} h \binom{m+n}{h} (c_m \varsigma_A)^h (1 - c_m \varsigma_A)^{m+n-h} \tag{48}$$
$$= \sum_{A} P_A\, (m+n)\, c_m \varsigma_A \tag{49}$$
$$= c_m (m+n) \sum_{A} P_A\, \varsigma_A \tag{50}$$
$$= c_m \varsigma\, (m+n) \tag{51}$$

and for the On population,

$$\mu_{\mathrm{On}} = \sum_{h=0}^{m} h\, P(h) + \sum_{h=0}^{n} h\, P(h) \tag{52}$$
$$= c_m m + c_m \varsigma\, n. \tag{53}$$

In order to obtain the variance of h, we compute the second moment of h for the Off population,

$$\mathrm{E}\bigl[h_{\mathrm{Off}}^2\bigr] = \sum_{h=0}^{m+n} h^2\, P(h) \tag{54}$$
$$= \sum_{A} P_A \sum_{h=0}^{m+n} h^2 \binom{m+n}{h} (c_m \varsigma_A)^h (1 - c_m \varsigma_A)^{m+n-h} \tag{55}$$
$$= \sum_{A} P_A\, (m+n)\, c_m \varsigma_A \bigl(1 + c_m \varsigma_A (m+n-1)\bigr) \tag{56}$$
$$= c_m (m+n) \sum_{A} P_A\, \varsigma_A \bigl(1 + c_m \varsigma_A (m+n-1)\bigr) \tag{57}$$
$$= c_m (m+n) \Bigl(\sum_{A} P_A\, \varsigma_A + \sum_{A} P_A\, \varsigma_A^2\, c_m (m+n-1)\Bigr) \tag{58}$$
$$= c_m (m+n) \bigl(\varsigma + \mathrm{E}[\varsigma_A^2]\, c_m (m+n-1)\bigr). \tag{59}$$

The variance is then given by

$$\sigma_{\mathrm{Off}}^2 = \mathrm{E}\bigl[h_{\mathrm{Off}}^2\bigr] - \mu_{\mathrm{Off}}^2 \tag{60}$$
$$= c_m (m+n)\bigl(\varsigma + \mathrm{E}[\varsigma_A^2]\, c_m (m+n-1)\bigr) - \bigl(c_m \varsigma (m+n)\bigr)^2 \tag{61}$$
$$= c_m \varsigma (m+n)\Bigl(1 + \frac{\mathrm{E}[\varsigma_A^2]}{\varsigma}\, c_m (m+n-1) - c_m \varsigma (m+n)\Bigr) \tag{62}$$
$$= c_m \varsigma (m+n)\Bigl(1 - c_m \varsigma + \frac{\mathrm{E}[\varsigma_A^2] - \varsigma^2}{\varsigma}\, c_m (m+n-1)\Bigr) \tag{63}$$
$$= c_m \varsigma (m+n)\bigl(1 - c_m \varsigma + V_\varsigma^2\, c_m \varsigma\, (m+n-1)\bigr) \tag{64}$$

where

$$V_\varsigma^2 = \frac{\mathrm{E}[\varsigma_A^2] - \varsigma^2}{\varsigma^2} = \frac{1}{\varsigma^2}\Bigl(2\varsigma - 1 + \prod_{k=1}^{P}\bigl(1 - f_k (2 f_{k-1} - f_{k-1}^2)\bigr)\Bigr) - 1 \tag{65}$$

is the squared variation coefficient of $\varsigma_A$ over all activation schedules $A$. Similarly, for the On population, we get

$$\sigma_{\mathrm{On}}^2 = c_m m (1 - c_m) + c_m \varsigma\, n \bigl(1 - c_m \varsigma + V_\varsigma^2\, c_m \varsigma\, (n-1)\bigr). \tag{66}$$

A.3 Mean and Variance of ς over Pattern Size Distribution

So far, all formulas were obtained for a specific realization of the pattern size (coding ratio) vector $\phi$. The pattern sizes themselves can, however, be considered as resulting from a stochastic process as well. We therefore are interested in expectation values over the pattern size distribution to be able to account for average connectivities over many realizations of the network. Such an average connectivity $\langle\varsigma\rangle$ upon imprinting the memory with $P$ patterns is given by

$$\langle\varsigma\rangle = 1 - \Bigl\langle \prod_{k=1}^{P} (1 - f_k f_{k-1}) \Bigr\rangle \tag{67}$$

where $\langle\cdot\rangle$ indicates the expected value over the size distribution $p_\phi(\phi)$. The last term can be expanded as follows:

$$\Bigl\langle \prod_{t=1}^{P} (1 - f_t f_{t+1}) \Bigr\rangle = 1 - \sum_{t=1}^{P} \langle f_t f_{t+1}\rangle + \sum_{t=1}^{P}\sum_{t'=t+1}^{P} \langle f_t f_{t+1} f_{t'} f_{t'+1}\rangle - \cdots \tag{68}$$

The last term on the right, as well as higher-order terms, contain overlapping indices (e.g., when $t' = t+1$), so that each term of order $2k$ (for $k = 1, \ldots, P$) contains $2j$ isolated indices, each contributing a factor $\langle f\rangle$, and $(k-j)$ duplicated indices, each contributing a factor $\langle f^2\rangle$ (for $j = 1, \ldots, k$). Therefore, we can write

$$\Bigl\langle \prod_{t=1}^{P} (1 - f_t f_{t+1}) \Bigr\rangle = 1 + \sum_{k=1}^{P} (-1)^k \Bigl(\sum_{j=1}^{k} n_j^{(P,k)}\, \langle f\rangle^{2j}\, \langle f^2\rangle^{k-j}\Bigr) \tag{69}$$

where $n_j^{(P,k)}$ is the number of $k$-combinations of $P$ ordered elements that consist of exactly $j$ groups of non-adjacent elements. For example, given $P$ elements $a_1, a_2, \ldots, a_P$, the 3-combination $a_1 a_2 a_3$ has $j = 1$ non-adjacent groups, $a_1 a_3 a_4$ has $j = 2$, and $a_1 a_3 a_5$ has $j = 3$. After some algebra, we arrive at the expression

$$n_j^{(P,k)} = \begin{cases} \dbinom{P-k+1}{j}\dbinom{k-1}{k-j}, & j \le \min(k,\, P-k+1), \\ 0, & \text{otherwise}. \end{cases} \tag{70}$$

The mean probability of synaptic potentiation $\langle\varsigma\rangle$ can thus be expressed as a function of the first- and second-order moments of the coding ratio distribution $p_\phi(\phi)$ as follows:

$$\langle\varsigma\rangle = -\sum_{k=1}^{P} \sum_{j=1}^{k} (-1)^k\, n_j^{(P,k)}\, \langle f\rangle^{2j}\, \langle f^2\rangle^{k-j} \tag{71}$$

with the first and second moments

$$\langle f\rangle = \phi_0, \tag{72}$$
$$\langle f^2\rangle = \sigma_\phi^2 + \phi_0^2. \tag{73}$$

The second moment $\langle\varsigma^2\rangle$ of the probability of synaptic potentiation is given by

$$\langle\varsigma^2\rangle = \Bigl\langle \Bigl(1 - \prod_{k=1}^{P} (1 - f_k f_{k-1})\Bigr)^2 \Bigr\rangle \tag{74}$$
$$= 1 - 2\Bigl\langle \prod_{k=1}^{P} (1 - f_k f_{k-1}) \Bigr\rangle + \Bigl\langle \prod_{k=1}^{P} (1 - f_k f_{k-1})^2 \Bigr\rangle \tag{75}$$
$$= 2\langle\varsigma\rangle - 1 + \Bigl\langle \prod_{k=1}^{P} (1 - f_k f_{k-1})^2 \Bigr\rangle \tag{76}$$

where the last term equals

$$\Bigl\langle \prod_{t=1}^{P} (1 - f_t f_{t+1})^2 \Bigr\rangle \tag{77}$$
$$= 1 - \sum_{t=1}^{P} \bigl\langle 2 f_t f_{t+1} - f_t^2 f_{t+1}^2 \bigr\rangle + \sum_{t=1}^{P}\sum_{t'=t+1}^{P} \bigl\langle (2 f_t f_{t+1} - f_t^2 f_{t+1}^2)(2 f_{t'} f_{t'+1} - f_{t'}^2 f_{t'+1}^2) \bigr\rangle - \cdots \tag{78}$$
$$= 1 + \sum_{k=1}^{P} \sum_{j=1}^{k} (-1)^k \binom{P-k+1}{j} S_k^j. \tag{79}$$

Here, we use

$$S_k^j = \begin{cases} \pi_k, & j = 1, \\ \displaystyle\sum_{i=1}^{k-j+1} \pi_i\, S_{k-i}^{j-1}, & 2 \le j \le k \end{cases} \tag{80}$$

and

$$\pi_k = \Bigl\langle \prod_{t=1}^{k} \bigl(2 f_t f_{t+1} - f_t^2 f_{t+1}^2\bigr) \Bigr\rangle \tag{81}$$
$$= \sum_{m=0}^{k} (-1)^m\, 2^{k-m}\, \psi_m \tag{82}$$

with

$$\psi_m = \begin{cases} \langle f\rangle^2 \langle f^2\rangle^{k-1}, & m = 0, \\[1ex] \begin{aligned} & \langle f\rangle^2 \sum_{i=1}^{m} \frac{k-m-i}{k-m}\, n_i^{(k-1,m)}\, \langle f^2\rangle^{k-m-i-1} \langle f^3\rangle^{2i} \langle f^4\rangle^{m-i} \\ & \quad + 2\langle f\rangle\langle f^2\rangle \sum_{i=1}^{m} \frac{i}{k-m}\, n_i^{(k-1,m)}\, \langle f^2\rangle^{k-m-i} \langle f^3\rangle^{2i-1} \langle f^4\rangle^{m-i} \\ & \quad + \langle f^2\rangle^2 \sum_{i=1}^{k-m} \frac{m-i}{m}\, n_i^{(k-1,k-m)}\, \langle f^2\rangle^{k-m-i} \langle f^3\rangle^{2i} \langle f^4\rangle^{m-i-1}, \end{aligned} & 1 \le m \le k-1, \\[1ex] \langle f^2\rangle^2 \langle f^4\rangle^{k-1}, & m = k. \end{cases} \tag{83}$$

If p ϕ (ϕ) is the Gamma distribution, the higher moments can be computed as

$$\langle f^3\rangle = 2\sigma_\phi^4/\phi_0 + 3\sigma_\phi^2 \phi_0 + \phi_0^3, \tag{84}$$
$$\langle f^4\rangle = 6\sigma_\phi^6/\phi_0^2 + 11\sigma_\phi^4 + 6\sigma_\phi^2 \phi_0^2 + \phi_0^4. \tag{85}$$

Finally, the variance σ ς 2 of the probability of synaptic potentiation over all possible realizations of the coding ratio vector ϕ is given by

$$\sigma_\varsigma^2 = \langle\varsigma^2\rangle - \langle\varsigma\rangle^2. \tag{86}$$

References

1. Little WA: The existence of persistent states in the brain. Math Biosci 1974, 19: 101–120. 10.1016/0025-5564(74)90031-5

2. Hopfield JJ: Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 1982, 79(8): 2554–2558. 10.1073/pnas.79.8.2554

3. Wennekers T, Palm G: Modelling generic cognitive functions with operational Hebbian cell assemblies. In Neural Network Research Horizons. Edited by: Weiss M. Nova Science Publishers, New York; 2007: 225–294.

4. Lee AK, Wilson MA: Memory of sequential experience in the hippocampus during slow wave sleep. Neuron 2002, 36(6): 1183–1194. 10.1016/S0896-6273(02)01096-6

5. Diba K, Buzsaki G: Forward and reverse hippocampal place-cell sequences during ripples. Nat Neurosci 2007, 10(10): 1241–1242. 10.1038/nn1961

6. Maier N, Tejero-Cantero Á, Dorrn AL, Winterer J, Beed PS, Morris G, Kempter R, Poulet JF, Leibold C, Schmitz D: Coherent phasic excitation during hippocampal ripples. Neuron 2011, 72: 137–152. 10.1016/j.neuron.2011.08.016

7. Kammerer A, Tejero-Cantero Á, Leibold C: Inhibition enhances memory capacity: optimal feedback, transient replay and oscillations. J Comput Neurosci 2013, 34: 125–136. 10.1007/s10827-012-0410-z

8. Nadal JP: Associative memory: on the (puzzling) sparse coding limit. J Phys A 1991, 24: 1093–1101. 10.1088/0305-4470/24/5/023

9. Gibson WG, Robinson J: Statistical analysis of the dynamics of a sparse associative memory. Neural Netw 1992, 5: 645–661. 10.1016/S0893-6080(05)80042-5

10. Willshaw DJ, Buneman OP, Longuet-Higgins HC: Non-holographic associative memory. Nature 1969, 222(5197): 960–962. 10.1038/222960a0

11. Leibold C, Kempter R: Memory capacity for sequences in a recurrent network with biological constraints. Neural Comput 2006, 18(4): 904–941. 10.1162/neco.2006.18.4.904

12. Hirase H, Recce M: A search for the optimal thresholding sequence in an associative memory. Network 1996, 4: 741–756.

13. Kapfer C, Glickfeld L, Atallah B, Scanziani M: Supralinear increase of recurrent inhibition during sparse activity in the somatosensory cortex. Nat Neurosci 2007, 10: 743–753. 10.1038/nn1909

14. Silberberg G, Markram H: Disynaptic inhibition between neocortical pyramidal cells mediated by Martinotti cells. Neuron 2007, 53: 735–746. 10.1016/j.neuron.2007.02.012

15. Amit Y, Huang Y: Precise capacity analysis in binary networks with multiple coding level inputs. Neural Comput 2010, 22(3): 660–688. 10.1162/neco.2009.02-09-967

16. Huang Y, Amit Y: Capacity analysis in multi-state synaptic models: a retrieval probability perspective. J Comput Neurosci 2011, 30(3): 699–720. 10.1007/s10827-010-0287-7

17. Amit DJ, Fusi S: Learning in neural networks with material synapses. Neural Comput 1994, 6: 957–982. 10.1162/neco.1994.6.5.957

18. Fusi S, Drew PJ, Abbott LF: Cascade models of synaptically stored memories. Neuron 2005, 45(4): 599–611. 10.1016/j.neuron.2005.02.001

19. Leibold C, Kempter R: Sparseness constrains the prolongation of memory lifetime via synaptic metaplasticity. Cereb Cortex 2008, 18: 67–77. 10.1093/cercor/bhm037

20. Barrett AB, van Rossum MC: Optimal learning rules for discrete synapses. PLoS Comput Biol 2008, 4(11): Article ID e1000230.

21. Päpper M, Kempter R, Leibold C: Synaptic tagging, evaluation of memories, and the distal reward problem. Learn Mem 2011, 18: 58–70.

22. van Rossum MC, Shippi M, Barrett AB: Soft-bound synaptic plasticity increases storage capacity. PLoS Comput Biol 2012, 8(12): Article ID e1002836.

23. Milekic MH, Alberini CM: Temporally graded requirement for protein synthesis following memory reactivation. Neuron 2002, 36(3): 521–525. 10.1016/S0896-6273(02)00976-5


Acknowledgements

This work was funded by the German Federal Ministry for Education and Research (BMBF) under grant numbers 01GQ0981 (Bernstein Fokus on Neuronal Basis of Learning: Plasticity of Neuronal Dynamics) and 01GQ1004A (Bernstein Center for Computational Neuroscience Munich).

The authors are grateful to Álvaro Tejero Cantero, Axel Kammerer, and Alexander Mathis for comments and discussions.

Author information

Correspondence to Daniel Medina.

Additional information

Competing Interests

The authors declare that they have no competing interests.

Authors’ Contributions

DM and CL performed the mathematical analysis. DM carried out the computer simulations. DM and CL drafted the manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Medina, D., Leibold, C. Inhomogeneous Sparseness Leads to Dynamic Instability During Sequence Memory Recall in a Recurrent Neural Network Model. J. Math. Neurosc. 3, 8 (2013). https://doi.org/10.1186/2190-8567-3-8