Attractor-state itinerancy in neural circuits with synaptic depression
The Journal of Mathematical Neuroscience volume 10, Article number: 15 (2020)
Abstract
Neural populations with strong excitatory recurrent connections can support bistable states in their mean firing rates. Multiple fixed points in a network of such bistable units can be used to model memory retrieval and pattern separation. The stability of fixed points may change on a slower timescale than that of the dynamics due to short-term synaptic depression, leading to transitions between quasi-stable point attractor states in a sequence that depends on the history of stimuli. To better understand these behaviors, we study a minimal model, which characterizes multiple fixed points and transitions between them in response to stimuli with diverse time and amplitude dependencies. The interplay between the fast dynamics of firing rate and synaptic responses and the slower timescale of synaptic depression makes the neural activity sensitive to the amplitude and duration of square-pulse stimuli in a nontrivial, history-dependent manner. Weak cross-couplings further deform the basins of attraction for different fixed points into intricate shapes. We find that while short-term synaptic depression can reduce the total number of stable fixed points in a network, it tends to strongly increase the number of fixed points visited upon repetitions of fixed stimuli. Our analysis provides a natural explanation for the system's rich responses to stimuli of different durations and amplitudes while demonstrating the encoding capability of bistable neural populations for dynamical features of incoming stimuli.
Introduction
Mounting evidence suggests that neural ensembles can give rise to states of activity that are stable and attractor-like over a short period [1–8]. However, given the range of timescales of neural processes, either slower processes or intrinsic noise typically ensures that an activity state does not remain stable for more than a few hundred milliseconds, even when a stimulus is constant. For example, when viewing images that can give rise to bistable percepts, a switching between the distinct perceived images arises [2, 4, 6, 7]. A similar switching can arise with auditory stimuli [1]. Analysis via hidden Markov modeling [9–15] or change-point methods [16, 17] has suggested such state-switching in neural activity in sensory and decision-related tasks. Modeling work has shown how discrete attractor states can arise, and how either noise [18–22], or slow adaptation-like processes such as synaptic depression [23, 24], or a combination of the two [25, 26], can lead to transitions between these states, which we refer to as quasi-stable attractor states [8].
In this article we focus on how short-term synaptic depression [27–29] can lead to the instability of one quasi-stable attractor state, inducing a transition to a new state, which itself may be stable or quasi-stable. Neuronal populations with short-term synaptic depression have been studied extensively. Spontaneous activity in the auditory cortex can be modeled by a spatial firing-rate model with dynamical synapses [30]. Holcman and Tsodyks [31] considered a single rate model with slow synaptic depression: with varying synaptic coupling weights, the UP and DOWN states are interpreted as fixed points, state transitions can be triggered by noisy fluctuations, and the population exhibits a state-dependent response to a constant stimulus. Barak and Tsodyks [32] performed a fast-slow analysis on a rate model with both short-term facilitation and depression, obtaining a bifurcation diagram of synaptic strength versus facilitation index. They found that facilitation enables a slow and reversible transition to persistent firing, whereas depression leads to a rapid and transient increase in activity, referred to as "population spikes". Melamed et al. [33] examined slow oscillations (below 5 Hz) between UP and DOWN states in an E-I rate model with facilitating E-to-I couplings, showing that the oscillation frequency depends on the synaptic time constant and the coupling strength in a pair of E-I populations. Moreover, a thorough bifurcation analysis in Ref. [34] links the stability of the UP state to rich pattern formation in a two-dimensional neural field with synaptic depression. Here we focus on the history dependence of responses to simple inputs in circuits with many such states (arising from multiple bistable units).
However, to provide some insight into the mechanism, we begin with a description of the behavior of a single unit and of two coupled units, in places reiterating the results of others.
Mathematically, if one fixes the amount of synaptic depression by setting the slow synaptic depression variable to a constant, groups of neurons with strong self-feedback can possess multiple stable discrete attractor states. With sufficiently strong depression and feedback, the system can resemble a relaxation oscillator [35]: the depression variable slowly decreases for an active group of neurons, reducing the within-group effective excitatory coupling until the activity can no longer be maintained. Once inactive, the depression variable slowly recovers, allowing connections to restrengthen and activity to recommence. In other parameter ranges, where the stable states of such systems are fixed points, the remnant of this potential oscillatory behavior leads to a rich repertoire of states and state transitions in response to simple stimuli.
We characterize such systems, comprising small numbers of potentially bistable groups of neurons, via the number of stable fixed points and their basins of attraction. The stable steady states can be used to encode information arriving at the circuit via stimuli of varying duration and amplitude [36], so the number of discrete attractor states and the state-transition sequences in response to stimuli provide measures of a network's ability to store dynamical features of stimuli. We therefore assess how different fixed points are reached as a function of the amplitude or duration of stimuli, as well as of the system's state before stimulus onset. In particular, we use an extended Wilson–Cowan model [37] incorporating synaptic depression to show how weak coupling between distinct bistable populations impacts the states' basins of attraction, which can be deformed into complex shapes. In so doing, we offer an initial explanation of the rich information-processing capabilities of high-dimensional networks with multiple attractor states and slow synaptic dynamics.
The rest of this paper is organized as follows: We introduce the rate model with synaptic depression in Sect. 2 and derive the dimensionless form that will be used for later analysis. In Sect. 3, we numerically explore the dynamics of small networks whose responses to constant inputs exhibit history dependence. As system size increases, the synaptic depression enables the network to traverse more states and form longer transition sequences under repetitive stimulations. We summarize in Sect. 4 and discuss some open questions for future research.
Model
We consider a network of N neural populations, each characterized by its mean firing rate \(r_{i}\). The dynamics of the population rate \(r_{i}\) in response to a time-varying current \(I_{i} (t )\) is given by a generic form:
Here \(r_{i}^{\max }\) is the maximum firing rate, \(\Theta _{i}\) is the input threshold for the half-maximum firing rate, and \(\Delta _{i}\) is inversely proportional to the slope of the input-output curve. The input current \(I_{i} (t )\) consists of two parts: (1) synaptic currents from the network with a connectivity \(W_{ij}\) that quantifies the coupling strength from population j to population i; (2) an applied current \(I_{i}^{{\mathrm{app}}} (t )\).
The time-varying effective synaptic input \(s_{i}\), arising from a population i, is given as a fraction of the maximum possible (so \(s_{i}\in [0,1 ]\)). We assume spikes are emitted from the population via a Poisson process and include a short-term synaptic depression factor \(d_{i}\in [0,1 ]\), with 0 indicating a fully depressed synapse. With these assumptions, the mean dynamics of \(s_{i}\) and \(d_{i}\) take the following form [24]:
The parameter \(p_{0}\) gives the fraction of docked vesicles released per spike, and ρ is the fraction of receptors opened by maximal vesicle release, so that \(\rho p_{0}d_{i}\) is the fraction of closed synaptic receptors that open upon a presynaptic spike; it is therefore proportional to the increase in the synaptic current per presynaptic spike.
The time constants for the mean firing rate, the synaptic current, and the depression variable are denoted respectively as \(\tau _{r}\), \(\tau _{s}\), and \(\tau _{d}\). Since these dynamical variables vary over distinct time scales, it is convenient to rescale the time and to normalize the rate: \(t/\tau _{r}\to t\), \(r_{i}/r_{i}^{\max }\to r_{i}\in [0,1 ]\), as well as to scale the input and threshold by \(\Delta _{i}\): \(I_{i}^{\mathrm{app}}/\Delta _{i}\to I_{i}\), \(W_{ij}/\Delta _{i}\to w_{ij}\), and \(\Theta _{i}/\Delta _{i}\to \theta _{i}\). The dimensionless equations then become
where \(f (x )= (1+e^{-x} )^{-1}\) is the logistic function and \(\theta _{i}\) is the activation threshold. The weight matrix \(w_{ij}\) determines the coupling strengths within a unit and between units. The two remaining timescales are characterized by \(\alpha =\tau _{r}/\tau _{s}\) and \(\beta =\tau _{r}/\tau _{d}\). In this paper, we assume that the short-term depression varies over a slow time scale compared with the firing rate and the synaptic current. This situation arises when the timescale for recovery from depression is significantly longer than the other time constants, such that \(\tau _{d}\gg \tau _{s}>\tau _{r}\). For example, we set \(\tau _{d}=250\text{ ms}\), \(\tau _{s}=50\text{ ms}\), \(\tau _{r}=10\text{ ms}\), and \(p_{0}=0.5\) in simulations. The dimensionless parameters
quantify the degree of synaptic depression and the amplitude of synaptic currents, respectively. With slow depression, \(a > b\).
Finally, all cell groups are assumed to be composed of neurons with identical parameters. For most simulations we choose the standard parameter set \(a=6.25\), \(b=1.25\), \(w_{ii}=40\), and \(\theta _{i}=5\), unless noted otherwise. In a control scenario [for example, Fig. 2(b1)–(b4)], to demonstrate the importance of synaptic depression, we produce a network without depression by setting \(\tau _{d}\to 0\), so that \(a\to 0\) and \(d_{i}\to 1\). The firing rate is then driven solely by the synaptic current within a time window of \(\tau _{s}\).
Dynamics
The fixed-point solution of N coupled units satisfies
where \(g (r )=f^{-1}(r)=\ln [r/ (1-r ) ]\) and \(s (r )=\frac{br}{1+ (a+b )r}\) is the steady synaptic current. At a fixed point \(r= (r_{1},\ldots ,r_{N} )^{T}\), the steady values of s and d are given by Eqs. (5) and (6). Linearization at the fixed point leads to a block Jacobian matrix
Here, \(\delta _{ij}\) is the Kronecker delta, \(i,j=1,\ldots ,N\).
A hyperbolic fixed point is a saddle of degree k (\(k=0,1,\ldots ,N\)) if exactly k eigenvalues of the Jacobian have positive real parts. Strong self-excitation can make a single unit bistable (coexistence of two stable nodes and a saddle). For N non-interacting bistable units, the number of saddles of degree k is \(n_{k}=\binom{N}{k}2^{N-k}\): one chooses which k units sit at their saddle and multiplies by the \(2^{N-k}\) combinations of stable states for the remaining \(N-k\) units. The total number of fixed points is then \(\sum_{k=0}^{N}n_{k}=3^{N}\). Since our focus is on the number of stable states reached in response to successive stimuli, we are primarily concerned with the stability of each fixed point. While the imaginary parts of the eigenvalues of the Jacobian indicate whether a fixed point is approached as a spiral (in an oscillatory manner), such transient behavior does not impact its stability. Therefore, when counting the number of steady states, it suffices to consider only the real parts of the eigenvalues.
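The combinatorial count above is easy to verify directly; this snippet (ours, not from the paper) enumerates \(n_k=\binom{N}{k}2^{N-k}\) and checks that the counts sum to \(3^N\):

```python
from math import comb

def n_saddles(N, k):
    """Number of degree-k saddles for N non-interacting bistable units:
    choose which k units sit at their saddle, times 2^(N-k) choices of
    stable state (ON or OFF) for each of the remaining units."""
    return comb(N, k) * 2 ** (N - k)

def total_fixed_points(N):
    # Summing over all saddle degrees k = 0..N gives 3^N fixed points in total.
    return sum(n_saddles(N, k) for k in range(N + 1))
```

For one bistable unit this gives the expected three fixed points (two stable nodes, one saddle), and the count of stable nodes is the k = 0 term, \(2^N\).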
For N weakly coupled bistable populations, the number of saddles grows quickly as N increases and soon outnumbers the stable nodes. The large number of saddles can give rise to heteroclinic sequences or orbits, and therefore to more oscillatory firing-rate behavior. When N is large, the competition between intra- and inter-population couplings leads to chaotic behavior [38]. The rich structure of attractors defines a dynamical "landscape" for the neural activity. It is worth mentioning that we consider small networks with weak recurrent connections, which correspond to the multistable region in Ref. [38]. Strong cross-connections inevitably cause stable fixed points to destabilize or to disappear via merging with unstable fixed points. There is therefore a trade-off between the richness gained with random cross-connections and the reduction in the number of stable states that can result.
In this section, we examine the network's response to constant and repetitive stimuli. We show that short-term synaptic depression and weak inter-population couplings facilitate transitions among multiple fixed points.
History-dependent responses to stimuli
The rich dynamical response of a single population was first observed and systematically discussed in earlier works [31, 32]. Here we revisit the problem, focusing on the history dependence under a stimulus
where H is the Heaviside step function, \(I_{\mathrm{app}}\) is the amplitude, \(\tau _{\mathrm{dur}}\) is the duration, and \(t_{0}\) is the onset time.
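In code, this boxcar stimulus is a direct transcription (the function name is our own):

```python
def boxcar(t, I_app, tau_dur, t0):
    """I(t) = I_app * H(t - t0) * H(t0 + tau_dur - t): amplitude I_app between
    onset t0 and offset t0 + tau_dur, and zero otherwise."""
    return I_app if t0 <= t <= t0 + tau_dur else 0.0
```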
In the presence of synaptic depression, the final state depends not only on the stimulus duration and amplitude, but also on the initial state. For instance, in Fig. 1, a constant stimulus is given at \(t_{0}=500\). The initial state of the bistable unit can be either OFF (marked by "–") or ON (marked by "+"). The unit approaches different final states after the stimulus, exhibiting four types of responses: OFF-to-OFF ("–/–"), OFF-to-ON ("–/+"), ON-to-OFF ("+/–"), and ON-to-ON ("+/+").
Note that only one pairing, OFF-to-ON (a2) with ON-to-OFF (b2), marked by stars in Fig. 1, is maximally history-dependent in the manner we are interested in, since the same stimulus induces two different types of switch depending on the initial state. Such state-dependent, and hence history-dependent, switches will lead to itinerancy in a larger system. Meanwhile, the OFF-to-ON (a3) and ON-to-ON (b3) transitions indicate that the system is bistable, a fact that also leads to a trivial history dependence in that a small stimulus does not cause a state transition [(a4)–(b4)].
Figure 2(a1) shows the final state as a function of the duration \(\tau _{{\mathrm{dur}}}\) and amplitude \(I_{{\mathrm{app}}}\) of the applied stimulus. The top row corresponds to a unit with synaptic depression, the bottom row to a unit without. The final state reached from a single pulse when the system starts in the OFF state (column 1) can differ from the final state when it starts in the ON state [Fig. 2(a2)]. For some values of \(\tau _{{\mathrm{dur}}}\) and \(I_{{\mathrm{app}}}\) [yellow regions in Fig. 2(a3) and (a4)], the state of the unit switches twice when two identical stimuli are applied. Note that there is also a second yellow region around \((\tau _{{\mathrm{dur}}}\approx 60, I_{{\mathrm{app}}}\approx 1 )\). As mentioned above, this switching behavior implies that the system's responses to constant stimuli are history-dependent. The key ingredient here is the synaptic depression. Without depression, as shown in Fig. 2(b3) and (b4), there is either only a single possible transition between the ON and OFF states, or only a single stable state; the unit never switches back and forth under repetitive stimulation.
Basins of attraction deformed by cross-couplings
Even weak inter-population couplings may deform the attracting basins of fixed points by creating new attractors and annihilating old ones. The shapes and sizes of the basins define the landscape in the state space, which affects how the system traverses attractor states before settling into a final state. When a stimulus is applied, the whole landscape shifts. The system's state at the onset time (the history), the stimulus, and the geometry of the basins (shaped by depression and couplings) jointly determine the evolution. This geometric perspective provides a natural explanation of the history-dependent responses.
Take a two-unit system as an example: when the cross-coupling is zero (\(w_{ij}=0\) for \(i\neq j\)), there are four stable fixed points: both units OFF, \((0,0 )\); both ON, \((1,1 )\); and one OFF with the other ON, \((0,1 )\) and \((1,0 )\). Any initial condition converges to one of these four states as its final stable state. Weak coupling (\(w_{ij}\ll w_{ii}\)) may change both the number of fixed points and their stability; the number and sizes of the basins change as well.
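For the uncoupled case, the four stable states can be recovered by self-consistent iteration of the steady-state condition \(r_i = f(\sum_j w_{ij} s(r_j) - \theta)\), using the steady synaptic current \(s(r)=br/(1+(a+b)r)\) from the Dynamics section. The iteration scheme and helper names below are our own sketch, not the authors' code:

```python
import math

A, B, W_SELF, THETA = 6.25, 1.25, 40.0, 5.0

def f(x):
    return 1.0 / (1.0 + math.exp(-x))

def s(r):
    # steady synaptic current quoted in the text
    return B * r / (1.0 + (A + B) * r)

def settle(r, w, n_iter=500):
    """Fixed-point iteration r_i <- f(sum_j w_ij s(r_j) - theta); it converges
    to a stable fixed point when seeded inside that point's basin."""
    for _ in range(n_iter):
        r = tuple(f(sum(w[i][j] * s(r[j]) for j in range(2)) - THETA)
                  for i in range(2))
    return r

w_uncoupled = [[W_SELF, 0.0], [0.0, W_SELF]]
corners = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
# Round each settled rate to 0 (OFF) or 1 (ON) to label the fixed point.
finals = {tuple(round(x) for x in settle(c, w_uncoupled)) for c in corners}
```

The four corner seeds land on four distinct fixed points, matching the count of \(2^{N}\) stable states for N = 2 non-interacting bistable units.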
Figure 3 shows fixed points of two symmetrically coupled units (\(w_{12}=w_{21}\)) with zero input, together with the projected basins in the \(r_{1}\)-\(r_{2}\) plane. While cross-excitation enlarges the basin of the \((1,1)\) state (purple region), cross-inhibition quickly shrinks it. When \(w_{12}=w_{21}=-0.5\), the \((1,1)\) state turns into a saddle, around which the remaining basins deform into a complex structure. The final state thus depends sensitively on the initial state, as well as on the duration and amplitude of a stimulus. Clearly, weak coupling can destabilize fixed points and thus reduce the number of stable nodes. For example, in Fig. 3(a3), under weak mutual inhibition, the \((1,1)\) state turns into a saddle with an intricate basin. Figure 3(b) likewise shows that the areas of the basins change as a function of the cross-coupling strength. It can be anticipated that with greater excitatory cross-connection strength the basin of the \((0,0)\) state will shrink until that state disappears in a saddle-node bifurcation; for large \(w_{ij}\), the \((1,1)\) state is the only stable state left.
Subplots in Fig. 3(c1)–(c2) illustrate that without depression, stable fixed points have regular-shaped basins of attraction. Note that (c1) has the same coupling weights as (a3) and (a4), except that the depression variable is fixed at \(d=1\). In (c2), even strong mutual inhibition (\(w_{ij}=-20\)) does not distort the attracting basins.
More reachable states due to depression
We have seen that weak cross-couplings may reduce the number of stable fixed points from the \(2^{N}\) available in the non-interacting system, suggesting they may decrease the information capacity of a network. However, our results suggest that the cross-couplings can lead to nontrivial dynamics, allowing for an increase in the network's capacity to represent temporal features of stimuli. Here we explore the responses of a network to a sequence of constant stimuli by measuring the number of final stable states reached after uniform perturbations applied to all units. This number reflects the network's capacity to encode and maintain information about the number of stimuli it has received.
Figure 4 shows how the final stable state reached by a circuit of five weakly coupled units can vary according to the amplitude and duration of uniform input provided to all units. Within the circuit, the cross-coupling weights \(w_{ij}\) are drawn from a Gaussian distribution with \(\langle w_{ij} \rangle =0\) and \({\mathrm{std}} (w_{ij} )=0.1\). Other parameters are chosen such that each unit is bistable when isolated. Ranging from sharp pulses to sustained currents [Fig. 4(a1)–(a10)], different combinations of durations and amplitudes drive the same initial state \((01001 )\) into ten final states: \((01101 )\), \((11110 )\), \((10110 )\), \((00100 )\), \((00000 )\), \((01000 )\), \((11100 )\), \((11111 )\), \((01101 )\), and \((11101 )\).
We next wished to assess how the number of final states reachable by application of a uniform stimulus (a boxcar stimulus applied equally to all units) depends on the parameters used to produce small networks. To this end, we produced multiple instantiations of networks using random weight matrices. For each network (\(w_{ij}\) fixed), the total number of stable fixed points can be calculated. We perturb an initial state \((01001 )\) by applying constant inputs equally to all units of the network and count the number of distinct steady states reached (i.e., the number of reachable final states) as we vary the duration and amplitude of the stimulus. We average across networks to obtain expectation values of the total fixed-point number and the number of reachable states as functions of the mean μ and the standard deviation σ of the cross-connections \(w_{ij}\). Moreover, we assess how these results depend on the inclusion of short-term synaptic depression in our simulations. Specifically, we consider three cases: (1) strong self-coupling (\(w_{ii}=40\)) with depression, (2) strong self-coupling (\(w_{ii}=40\)) without depression, and (3) medium self-coupling (\(w_{ii}=20\)) without depression.
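The counting protocol just described can be sketched as follows. This is a hedged reimplementation with reconstructed kinetics (chosen to match the quoted steady state \(s(r)=br/(1+(a+b)r)\)); the grid values, seed, threshold, and helper names are our own assumptions: simulate a small network from a fixed initial state under each (duration, amplitude) pair, binarize the settled rates, and count the distinct labels.

```python
import itertools
import numpy as np

A, B, ALPHA, BETA, THETA = 6.25, 1.25, 0.2, 0.04, 5.0
rng = np.random.default_rng(0)

def f(x):
    return 1.0 / (1.0 + np.exp(-x))

def final_state(w, r0, I_app, tau_dur, t0=100.0, T=2000.0, dt=0.1):
    """Binary label of the settled network state after one uniform boxcar pulse."""
    r = np.asarray(r0, float).copy()
    s = B * r / (1.0 + (A + B) * r)
    d = 1.0 / (1.0 + A * r)
    for k in range(int(T / dt)):
        t = k * dt
        I = I_app if t0 <= t < t0 + tau_dur else 0.0
        r += dt * (-r + f(w @ s + I - THETA))
        s += dt * ALPHA * (-s + B * r * d * (1.0 - s))
        d += dt * BETA * (1.0 - d - A * r * d)
    return tuple((r > 0.3).astype(int))    # threshold separates ON from OFF rates

N = 5
# strong self-coupling, weak Gaussian cross-coupling (mean 0, std 0.1)
w = 40.0 * np.eye(N) + rng.normal(0.0, 0.1, (N, N)) * (1 - np.eye(N))
r0 = np.array([0.0, 1.0, 0.0, 0.0, 1.0])   # initial state (01001)
grid = itertools.product([20.0, 60.0, 200.0], [0.5, 1.0, 2.0])
reachable = {final_state(w, r0, I, dur) for dur, I in grid}
```

The size of `reachable` is the number of distinct final states this (duration, amplitude) grid can drive the network into; averaging it over random weight matrices gives the quantity plotted against μ and σ.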
As shown in Fig. 5, the circuits with strong self-coupling plus depression (case 1) outperform the other two cases in the number of reachable final states (open squares) across a broad range of parametric variation of the random cross-coupling matrix. Networks without synaptic depression and with medium self-coupling (case 3; Fig. 5, red open circles) have the same number of total fixed points in circuits with the relatively weak cross-connections tested here. Indeed, since depression can destabilize active states, the total number of stable fixed points can be greater in networks without depression in many other parameter ranges (data not shown). However, the networks without depression have a far smaller repertoire of final states reachable by presentation of uniform stimuli. The networks with strong self-coupling and no depression (case 2) have the poorest performance in both measures, primarily because these networks are near the edge of their bistable region.
Intuitively, by reducing the effective strength of recurrent synapses of active units, synaptic depression makes it much easier for a stimulus that has activated a unit to subsequently inactivate the same unit (Figs. 1 and 2). This non-monotonic responsiveness at the single-unit level, combined with the intricate structure of the basins of attraction of two coupled units (Fig. 3), enhances the repertoire of states reached by repeated stimuli when synaptic depression is included. Adaptation currents would have a very similar impact.
These results indicate that strong self-coupling combined with synaptic depression provides an underlying mechanism for attractor itinerancy, because extending the duration of a stimulus more often causes the network's activity to transition into a new basin of attraction, leading to a new final stable state. That is, the duration dependence of the final state, most evident in networks with synaptic depression, is an indication of attractor-state itinerancy.
Repeated stimuli cause transitions through sequences of distinct states
In this section, to highlight the history dependence of the attractor-state itinerancy observed in these networks, we examine the network's response to a series of repeated stimuli. As an illustration, consider a randomly connected network of ten units receiving such a train of identical inputs. The network's stable fixed points, as well as their basins of attraction, provide key information for estimating the sequences of states. Thanks to the relatively small size of the system, it is feasible to find all of its stable fixed points. The frequency of occurrence of a given fixed point under random sampling of initial conditions can be viewed as the probability of finding it in the state space, which is proportional to the size of its attracting basin. Figure 6(a) lists all 38 stable fixed points in a particular ten-unit network with \(\langle w_{ij} \rangle =0.2\) and \({\mathrm{std}} (w_{ij} )=1\), sorted according to their frequency of occurrence.
To explore this non-autonomous system, we start from each one of the fixed points and apply a train of constant stimuli with fixed duration (\(\tau _{{\mathrm{dur}}}=20\)) and amplitude (\(I_{{\mathrm{app}}}=1\)). Subsequent stimuli are separated by \(\tau =1000\) time units to make sure transients have completely settled. We then follow every trajectory in the state space and collect statistics on the number of unique states along trajectories.
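The stimulation protocol can be sketched as below. This is our hedged reconstruction (the kinetics are chosen to match the quoted steady state \(s(r)=br/(1+(a+b)r)\); the weights, seed, threshold, and helper names are assumptions): repeated identical pulses separated by long quiet intervals, recording the binarized network state after each settling period.

```python
import numpy as np

A, B, ALPHA, BETA, THETA = 6.25, 1.25, 0.2, 0.04, 5.0
rng = np.random.default_rng(1)

def f(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(r, s, d, w, I, dt):
    """One Euler step of the reconstructed (r, s, d) dynamics."""
    r2 = r + dt * (-r + f(w @ s + I - THETA))
    s2 = s + dt * ALPHA * (-s + B * r * d * (1.0 - s))
    d2 = d + dt * BETA * (1.0 - d - A * r * d)
    return r2, s2, d2

def state_sequence(w, r0, n_pulses, I_app=1.0, tau_dur=20.0, gap=1000.0, dt=0.1):
    """Apply a train of identical boxcar pulses; return the binary state
    reached after each inter-stimulus settling period."""
    r = np.asarray(r0, float).copy()
    s = B * r / (1.0 + (A + B) * r)
    d = 1.0 / (1.0 + A * r)
    visited = []
    for _ in range(n_pulses):
        for _ in range(int(tau_dur / dt)):      # stimulus on
            r, s, d = step(r, s, d, w, I_app, dt)
        for _ in range(int(gap / dt)):          # stimulus off: let transients settle
            r, s, d = step(r, s, d, w, 0.0, dt)
        visited.append(tuple((r > 0.3).astype(int)))
    return visited

N = 10
# strong self-coupling; cross-couplings with mean 0.2 and std 1, as in Fig. 6
w = 40.0 * np.eye(N) + (0.2 + rng.normal(0.0, 1.0, (N, N))) * (1 - np.eye(N))
r0 = rng.integers(0, 2, N).astype(float)
seq = state_sequence(w, r0, n_pulses=6)
```

Counting the unique entries of `seq` gives the length of the state-transition sequence analyzed in the following figures.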
Figure 6(b) and (c) illustrates such trajectories originating from state 13 (marked by a black triangle in (a1)) in two scenarios:
In (b1) and (b2), all ten units receive the same stimuli in circuits with depression (b1) and without depression (b2). In response to the periodic perturbation, the network with depression (b1) falls into a stable cycle, \(13\to 8\to (3\to 4\to 2\to 1\to 31 )\) with a length of five, whereas the network without depression (b2) quickly converges to a steady state (state 6).
In (c1) and (c2), only five randomly chosen units (1, 2, 4, 7, 8) out of the ten receive the inputs. The randomness induces a period-4 sequence in the network with depression (c1): \(13\to 10\to (23\to 7\to 11\to 21 )\). But in (c2), when depression is absent, the network settles into a steady state (state 20) after one stimulus. Notice that in both cases some targeted units are suppressed by weak cross-inhibition. For other random subsets (data not shown), non-targeted units can be excited through reciprocal connections in the network.
To assess the generality of such behavior, we count the average length \(\langle \ell \rangle \) as well as the maximum length \(\langle \ell _{\max } \rangle \) of state-transition sequences as a function of the amplitude \(I_{{\mathrm{app}}}\) of repeated stimuli of fixed duration \(\tau _{{\mathrm{dur}}}=20\). The results are summarized in Fig. 7, where we compare two stimulus protocols: either five randomly chosen units receive inputs (a1, a2) or all ten units receive inputs (b1, b2). As before, we compare circuits with synaptic depression (blue circles) and without synaptic depression (black circles). In all cases, the network with synaptic depression achieves longer sequences of distinct states.
Trends with increasing network size
We have seen that synaptic depression leads to more state transitions and longer sequences for small random networks. It is tempting to explore the scaling behavior as the network size N increases. In Fig. 8, we estimate the average length \(\langle \ell \rangle \) and the maximum length \(\langle \ell _{\max } \rangle \) of the sequences of distinct states produced by repeated identical stimuli in networks of different sizes. In (a1, b1), a fixed random half of all units receive identical inputs. In (a2, b2), all units receive identical inputs. In (a3, b3), all units receive random inputs that are drawn from an exponential distribution with the same mean (equal to the constant value in a2, b2). The results are qualitatively similar in all three of these conditions of fixed input per stimulus. In all cases the inclusion of synaptic depression (blue circles) leads to longer sequences of distinct states.
Since the number of attractor states scales exponentially with N in the limit of low cross-coupling, one might expect the length of sequences following repeated stimuli to consistently increase with further increases in N. Therefore, we simulated networks with \(N = 20\), \(N=50\), and \(N=100\) and assessed their properties by sampling initial conditions (it is not feasible to test the presence of, and characterize, all states when they number on the order of 2^{50} or 2^{100}). While our simulations did suggest an exponentially increasing number of attractor states with increasing N, the transient chaos present in the large-N limit [38] reduces the practical use of these states for encoding sequence information. Specifically, the duration of transient dynamics increases with N such that, for example, with \(N=100\) (and \(\langle w_{ij} \rangle =0\), \({\mathrm{std}}(w_{ij})=0.1\)) the majority of initial conditions did not lead to a steady state within five seconds. Therefore, while the number of distinct vectors of network firing rates could increase with successive stimuli (up to 100 or more), the firing rates were not stable, so final states depended sensitively on the interval between stimuli. Such behavior was present in networks both with and without synaptic depression, the main distinction being that larger cross-connections and larger stimuli were needed to produce dynamical responses if depression were absent.
When we only counted state sequences for which activity reached a fixed point within five seconds of each stimulus offset, we found that with an optimal strength of cross-coupling, the lengths of state sequences increased as N increased from 10 to 20 to 50, but then leveled out by \(N = 100\). Specifically, as N increased from 20 to 50 to 100, the mean length \(\langle \ell \rangle \) increased from 3.5 to 6.3 and then decreased to 5.2 in the networks with depressing synapses, while it increased from 2.3 to 3.4 to 4.5 in the networks without depressing synapses. Similarly, the across-network average of the maximum sequence length \(\langle \ell _{\max } \rangle \) increased from 7.7 to 12.5 to 13.6 in the networks with depressing synapses and from 3.5 to 6.4 to 11.7 in the networks without depressing synapses. However, if we removed the restriction that a steady state be reached between stimuli, or increased the delay between stimuli, sequences of distinct states many tens in length were common (following repeated identical stimuli) by \(N=100\).
Discussion
In this paper, we consider small circuits of bistable neural populations with synaptic depression, focusing on the circuit responses to uniform stimuli with different amplitudes and durations. Because of the negative feedback generated by synaptic depression, which operates on a slow time scale in comparison to that for changes in firing rate or synaptic current, the system has an underlying oscillatory component. The oscillatory component can cause an intricate deformation of the basins of attraction that separate the fixed points where individual units are either active or inactive. The final state of the system reached after a perturbing stimulus thus sensitively depends on the properties of the stimulus.
In the absence of cross-coupling, the number of stable fixed points of the system is \(2^{N}\), where N is the number of bistable units. While the number of stable fixed points is maximized in this limit, the lack of interaction between units means the responses to stimuli are rather limited and the history dependence is trivial. Conversely, with very strong cross-couplings, subsets of units become very highly correlated in their activity, reducing the effective N: for example, two units with strong reciprocal cross-excitation are always ON together or OFF together, so they act together more like a single unit. We find that with weak cross-couplings, the total number of stable fixed points can remain high, while the interaction between units enables a simple, uniform stimulus (identical to all units) to cause a network response that traces a high-dimensional trajectory through the space of units' activities. The high dimensionality of the response leads to history dependence and richness in the types of stable states achievable by a stimulus that excites all units equally. This behavior allows networks of many units to retain separate information about the amplitude, duration, and number of identical, repeated stimuli [24, 36].
Our work follows that of others demonstrating the richness of states in networks with coupled units. Prior work showed that in the macroscopic limit, with weak self-coupling and strong, balanced cross-coupling, a chaotic regime exists [39], whereas when the self-coupling is strong enough that each unit is bistable, multiple stable states exist and can be reached via transient chaos [38]. Here, we focused on smaller circuits and included the impact of synaptic depression, a common feature of cortical synapses. Synaptic depression can reduce the total number of fixed points by reducing the stability of the ON state (active synapses are effectively weakened by depression). However, the same effect can enhance the number of states reachable by a uniform stimulus, as a weakening of the connections within previously active units allows new units to become ON when the duration of the stimulus is extended. Similarly, such relative destabilization of previously active states enhances the history dependence of stimulus responses and causes the network's activity to explore a wider range of the state space. We expect that incorporation of firing-rate adaptation in the neural responses would have a similar effect in destabilizing active states.
Our results show that the network responses are richer when successive stimuli target only a subset of the units, instead of all of them. In this study we considered stimuli that target a randomly selected half of the units, with successive stimuli exciting the same set of units in an identical manner. One might imagine that such selective targeting would reduce the overall repertoire of responses, by constraining OFF-to-ON transitions to those units receiving the stimulus. However, the results of our single-unit studies (Figs. 1 and 2) demonstrate that excitatory input to a unit can switch it from ON to OFF as well as from OFF to ON, and the reciprocal connections within the network allow non-excited units to change their states.
The dependence of network activity on the duration of stimuli or interval between stimuli is particularly noticeable when intervals on the order of a few hundred milliseconds are present in auditory tasks. Synaptic depression operates on a suitable time scale to produce the ongoing network dynamics that could account for such interval or duration dependence [40].
While our work here focuses on the dynamics of network behavior in the presence of a stimulus which is constant in time, the dependence on initial conditions of the network's response to a given stimulus imbues the network with history dependence. Therefore, the network can respond differently according to the number, type, and/or order of preceding stimuli [24, 36, 40]. In this manner, such networks could account for the observed transitions of neural activity through a set of distinct attractor states during a counting task [36, 41] and could even provide a basis for context-dependent integration of stimulus properties [42].
Notes
We choose a rate vector \(r_{0}=(r_{1},r_{2})^{T}\) and set the initial condition as \((r_{0},s(r_{0}),d(r_{0}))\), then label the final state after integrating the differential equations for a long time.
We use a binary code to represent the state vector: 0 and 1 stand for the OFF state and ON state, respectively, with firing rates \(r_{{\mathrm{OFF}}}\approx 0.01\) and \(r_{{\mathrm{ON}}}\approx 0.6\).
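This labeling can be implemented by comparing each unit's final rate against a cut between the two attractor rates. A minimal sketch (using the midpoint of \(r_{\mathrm{OFF}}\) and \(r_{\mathrm{ON}}\) as the threshold is our assumption; any cut well between 0.01 and 0.6 gives the same labels):

```python
def binary_label(rates, r_off=0.01, r_on=0.6):
    # Map a vector of final firing rates to a binary state string:
    # '0' = OFF (rate near r_off), '1' = ON (rate near r_on).
    threshold = 0.5 * (r_off + r_on)
    return "".join("1" if r > threshold else "0" for r in rates)

print(binary_label([0.61, 0.012, 0.58]))  # '101'
```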
Abbreviations
HB: Hopf bifurcation
LP: Limit point
SN: Saddle-node
SHO: Saddle-homoclinic orbit
References
1. Snowdon CT. Response of non-human animals to speech and to species-specific sounds. Brain Behav Evol. 1979;16(5–6):409–29.
2. Fuster JM, Jervey JP. Inferotemporal neurons distinguish and retain behaviorally relevant features of visual stimuli. Science. 1981;212(4497):952–5.
3. Funahashi S, Bruce CJ, Goldman-Rakic PS. Mnemonic coding of visual space in the monkey's dorsolateral prefrontal cortex. J Neurophysiol. 1989;61(2):331–49.
4. Sigala N, Logothetis NK. Visual categorization shapes feature selectivity in the primate temporal cortex. Nature. 2002;415(6869):318.
5. Leutgeb JK, Leutgeb S, Treves A, Meyer R, Barnes CA, McNaughton BL, Moser MB, Moser EI. Progressive transformation of hippocampal neuronal representations in "morphed" environments. Neuron. 2005;48(2):345–58.
6. Rotshtein P, Henson RN, Treves A, Driver J, Dolan RJ. Morphing Marilyn into Maggie dissociates physical and identity face representations in the brain. Nat Neurosci. 2005;8(1):107.
7. Daelli V, Treves A. Neural attractor dynamics in object recognition. Exp Brain Res. 2010;203(2):241–8.
8. Miller P. Itinerancy between attractor states in neural systems. Curr Opin Neurobiol. 2016;40:14–22.
9. Deppisch J, Pawelzik K, Geisel T. Uncovering the synchronization dynamics from correlated neuronal activity quantifies assembly formation. Biol Cybern. 1994;71(5):387–99.
10. Radons G, Becker J, Dülfer B, Krüger J. Analysis, classification, and coding of multielectrode spike trains with hidden Markov models. Biol Cybern. 1994;71(4):359–73.
11. Gat I, Tishby N, Abeles M. Hidden Markov modelling of simultaneously recorded cells in the associative cortex of behaving monkeys. Netw Comput Neural Syst. 1997;8(3):297–322.
12. Otterpohl J, Haynes J, Emmert-Streib F, Vetter G, Pawelzik K. Extracting the dynamics of perceptual switching from 'noisy' behaviour: an application of hidden Markov modelling to pecking data from pigeons. J Physiol (Paris). 2000;94(5–6):555–67.
13. Rainer G, Miller EK. Neural ensemble states in prefrontal cortex identified using a hidden Markov model with a modified EM algorithm. Neurocomputing. 2000;32:961–6.
14. Jones LM, Fontanini A, Sadacca BF, Miller P, Katz DB. Natural stimuli evoke dynamic sequences of states in sensory cortical ensembles. Proc Natl Acad Sci USA. 2007;104(47):18772–7.
15. Escola S, Fontanini A, Katz D, Paninski L. Hidden Markov models for the stimulus-response relationships of multistate neural systems. Neural Comput. 2011;23(5):1071–132.
16. Abeles M, Bergman H, Gat I, Meilijson I, Seidemann E, Tishby N, Vaadia E. Cortical activity flips among quasi-stationary states. Proc Natl Acad Sci USA. 1995;92(19):8616–20.
17. Latimer KW, Yates JL, Meister ML, Huk AC, Pillow JW. Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science. 2015;349(6244):184–7.
18. Miller P, Katz DB. Stochastic transitions between neural states in taste processing and decision-making. J Neurosci. 2010;30(7):2559–70.
19. Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat Neurosci. 2012;15(11):1498.
20. Miller P, Katz DB. Accuracy and response-time distributions for decision-making: linear perfect integrators versus nonlinear attractor-based neural circuits. J Comput Neurosci. 2013;35(3):261–94.
21. Doiron B, Litwin-Kumar A. Balanced neural architecture and the idling brain. Front Comput Neurosci. 2014;8:56.
22. Ashwin P, Creaser J, Tsaneva-Atanasova K. Sequential escapes: onset of slow domino regime via a saddle connection. Eur Phys J Spec Top. 2018;227(10–11):1091–100.
23. Kilpatrick ZP, Bressloff PC. Binocular rivalry in a competitive neural network with synaptic depression. SIAM J Appl Dyn Syst. 2010;9(4):1303–47.
24. Miller P. Stimulus number, duration and intensity encoding in randomly connected attractor networks with synaptic depression. Front Comput Neurosci. 2013;7:59.
25. Moreno-Bote R, Rinzel J, Rubin N. Noise-induced alternations in an attractor network model of perceptual bistability. J Neurophysiol. 2007;98(3):1125–39.
26. Shpiro A, Moreno-Bote R, Rubin N, Rinzel J. Balance between noise and adaptation in competition models of perceptual bistability. J Comput Neurosci. 2009;27(1):37.
27. Tsodyks MV, Markram H. The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc Natl Acad Sci USA. 1997;94(2):719–23.
28. Varela JA, Sen K, Gibson J, Fost J, Abbott L, Nelson SB. A quantitative description of short-term plasticity at excitatory synapses in layer 2/3 of rat primary visual cortex. J Neurosci. 1997;17(20):7926–40.
29. Tsodyks M, Pawelzik K, Markram H. Neural networks with dynamic synapses. Neural Comput. 1998;10(4):821–35.
30. Bart E, Bao S, Holcman D. Modeling the spontaneous activity of the auditory cortex. J Comput Neurosci. 2005;19(3):357–78.
31. Holcman D, Tsodyks M. The emergence of up and down states in cortical networks. PLoS Comput Biol. 2006;2(3):23.
32. Barak O, Tsodyks M. Persistent activity in neural networks with dynamic synapses. PLoS Comput Biol. 2007;3(2):35.
33. Melamed O, Barak O, Silberberg G, Markram H, Tsodyks M. Slow oscillations in neural networks with facilitating synapses. J Comput Neurosci. 2008;25(2):308.
34. Kilpatrick ZP, Bressloff PC. Spatially structured oscillations in a two-dimensional excitatory neuronal network with synaptic depression. J Comput Neurosci. 2010;28(2):193–209.
35. Tabak J, Senn W, O'Donovan MJ, Rinzel J. Modeling of spontaneous activity in developing spinal cord using activity-dependent depression in an excitatory network. J Neurosci. 2000;20(8):3041–56.
36. Ballintyn B, Shlaer B, Miller P. Spatiotemporal discrimination in attractor networks with short-term synaptic plasticity. J Comput Neurosci. 2019;46(3):279–97.
37. Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972;12(1):1–24.
38. Stern M, Sompolinsky H, Abbott L. Dynamics of random neural networks with bistable units. Phys Rev E. 2014;90(6):062710.
39. Van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science. 1996;274(5293):1724–6.
40. Goudar V, Buonomano DV. A model of order-selectivity based on dynamic changes in the balance of excitation and inhibition produced by short-term synaptic plasticity. J Neurophysiol. 2015;113(2):509–23.
41. Morcos AS, Harvey CD. History-dependent variability in population dynamics during evidence accumulation in cortex. Nat Neurosci. 2016;19(12):1672.
42. Mante V, Sussillo D, Shenoy KV, Newsome WT. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature. 2013;503(7474):78.
43. Ermentrout GB, Terman DH. Mathematical foundations of neuroscience. 1st ed. vol. 35. New York: Springer; 2010.
44. Beer RD. On the dynamics of small continuous-time recurrent neural networks. Adapt Behav. 1995;3(4):469–509.
45. Nan P, Wang Y, Kirk V, Rubin JE. Understanding and distinguishing three-timescale oscillations: case study in a coupled Morris–Lecar system. SIAM J Appl Dyn Syst. 2015;14(3):1518–57.
Acknowledgements
BC acknowledges financial support from the Swartz Foundation, as well as helpful conversations with Benjamin Ballintyn. Simulations were performed using Brandeis University's High Performance Computing Cluster, which is partially funded by DMR-MRSEC 1420382.
Availability of data and materials
The datasets generated and/or analysed during the current study are available at https://github.com/blchen00/attractoritinerancypaper/.
Funding
This work was funded by the Swartz Foundation, Grants #20176 and #20186, and NIH (NINDS) R01NS104818.
Author information
Affiliations
Contributions
PM designed the project. BC carried out the analysis. Both authors wrote this paper, read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Not applicable.
Appendix
Bifurcation analysis for a single unit
At a fixed point \((r,s,d)\) of a single unit, \(f'=f(1-f)=r(1-r)\). The stability is captured by the Jacobian (Eq. (9)), whose eigenvalues λ are roots of a cubic characteristic polynomial \(P(\lambda )=\lambda ^{3}+A_{2}\lambda ^{2}+A_{1}\lambda +A_{0}\) with coefficients
Using the Routh–Hurwitz criterion [43], the fixed point is stable (\(\operatorname{Re}\lambda <0\)) if \(A_{0}, A_{2}>0\) and \(A_{1}A_{2}-A_{0}\equiv H_{2}>0\). When an eigenvalue becomes zero (\(A_{0}=0\)) or a pair becomes purely imaginary (\(H_{2}=0\) with \(A_{0}, A_{2}>0\)), the fixed point undergoes a saddle-node (SN) or a Hopf bifurcation (HB), respectively.
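The criterion translates directly into code. A minimal sketch of the stability test and bifurcation classification for the cubic characteristic polynomial (the tolerance handling is our own convenience, not from the paper):

```python
def classify_fixed_point(A2, A1, A0, tol=1e-9):
    """Routh-Hurwitz analysis of P(l) = l^3 + A2*l^2 + A1*l + A0.

    All roots satisfy Re(l) < 0 iff A2 > 0, A0 > 0 and
    H2 = A1*A2 - A0 > 0.
    """
    H2 = A1 * A2 - A0
    if abs(A0) < tol:
        return "saddle-node (zero eigenvalue)"
    if abs(H2) < tol and A0 > 0 and A2 > 0:
        return "Hopf (purely imaginary pair)"
    if A0 > 0 and A2 > 0 and H2 > 0:
        return "stable"
    return "unstable"

# (l + 1)^3 = l^3 + 3l^2 + 3l + 1 has all roots at -1:
print(classify_fixed_point(3, 3, 1))  # stable
```

For example, \(\lambda ^{3}+\lambda ^{2}+2\lambda +2=(\lambda +1)(\lambda ^{2}+2)\) has the purely imaginary pair \(\pm i\sqrt{2}\), and indeed \(H_{2}=1\cdot 2-2=0\) with \(A_{0},A_{2}>0\), so the classifier reports a Hopf point.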
The condition for a saddle-node bifurcation (\(A_{0}=0\)) is equivalent to
Since \(\dot{r}=0\), \(r=f(ws-\theta +I)\). This implies that θ must satisfy
Graphing w and θ as two parametric equations in r gives the boundary of the bistable region in the w–θ plane [black solid lines in Fig. 5(a)]. Similar wedge boundaries were found in [44] for rate models without synaptic depression.
The boundary lines terminate at a cusp
where a codimension-2 bifurcation takes place. The wedge has a width
which scales linearly with w when \(w\gg w_{c}\). Thus it is easier to obtain bistability with stronger self-excitation.
Since \(A_{2}\) is always positive, the condition for a Hopf bifurcation (\(A_{1}A_{2}=A_{0}>0\)) leads to \(w=P(r)/Q(r)\equiv w_{{\mathrm{HB}}}(r)\) with
Fixing θ and treating \(I_{{\mathrm{app}}}\) as a parameter, we get an equation for \(I_{{\mathrm{app}}}\) at the Hopf bifurcation:
Figure 9(a) illustrates bifurcations of the synaptic current s as a function of the applied stimulus \(I_{{\mathrm{app}}}\) (with w and θ fixed): For large inhibitory input, the OFF state is the only attractor. An unstable ON state and a saddle point emerge from a saddle-node (SN) bifurcation at \(I_{{\mathrm{SN}}_{1}}\approx -0.46\). The ON state becomes stable when a subcritical Hopf bifurcation (HB) takes place at \(I_{{\mathrm{HB}}}\approx -0.07\), which gives rise to an unstable limit cycle around the ON state. At \(I_{{\mathrm{SHO}}}\approx -0.02\), this limit cycle merges with the saddle point via a saddle-homoclinic orbit (SHO). The system is bistable at zero input and remains so until another SN bifurcation at a larger excitation (\(I_{{\mathrm{SN}}_{2}}=0.3\)). Note that the oscillatory solutions stem from the slow negative feedback of the depression.
We separate the fast and the slow variables by assuming r reaches a steady value given a synaptic current s, \(r\approx \bar{r}(s)=f(ws-\theta +I)\). The reduced model (s vs d) captures the full model's dynamics except for slightly shifted Hopf and SHO bifurcations (\(I_{{\mathrm{HB}}}\approx -0.02\) and \(I_{{\mathrm{SHO}}}\approx -0.05\)). Thus, to determine the system's state under constant input, it is sufficient to study the s–d model.
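Since the identity \(f'=f(1-f)\) stated above pins f down as the logistic function, the quasi-steady rate \(\bar{r}(s)\) can be evaluated directly. A minimal sketch (the numerical-derivative check is ours; parameter values are illustrative, not the paper's):

```python
import math

def f(x):
    # logistic response function; satisfies f'(x) = f(x) * (1 - f(x))
    return 1.0 / (1.0 + math.exp(-x))

def r_bar(s, w, theta, I_app):
    # quasi-steady firing rate on the fast timescale: r ~ f(w*s - theta + I)
    return f(w * s - theta + I_app)

# numerical check of the identity f' = f(1 - f) at an arbitrary point
x, h = 0.3, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)
print(abs(numeric - f(x) * (1 - f(x))) < 1e-8)  # True
```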
In Fig. 9(b), we plot \(w_{{\mathrm{HB}}}\) and \(I_{{\mathrm{app}}}\) as parametric functions of r. The bistable region lies above the HB curve (red line) and below the upper boundary of the LP curve (black line). The wedge region ends at a cusp point \((w_{c},I_{c})=(4b^{-1}(a+b+1), \theta -\ln (a+b+1)-2)\).
Figure 9(c) shows the fixed point's bifurcations with varying time constants \(\tau _{s}\) and \(\tau _{d}\), which have a similar wedge-shaped structure to that in Fig. 9(a). The region between the left boundary of the limit point (LP) curve and the HB curve supports bistable solutions. Hence the bistability is robust for slow depression and a wide range of synaptic time constants.
Figure 9(d) graphs the stable states and manifolds of fixed points in the s–d plane. As the input increases, the basin of attraction of the ON state grows quickly. A strong excitatory stimulus may kick the system near the ON state. When the input shuts off, the vector field and basins of attraction revert to those of the case with \(I_{\mathrm{app}}=0\). The system's instantaneous location in the s–d plane may then lie in the small basin of the ON state or the large basin of the OFF state, leading to distinct final states. Similar rebound behavior exists in conductance-based models with slow calcium channels [43] and is a generic feature of fast–slow systems [45]. The deformation of the basins of attraction by slow depression results in history-dependent responses to stimulation.
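The basin-labeling procedure described in the Notes (integrate from an initial condition, then read off the final state) can be sketched as follows. For brevity we use a toy one-variable rate model, \(\tau \dot{r}=-r+f(wr-\theta +I)\), standing in for the full \((r,s,d)\) system; the dynamics and all parameter values here are illustrative assumptions, not the paper's:

```python
import math

def f(x):
    # logistic response function
    return 1.0 / (1.0 + math.exp(-x))

def label_final_state(r0, w=10.0, theta=5.0, I_app=0.0,
                      dt=0.01, t_max=100.0):
    """Euler-integrate dr/dt = -r + f(w*r - theta + I) from r(0) = r0
    (time constant set to 1) and label the final state ON (1) or OFF (0)."""
    r = r0
    for _ in range(int(t_max / dt)):
        r += dt * (-r + f(w * r - theta + I_app))
    return 1 if r > 0.5 else 0

# Bistability: with no applied input the final label depends on the
# initial condition, i.e. on which basin of attraction r0 lies in.
print(label_final_state(0.9), label_final_state(0.1))  # 1 0
```

Scanning `r0` (or, in the reduced model, a grid of initial conditions in the s–d plane) and recording the labels traces out the basins of attraction shown in Fig. 9(d).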
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Chen, B., Miller, P. Attractor-state itinerancy in neural circuits with synaptic depression. J. Math. Neurosci. 10, 15 (2020). https://doi.org/10.1186/s13408-020-00093-w
Keywords
 Attractors
 Bistability
 Synaptic depression