Symmetries Constrain Dynamics in a Family of Balanced Neural Networks
 Andrea K. Barreiro^{1},
 J. Nathan Kutz^{2} and
 Eli Shlizerman^{2}
https://doi.org/10.1186/s13408-017-0052-6
© The Author(s) 2017
Received: 12 April 2017
Accepted: 19 September 2017
Published: 10 October 2017
Abstract
We examine a family of random firing-rate neural networks in which we enforce the neurobiological constraint of Dale's Law—each neuron makes either excitatory or inhibitory connections onto its postsynaptic targets. We find that this constrained system may be described as a perturbation from a system with nontrivial symmetries. We analyze the symmetric system using the tools of equivariant bifurcation theory and demonstrate that the symmetry-implied structures remain evident in the perturbed system. In comparison, spectral characteristics of the network coupling matrix are relatively uninformative about the behavior of the constrained system.
Keywords
Recurrent networks, Random network, Bifurcations, Equivariant, Symmetry
1 Introduction
Networked dynamical systems are of growing importance across the physical, engineering, biological, and social sciences. Indeed, understanding how network connectivity drives network functionality is critical for understanding a broad range of modern-day systems including the power grid, communications networks, the nervous system, and social networking sites. All of these systems are characterized by a large and complex graph connecting many individual units, or nodes, each with its own input–output dynamics. In addition to the node dynamics, how such a system operates as a whole depends on the structure of its connectivity graph [1–3], but the connectivity is often so complicated that this structure–function problem is difficult to solve.
Nevertheless, a ubiquitous observation across the sciences is that the meaningful input/output signals of high-dimensional networks are often encoded in low-dimensional patterns of dynamic activity. This suggests that a central role of the network structure is to produce low-dimensional representations of meaningful activity. Furthermore, since connectivity also drives the underlying bifurcation structure of the network-scale activity, and because both this activity and the relevant features of the connectivity graph are low-dimensional, such networks may admit a tractable structure–function relationship. Interestingly, the presence of low-dimensional structure may run counter to the intuition provided by random network theory, which has otherwise proven to be a valuable tool in analyzing large networks.
In considering an excitatory–inhibitory network inspired by neuroscience, we find a novel family of periodic solutions that restrict dynamics to a low-dimensional attractor within a high-dimensional phase space. These solutions arise as a consequence of an underlying symmetry in the mean connectivity structure and can be predicted and analyzed using equivariant bifurcation theory. We then show that low-dimensional models of the high-dimensional network, which are more tractable for computational bifurcation studies, preserve all the key features of the bifurcation structure. Finally, we demonstrate that these dynamics differ strikingly from the predictions made by random network theory in a similar setting.
Random network theory—in which one seeks to draw conclusions about an ensemble of randomly chosen networks, rather than a specific instance of a network—is particularly relevant to neural networks because such networks are large, underspecified (most connections cannot be measured), and heterogeneous (connections are variable both within and between organisms). It is particularly tempting to apply the tools of random matrix theory to the connectivity graph, as the spectra of certain classes of random matrices display universal behavior as the network size \(N \rightarrow\infty\) [4]. The seminal work of Sompolinsky et al. [5] analyzes a family of single-population firing-rate networks in which connections are chosen from a mean-zero Gaussian distribution: in the limit of large network size (\(N \rightarrow\infty\)), they find that the network transitions from quiescence to chaos as a global coupling parameter passes a bifurcation value \(g^{\ast} = 1\). This value coincides with the point at which the spectrum of the random connectivity matrix exits the unit circle [6–8], thereby connecting linear stability theory with the full nonlinear dynamics.
Developing similar results for structured, multipopulation networks has proven more challenging. One natural constraint to introduce is Dale's Law: each neuron makes either excitatory or inhibitory connections onto its postsynaptic targets. For a neural network, this constraint is manifested in a synaptic weight matrix with single-signed columns. If weights are tuned so that incoming excitatory and inhibitory currents approximately cancel (i.e. \(\sum_{j} \mathbf {G}_{ij} \approx0\)), then such a network may be called balanced (we note that our use of the word "balanced" is distinct from the dynamic balance that arises in random networks when excitatory and inhibitory synaptic currents approximately cancel, as studied by [9, 10] and others). Rajan and Abbott [11] studied balanced rank-one perturbations of Gaussian matrices and found that, remarkably, the spectrum is unchanged. More recent papers have addressed the spectra of more general low-rank perturbations [12–14], general deterministic perturbations [15], and block-structured matrices [16].
However, the relationship between linear/spectral and nonlinear dynamics appears to be more complicated than in the unstructured case. Aljadeff et al. [16] indeed find that the spectral radius is a good predictor of qualitative dynamics and learning capacity in networks with block-structured variances. Others have studied the large-network limit, but with mean connectivity scaling like \(1/N\) (smaller than the standard deviation \(1/\sqrt{N}\)): therefore, as \(N \rightarrow\infty\), the columns cease to be single-signed [17–19]. In a recent paper studying a balanced network with mean connectivity \(1/\sqrt{N}\), the authors find a slow noise-induced synchronized oscillation that emerges when a special condition (perfect balance) is imposed on the connectivity matrix [20]. As a growing body of work continues to connect qualitative features of nonlinear dynamics and learning capacity [21–23], it is crucial to further develop our understanding of how complex nonlinear dynamics emerge in structured heterogeneous networks.
In this paper, we study a family of excitatory–inhibitory networks in which both the mean and the variability of connection strengths scale like \(1/\sqrt{N}\). In a small but crucial difference from other recent work [11, 20], we reduce self-coupling. We show that with this change, these networks exhibit a (heretofore unreported) family of periodic solutions. These solutions arise as a consequence of an underlying symmetry in the mean connectivity structure and can be predicted and analyzed using equivariant bifurcation theory. We show through concrete examples that these periodic orbits can persist in heterogeneous networks, even for large perturbations. Moreover, we demonstrate that low-dimensional (reduced-order) models can be generated to characterize the high-dimensional system and its underlying bifurcation structure; we use the reduced model to study these oscillations as a function of the system size N. Thus the work suggests both how biophysically relevant symmetries may play a crucial role in the observable dynamics, and how reduced-order models can be constructed to more easily study the underlying dynamics and bifurcations.
2 Mathematical Model
We will use the parameter f to identify the fraction of neurons that are excitatory, that is, \(f = n_{E}/N\). The parameter α characterizes the ratio of inhibitory-to-excitatory synaptic strengths: \(\mu_{I} = \alpha\mu_{E}\). We refer to the network as balanced (the mean connectivity into any cell is zero) if \(\alpha= \frac{f}{1-f}\); it is inhibition-dominated if \(\alpha> \frac{f}{1-f}\). Further, in all cases, \(f=0.8\), reflecting the approximately 80%/20% ratio observed in cortex; the corresponding value of α for a balanced network is \(\alpha= 4\). Finally, we choose \(\sigma_{E}\) and \(\sigma_{I}\) so that the variances of excitatory and inhibitory connections into each cell are equal, that is, \(\sigma_{E}^{2} f = \sigma_{I}^{2} (1-f)\).
The matrix H has constant columns, except for the diagonal, which reflects self-coupling from each cell onto itself. The parameters \(b_{E}\) and \(b_{I}\) give the ratios of self- to non-self-connection strengths for excitatory and inhibitory cells, respectively. We assume that the effect of self-coupling is to reduce connection strengths, that is, \(0 \le b_{E}, b_{I} \le1\).
We note that as in [5]—but in contrast to later work [11, 20]—self-interactions can differ from interactions with other neurons, that is, \(\mathbf {G}_{jj} \neq \mathbf {G}_{ij}\). This is a reasonable assumption if we conceptualize each firing-rate unit \(x_{j}\) as corresponding to an individual neuron; although neurons can have self-synapses (or autapses [24]), refractory dynamics would tend to prevent self-coupling from influencing the firing rate.
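To make the setup concrete, the following sketch constructs the mean matrix H as described above (a minimal NumPy construction with assumed illustrative parameter values; variable names mirror the text, and the construction itself is our reading of the column structure, not code from the paper):

```python
import numpy as np

N, f = 20, 0.8                       # network size, excitatory fraction
nE, nI = int(f * N), N - int(f * N)
mu_E, alpha = 0.7, f / (1 - f)       # balanced: alpha = f/(1-f) = 4
bE, bI = 0.0, 0.0                    # no self-coupling (Example 1.1)

# Mean matrix H: constant columns (+mu_E for E columns, -alpha*mu_E for I
# columns), with the diagonal scaled by the self-coupling factors bE, bI.
col = np.concatenate([np.full(nE, mu_E), np.full(nI, -alpha * mu_E)])
H = np.tile(col, (N, 1))
np.fill_diagonal(H, col * np.where(np.arange(N) < nE, bE, bI))

# With full self-coupling (bE = bI = 1) every row of H sums to zero exactly,
# because n_E * mu_E = n_I * alpha * mu_E in the balanced case:
H_full = np.tile(col, (N, 1))
assert np.allclose(H_full.sum(axis=1), 0.0)
```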
We begin by considering the “noiseless” system in (1), where \(\sqrt{N} \mathbf {G}= \mathbf {H}\). The solutions that arise in this system can be readily identified because of the underlying symmetries of the network. We find that these solutions actually do arise in numerical simulations: furthermore, they persist even when the symmetry is perturbed (\(\sqrt{N} \mathbf {G}= \mathbf {H}+ \epsilon \mathbf {A}\)).
2.1 Some Preliminary Analysis: Spectrum of H
To analyze stability and detect bifurcations, we will frequently make reference to the Jacobian of (1), (2); when \(\epsilon= 0\), we will find that this always takes on a column-structured form. We begin by summarizing some facts about the spectra of these matrices.
Lemma 1
\(\mathbf {K}_{N}\) has the following eigenvalues: \(\lambda_{0} = N-1\) and \(\lambda_{j} = -1\) with geometric and algebraic multiplicity \(N-1\).
Proof
The Jacobian of (1) has the following special structure: except for its diagonal, the entries in column j depend only on the jth coordinate (and are all equal). This leads to a simplification of the spectrum when the cells are divided into synchronized populations. To be precise, we can make the following statement.
Lemma 2
 (1)
\(-1 - a_{k} + b_{k}\) with multiplicity \(n_{I_{k}} - 1\) for \(k=0,\dots,K\);
 (2)
the \(K+1\) remaining eigenvalues coincide with the eigenvalues of the matrix \(\tilde{\mathbf {J}}\):
$$ \tilde{\mathbf {J}}_{ij} = \left \{ \textstyle\begin{array}{l@{\quad}l} n_{I_{j}} a_{j}, & j \neq i,\\ -1+(n_{I_{j}}-1) a_{j} + b_{j}, & j = i, \end{array}\displaystyle \right . $$
(8)
where \(n_{I_{j}}\) is the number of cells in population j. We note that the size of \(\tilde{\mathbf {J}}\) is set by the number of subpopulations, that is, \(\tilde{\mathbf {J}} \in\mathbb{R}^{(K+1) \times(K+1)}\).
Proof
 1.
For \(k=0,\dots,K\): there are \(n_{I_{k}}-1\) linearly independent eigenvectors given by vectors that (a) have support only on \(I_{k}\) and (b) sum to zero, that is, \(\mathbf {v}^{k}_{j} = 0\) if \(j \notin I_{k}\), and \(\mathbf {v}^{k} \perp \boldsymbol {1}_{N}\).
 2.
The remaining eigenvectors are given by vectors that are constant and nonzero on each index set: \(\mathbf {v}_{j} = c_{k}\) if \(j \in I_{k}\), and \([ c_{0}\ c_{1} \ \cdots\ c_{K} ]\) is an eigenvector of \(\tilde{\mathbf {J}}\). □
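Lemma 2 (of which Lemma 1 is a special case) can be spot-checked numerically. The following sketch builds a generic column-structured matrix with three clusters, where column j in cluster k has off-diagonal value \(a_k\) and diagonal value \(-1+b_k\) (the values of \(a_k\), \(b_k\) are arbitrary test choices), and compares its spectrum with the lemma's prediction:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 3, 2]                # cluster sizes, K+1 = 3 clusters
a = rng.normal(size=3)           # off-diagonal column values a_k
b = rng.normal(size=3)           # diagonal offsets b_k

N = sum(sizes)
labels = np.repeat(np.arange(3), sizes)
J = a[labels][None, :] * np.ones((N, N))   # column j filled with a_{k(j)}
np.fill_diagonal(J, -1.0 + b[labels])

# Predicted spectrum: -1 - a_k + b_k with multiplicity n_k - 1, plus eig(J~).
Jt = np.diag(-1.0 + (np.array(sizes) - 1) * a + b)
for i in range(3):
    for j in range(3):
        if i != j:
            Jt[i, j] = sizes[j] * a[j]
predicted = np.concatenate(
    [np.repeat(-1.0 - a[k] + b[k], sizes[k] - 1) for k in range(3)]
    + [np.linalg.eigvals(Jt)]
)
actual = np.linalg.eigvals(J)
# every predicted eigenvalue appears in the actual spectrum
assert max(np.min(np.abs(actual - lam)) for lam in predicted) < 1e-7
```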
We now consider specific examples that are of particular importance.
Example 1
 (1)
\(\lambda_{E} = -1 - \frac{g \mu_{E}}{\sqrt{N}}(1-b_{E})\) with multiplicity \(n_{E} - 1\);
 (2)
\(\lambda_{I} = -1 + \frac{g \alpha\mu_{E}}{\sqrt{N}}(1-b_{I})\) with multiplicity \(n_{I} - 1\);
 (3)
two remaining eigenvalues given by the \(2 \times2\) matrix \(\tilde{\mathbf {J}}\):
$$ \tilde{\mathbf {J}} = -\mathbf {I}+ \frac{g\mu_{E}}{\sqrt{N}} \left [ \begin{matrix} n_{E} - (1-b_{E}) & -n_{E}\\ n_{E} & -n_{E} + \alpha(1-b_{I}) \end{matrix} \right ]. $$
(9)
This will be a complex pair as long as \(n_{E} > ( \alpha(1-b_{I}) + 1-b_{E} )/4\), so \(\lambda_{1,2} = \lambda\pm i \omega\), where
$$ \begin{aligned} \lambda& = -1 + \frac{g \mu_{E}}{\sqrt{N}} \frac{\alpha(1-b_{I}) - 1+b_{E}}{2}, \\ \omega& = \frac{g \mu_{E}}{\sqrt{N}} \sqrt{\alpha(1-b_{I}) + 1-b_{E}} \sqrt{n_{E} - \frac{\alpha(1-b_{I}) + 1-b_{E}}{4}}. \end{aligned} $$
We note that \(\lambda_{E} < \lambda\equiv\operatorname{Re}(\lambda_{1,2}) < \lambda_{I}\). The eigenvalue associated with the excitatory population satisfies \(\lambda_{E} < 0\) for any value of g.
 (1)
\(\mathbf {v}_{E} = \operatorname{span} \{ [ \mathbf {v}_{n_{E}} \underbrace{ 0 \ \cdots\ 0 }_{n_{I}} ] \}\), \(\mathbf {v}_{n_{E}} \perp \boldsymbol {1}_{n_{E}}\);
 (2)
\(\mathbf {v}_{I} = \operatorname{span} \{ [ \underbrace{ 0 \ \cdots\ 0 }_{n_{E}}\ \mathbf {v}_{n_{I}} ] \}\), \(\mathbf {v}_{n_{I}} \perp \boldsymbol {1}_{n_{I}}\);
 (3)
\(\mathbf {v}_{\tilde{J}} = \operatorname{span} \{ [ \underbrace{ c_{E} \ \cdots\ c_{E} }_{n_{E}}\ \underbrace{ c_{I} \ \cdots\ c_{I} }_{n_{I}} ] \}\).
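The closed-form eigenvalues of Example 1 can likewise be verified numerically against the Jacobian at the origin, \(\mathbf{J} = -\mathbf{I} + (g/\sqrt{N})\mathbf{H}\) (a sketch with illustrative parameter values; the self-coupling factors below are arbitrary choices):

```python
import numpy as np

N, f, mu_E, g = 20, 0.8, 0.7, 2.0
nE = int(f * N); nI = N - nE
alpha = f / (1 - f)                  # balanced network
bE, bI = 0.3, 0.5                    # arbitrary self-coupling factors

col = np.concatenate([np.full(nE, mu_E), np.full(nI, -alpha * mu_E)])
H = np.tile(col, (N, 1))
np.fill_diagonal(H, col * np.where(np.arange(N) < nE, bE, bI))
J = -np.eye(N) + (g / np.sqrt(N)) * H
eigs = np.linalg.eigvals(J)

# Closed-form predictions from Example 1
s = g * mu_E / np.sqrt(N)
lam_E = -1 - s * (1 - bE)
lam_I = -1 + s * alpha * (1 - bI)
q = alpha * (1 - bI) + 1 - bE        # n_E > q/4 here, so a complex pair
lam = -1 + s * (alpha * (1 - bI) - 1 + bE) / 2
omega = s * np.sqrt(q) * np.sqrt(nE - q / 4)

for target in [lam_E, lam_I, lam + 1j * omega, lam - 1j * omega]:
    assert np.min(np.abs(eigs - target)) < 1e-6
```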
We pause to consider two special cases of Example 1. The first is no self-coupling (\(b_{E} = b_{I} = 0\)), which we will examine in detail in the rest of this paper. The second is full self-coupling (\(b_{E} = b_{I} = 1\)), which has been studied previously by many authors [11, 19, 20].
Example 1.1
 (1)
\(\lambda_{E} = -1 - \frac{g \mu_{E}}{\sqrt{N}}\) with multiplicity \(n_{E} - 1\);
 (2)
\(\lambda_{I} = -1 + \frac{g \alpha\mu_{E}}{\sqrt{N}}\) with multiplicity \(n_{I} - 1\);
 (3)
two remaining eigenvalues given by the \(2 \times2\) matrix \(\tilde{\mathbf {J}}\):
$$ \tilde{\mathbf {J}} = -\mathbf {I}+ \frac{g\mu_{E}}{\sqrt{N}} \left [ \begin{matrix} n_{E} - 1 & -n_{E}\\ n_{E} & -n_{E} + \alpha \end{matrix} \right ], $$
(10)
which will be a complex pair as long as \(n_{E} > (\alpha+ 1)/4\), so \(\lambda_{1,2} = \lambda\pm i \omega\), where
$$ \begin{aligned} \lambda& = -1 + \frac{g \mu_{E}}{\sqrt{N}} \frac{\alpha- 1}{2}, \\ \omega& = \frac{g \mu_{E}}{\sqrt{N}} \sqrt{\alpha+ 1} \sqrt{n_{E} - \frac{1+ \alpha}{4}}. \end{aligned} $$
We note that \(\lambda_{E} < \lambda\equiv\operatorname{Re}(\lambda_{1,2}) < \lambda_{I}\). The eigenvalue associated with the excitatory population satisfies \(\lambda_{E} < 0\) for any value of g. In the (un-cortex-like) situation where the excitatory population is smaller than the inhibitory population (\(\alpha< 1\)), the complex pair would also be stable (\(\lambda< 0\)) for all g.
Example 1.2
 (1)
Since \(b_{E} = 1\), \(\lambda_{E} = -1\) with multiplicity \(n_{E} - 1\);
 (2)
Since \(b_{I} = 1\), \(\lambda_{I} = -1\) with multiplicity \(n_{I} - 1\);
 (3)
two remaining eigenvalues given by the \(2 \times2\) matrix \(\tilde{\mathbf {J}}\):
$$ \tilde{\mathbf {J}} = -\mathbf {I}+ \frac{g\mu_{E}}{\sqrt{N}} \begin{bmatrix} n_{E} & -n_{E}\\ n_{E} & -n_{E} \end{bmatrix} , $$
(11)
which also has the (repeated) eigenvalue −1.
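A quick numerical sanity check of Example 1.2 (assuming the balanced column structure of Sect. 2): with \(b_E = b_I = 1\), H has exactly constant columns, hence is rank one with zero trace and therefore nilpotent, so every eigenvalue of the Jacobian at the origin equals −1:

```python
import numpy as np

N, f, mu_E, g = 20, 0.8, 0.7, 3.0
nE = int(f * N); nI = N - nE
alpha = f / (1 - f)                  # balanced, so trace(H) = 0
col = np.concatenate([np.full(nE, mu_E), np.full(nI, -alpha * mu_E)])
H = np.tile(col, (N, 1))             # bE = bI = 1: diagonal not modified
J = -np.eye(N) + (g / np.sqrt(N)) * H
assert np.allclose(np.linalg.eigvals(J), -1.0)
```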
Example 2
 (1)
\(\lambda_{E} = -1 - \frac{g \mu_{E}}{\sqrt{N}} \operatorname {sech}^{2}(g x_{E}) (1-b_{E})\) with multiplicity \(n_{E} - 1\);
 (2)
\(\lambda_{I_{1}} = -1 + \frac{g \alpha\mu_{E}}{\sqrt{N}} \operatorname {sech}^{2}(g x_{I_{1}})(1-b_{I})\) with multiplicity \(n_{I_{1}} - 1\);
 (3)
\(\lambda_{I_{2}} = -1 + \frac{g \alpha\mu_{E}}{\sqrt{N}} \operatorname {sech}^{2}(g x_{I_{2}})(1-b_{I})\) with multiplicity \(n_{I_{2}} - 1\);
 (4)
three remaining eigenvalues given by the \(3 \times3\) matrix \(\tilde{\mathbf {J}}\) described earlier.
 (1)
\(\mathbf {v}_{E} = \operatorname{span} \{ [ \mathbf {v}_{n_{E}} \underbrace{ 0 \ \cdots\ 0 }_{n_{I_{1}}}\ \underbrace{ 0 \ \cdots\ 0 }_{n_{I_{2}}} ] \}\), \(\mathbf {v}_{n_{E}} \perp \boldsymbol {1}_{n_{E}}\);
 (2)
\(\mathbf {v}_{I_{1}} = \operatorname{span} \{ [ \underbrace{ 0 \ \cdots\ 0 }_{n_{E}}\ \mathbf {v}_{n_{I_{1}}} \underbrace{ 0 \ \cdots\ 0 }_{n_{I_{2}}} ] \}\), \(\mathbf {v}_{n_{I_{1}}} \perp \boldsymbol {1}_{n_{I_{1}}}\);
 (3)
\(\mathbf {v}_{I_{2}} = \operatorname{span} \{ [ \underbrace{ 0 \ \cdots\ 0 }_{n_{E}}\ \underbrace{ 0 \ \cdots\ 0 }_{n_{I_{1}}} \mathbf {v}_{n_{I_{2}}} ] \}\), \(\mathbf {v}_{n_{I_{2}}} \perp \boldsymbol {1}_{n_{I_{2}}}\);
 (4)
\(\mathbf {v}_{\tilde{J}} = \operatorname{span} \{ [ \underbrace{ c_{E} \ \cdots\ c_{E} }_{n_{E}}\ \underbrace{ c_{I_{1}} \ \cdots\ c_{I_{1}} }_{n_{I_{1}}}\ \underbrace{ c_{I_{2}} \ \cdots\ c_{I_{2}} }_{n_{I_{2}}} ] \}\).
3 Solution Families Found in Deterministic Network (\(\epsilon= 0\))
Before stating this result, we introduce some terminology. Let Γ be a finite group acting on \(\mathbb{R}^{N}\). Then we say that a mapping \(F: \mathbb{R}^{N} \rightarrow\mathbb{R}^{N}\) is Γ-equivariant if \(F(\gamma \mathbf {x}) = \gamma F(\mathbf {x})\) for all \(\mathbf {x}\in\mathbb{R}^{N}\) and \(\gamma\in\varGamma\). A one-parameter family of mappings \(F: \mathbb{R}^{N} \times\mathbb{R} \rightarrow \mathbb{R}^{N}\) is Γ-equivariant if it is Γ-equivariant for each value of its second argument.
We say that a subspace V of \(\mathbb{R}^{N}\) is Γ-invariant if \(\gamma \mathbf {v}\in V\) for any \(\mathbf {v}\in V\) and \(\gamma\in \varGamma\). We furthermore say that the action of Γ on V is irreducible if V has no proper invariant subspaces, that is, the only Γ-invariant subspaces of V are \(\{0\}\) and V itself.
For a group Γ and a vector space V, we define the fixed-point subspace for Γ, denoted \(\operatorname {Fix}(\varGamma)\), to be the set of all points in V that are unchanged under every member of Γ, that is, \(\operatorname{Fix} (\varGamma) = \{ \mathbf {x}\in V : \gamma \mathbf {x}= \mathbf {x}, \forall\gamma\in\varGamma\}\). The isotropy subgroup of \(\mathbf {x}\in V\), denoted \(\varSigma_{x}\), is the set of all members of Γ under which x is fixed, that is, \(\varSigma_{x} = \{ \gamma\in\varGamma: \gamma \mathbf {x}= \mathbf {x}\}\) (we then say that a subgroup Σ is an isotropy subgroup of Γ if it is the isotropy subgroup \(\varSigma_{x}\) for some \(\mathbf {x}\in V\)).
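These definitions can be illustrated with a toy computation (ours, not from the paper): for \(\varGamma = S_{2} \times S_{2}\) acting on \(\mathbb{R}^{4}\) by permuting the first two and last two coordinates, \(\operatorname{Fix}(\varGamma) = \{(a,a,b,b)\}\), and averaging the permutation matrices over the group gives the projection onto this subspace:

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    # matrix M with (M x)_i = x_{p[i]}
    M = np.zeros((4, 4))
    for i, j in enumerate(p):
        M[i, j] = 1.0
    return M

# Gamma = S_2 x S_2: permute coordinates {0,1} and {2,3} independently
group = [perm_matrix(pE + tuple(2 + k for k in pI))
         for pE in permutations(range(2)) for pI in permutations(range(2))]
P = sum(group) / len(group)            # projection onto Fix(Gamma)
assert np.linalg.matrix_rank(P) == 2   # dim Fix(Gamma) = 2
x = P @ np.array([1.0, 3.0, 2.0, 7.0])
assert np.allclose(x, [2.0, 2.0, 4.5, 4.5])   # averaged within each block
```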
What complicates this situation for Γ-equivariant mappings—that is, \(F(\mathbf {x},g)\) is Γ-equivariant for every value of the parameter g—is that, because of symmetries, multiple eigenvalues pass through zero at once; however, the structural changes that occur are qualitatively the same as those in a nonsymmetric system with a single zero eigenvalue. The difference is that we now have multiple such solution branches, each corresponding to a subgroup of the original symmetries. The following result formalizes this fact.
Theorem 1
Equivariant branching lemma: paraphrased from [26], p. 82, see also pp. 67–69
A similar statement holds for Hopf bifurcations, which we state here because we will appeal to its conclusions regarding the symmetry of periodic solutions.
Theorem 2
Equivariant Hopf theorem: paraphrased from [26], p. 275
Here, the family of mappings is the right-hand side of equation (1) with \({\epsilon= 0}\), that is, \(F(\mathbf {x}, g) = -\mathbf {x}+ \mathbf {H}\tanh(g\mathbf {x}) /\sqrt{N}\). Let \(\varGamma= S_{n_{E}} \times S_{n_{I}}\), where \(S_{n}\) is the symmetric group on n symbols, that is, we are allowed to permute the labels on the excitatory cells and/or to permute the labels on the inhibitory cells. Each permutation on N objects can be represented as an element in \(GL(N)\), the group of invertible \(N \times N\) matrices; Γ is a finite subgroup of such matrices. It is straightforward to check that F is Γ-equivariant.^{1}
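This equivariance can be spot-checked numerically: with no self-coupling, permuting the E labels and the I labels commutes with F, since H is constant on E columns and on I columns (a sketch with illustrative parameters; the permutation is random):

```python
import numpy as np

rng = np.random.default_rng(1)
N, f, mu_E, g = 20, 0.8, 0.7, 2.0
nE = int(f * N); nI = N - nE
alpha = f / (1 - f)
col = np.concatenate([np.full(nE, mu_E), np.full(nI, -alpha * mu_E)])
H = np.tile(col, (N, 1))
np.fill_diagonal(H, 0.0)                 # bE = bI = 0

F = lambda x: -x + H @ np.tanh(g * x) / np.sqrt(N)

# gamma in S_{n_E} x S_{n_I}: permute E labels and I labels separately
perm = np.concatenate([rng.permutation(nE), nE + rng.permutation(nI)])
P = np.eye(N)[perm]                      # gamma as a permutation matrix
x = rng.normal(size=N)
assert np.allclose(F(P @ x), P @ F(x))   # F(gamma x) = gamma F(x)
```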
We note that although Theorem 2 states conditions under which a Hopf bifurcation occurs and the symmetry group of the resulting orbit, it does not indicate whether the bifurcation is subcritical or supercritical. This must be determined by other means; we checked this numerically by computing the first Lyapunov coefficient.

A branch of fixed-point solutions when \(g^{\ast} = \sqrt{N}/\alpha\mu_{E}\), where the eigenvalues corresponding to the inhibitory population cross zero: here the I cells break into two groups of sizes \(n_{I_{1}}\) and \(n_{I_{2}}\). Along this fixed-point branch, the two groups remain clustered; the excitatory cells also remain clustered, that is, the solution branch can be characterized by \((x_{E},x_{I_{1}},x_{I_{2}})\). We refer to this as the "\(I_{1}/I_{2}\) branch."

A branch of limit cycles emerging from a Hopf bifurcation when \(g = 2\sqrt{N}/\mu_{E} (\alpha-1)\): here a complex pair crosses the imaginary axis.

A branch of limit cycles from a Hopf bifurcation (at \(g^{H}\)) in which the three-cluster pattern is maintained, that is, activity can be characterized by \((x_{E}, x_{I_{1}}, x_{I_{2}})\).

If \(n_{I_{1}} = n_{I_{2}}\), then the excitatory activity along this branch is zero: there may be a further branch point at which \(x_{E}\) moves away from the origin, whereas the I cells remain in their distinct clusters.

(Possibly) other fixed-point branches, in which one inhibitory cluster (\(x_{I_{1}}\)) breaks into further clusters.
3.1 Branch of Fixed Points (from Trivial Solution)
Thus, the equivariant branching lemma tells us that we can expect a new branch of fixed points in which the inhibitory cells break up into two groups (therefore we refer to this as the “\(I_{1}/I_{2}\) branch”).
3.2 Hopf Bifurcation (on Trivial Solution) Leading to Limit Cycles
3.3 Hopf Bifurcation (on \(I_{1}/I_{2}\) Branch) Leading to Limit Cycles
On the branch \((x_{E}, x_{I_{1}}, x_{I_{2}})\), we find two singularities that lead to new structures. Most significantly, we find a Hopf bifurcation that leads to a branch of limit cycles when a pair of complex eigenvalues crosses the imaginary axis. By Example 2 the corresponding eigenspace is fixed under \(\varSigma= S_{n_{E}} \times S_{n_{I_{1}}} \times S_{n_{I_{2}}}\). Thus, it is a two-dimensional subspace of \(\operatorname {Fix}(\varSigma)\); therefore, by the equivariant Hopf theorem, the family of periodic solutions that emerges here also has Σ as its group of symmetries.^{2} Furthermore, in all examples that we encountered, the Hopf bifurcation was supercritical (this was checked numerically); if in addition all other eigenvalues satisfied \(\operatorname{Re}(\lambda) < 0\), then the resulting periodic orbits would be stable.
In general, it is not feasible to solve for \(g^{H}\) symbolically: this requires us to solve for the roots of a cubic polynomial involving exponential functions (e.g. \(\tanh(g x_{E})\)) of implicitly defined parameters \(x_{E}\), \(x_{I_{1}}\), and \(x_{I_{2}}\). However, we can identify the bifurcation numerically (all continuations were performed with MATCONT [28]), and we have found this bifurcation on every specific \(I_{1}/I_{2}\) branch in every specific system we have investigated.
We can also track the branch of Hopf points numerically in the reduced system \((x_{E}, x_{I_{1}}, x_{I_{2}})\) (described in Sect. 4.2), which has the added benefit that the complexity of the system does not increase with N (rather, N is a bifurcation parameter). Here again, we can confirm that the Hopf bifurcation is present in the system for any N and have done so for several example \(n_{I_{1}}/n_{I_{2}}\) ratios in Sect. 4.3.
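For completeness, a cluster-reduced model can be reconstructed directly from the column structure of H (this is our own reconstruction for \(b_E = b_I = 0\), not the paper's Eqs. (23)–(25) verbatim): on the invariant subspace where the E, \(I_1\), and \(I_2\) populations are each synchronized, the N-dimensional system collapses to three equations, and trajectories of the two systems agree:

```python
import numpy as np

N, f, mu_E, g = 20, 0.8, 0.7, 3.0
nE = int(f * N); nI = N - nE
n1, n2 = 3, 1                            # inhibitory split n_{I_1}/n_{I_2}
alpha = f / (1 - f)
col = np.concatenate([np.full(nE, mu_E), np.full(nI, -alpha * mu_E)])
H = np.tile(col, (N, 1)); np.fill_diagonal(H, 0.0)
F_full = lambda x: -x + H @ np.tanh(g * x) / np.sqrt(N)

def F_red(y):                            # y = (x_E, x_I1, x_I2)
    tE, t1, t2 = np.tanh(g * y)
    c = mu_E / np.sqrt(N)
    return np.array([
        -y[0] + c * ((nE - 1) * tE - alpha * n1 * t1 - alpha * n2 * t2),
        -y[1] + c * (nE * tE - alpha * (n1 - 1) * t1 - alpha * n2 * t2),
        -y[2] + c * (nE * tE - alpha * n1 * t1 - alpha * (n2 - 1) * t2),
    ])

def rk4(F, x, dt, steps):
    for _ in range(steps):
        k1 = F(x); k2 = F(x + dt/2*k1); k3 = F(x + dt/2*k2); k4 = F(x + dt*k3)
        x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return x

y0 = np.array([0.1, 0.5, -0.4])
x0 = np.concatenate([np.full(nE, y0[0]), np.full(n1, y0[1]), np.full(n2, y0[2])])
xT, yT = rk4(F_full, x0, 0.01, 2000), rk4(F_red, y0, 0.01, 2000)
assert np.allclose(xT[[0, nE, nE + n1]], yT, atol=1e-8)
```

Because the reduced equations depend on N only through the coefficients, N can indeed be treated as a continuously varying parameter, as exploited in Sect. 4.3.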
3.4 Branch Points (on \(I_{1}/I_{2}\) Branch) Leading to New Fixed Point Branch
We may also find branch points on the \((x_{E}, x_{I_{1}}, x_{I_{2}})\) curve in which one of the inhibitory clusters breaks into a further cluster. This occurs if the eigenspace corresponding to \(x_{I_{1}}\), say, has a real eigenvalue going through zero. Because these did not appear to play a significant role in our simulations, we do not consider them further.
3.5 Reduced SelfCoupling
For the remainder of the paper, we focus on absent self-coupling (\(b_{E} = b_{I} = 0\)); here we note how our conclusions should be modified in the more general case. At the origin, the locations—but not the qualitative behavior—of the bifurcations change. In Example 1.1, a branch point occurs at \(g^{\ast} = \frac{\sqrt{N}}{\alpha\mu_{E}}\); in Example 1, the location is now \(g^{\ast, b} = \frac{\sqrt{N}}{\alpha\mu_{E} (1-b_{I})}\); since \(b_{I} \le1\), we always have \(g^{\ast,b} \ge g^{\ast}\).
The relative ordering of \(g^{H}\) and \(g^{H,b}\) depends on the relative values of \(b_{E}\) and \(b_{I}\); if \(b_{E} - \alpha b_{I} \le0\), then \(g^{H,b} \ge g^{H}\); otherwise, \(g^{H,b} < g^{H}\). However, we can check that the branch point almost always occurs at a smaller coupling value than the Hopf point, that is, \(g^{\ast,b} \le g^{H,b}\), with equality if and only if \(b_{E} = 1\).
3.6 InhibitionDominated Networks
In this paper, we have focused on balanced networks (\(\alpha= n_{E}/n_{I}\)). We briefly summarize how our conclusions change in inhibition-dominated networks (\(\alpha> \tilde{\alpha} \equiv n_{E}/n_{I}\)). At the origin, the location of \(g^{\ast}\) is still given by \(\frac{\sqrt{N}}{\alpha\mu_{E} (1-b_{I})}\), although now, since \(\alpha> \tilde{\alpha}\), the critical coupling value decreases, that is, \(g^{\ast,b,in} < g^{\ast,b}\).
4 A BifurcationPreserving ReducedOrder Model
In this section, we show that we can construct a reduced-order model that preserves the dynamics and bifurcation structure of the full system, but with a dramatic reduction in the number of degrees of freedom. For a cortex-like ratio of E to I cells, the interesting bifurcations involve the eigenvalues associated with the inhibitory cells or the complex pair. As a result, all the "action" is in the I cells, with the E cells always perfectly synchronized. We can formalize this as follows.
Lemma 3
Any fixed point or periodic solution of (1)–(3) with \(\epsilon= 0\) has a synchronized excitatory population, that is, \(x_{j}(t) = x_{k}(t)\) for any \(j,k \leq n_{E}\).
Proof
The 2-2 branch has a further branch point, at which the new branch breaks the odd symmetry (i.e. \(x_{I_{2}}=-x_{I_{1}}\)) and the E cell activity moves away from zero. One further branch occurs, in which one of the two-cell clusters breaks apart, resulting in a 2-1-1 clustering. Why did the 2-1-1 branch come off of the 2-2 branch, rather than off of the 3-1 branch? At this time, we have no principled answer. Finally, the origin has a Hopf bifurcation in which the E and I cells separately cluster (we will refer to this as the "E/I" limit cycle).
We next perform the same continuation on the corresponding reduced (five-dimensional: \(x_{E}\) and \(x_{I_{1}},\dots,x_{I_{4}}\)) system. The equilibrium branch structure is shown in Fig. 1C. Up to a permutation of the inhibitory coordinate labels (we do not force the same cell cluster identities to be tracked in both continuations), the curves are identical.
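The synchronization of the excitatory population asserted by Lemma 3 is also easy to observe in simulation: with \(b_E = 0\), the difference \(\delta = x_i - x_j\) between two E cells obeys \(\dot{\delta} = -\delta - (\mu_E/\sqrt{N})(\tanh(g x_i) - \tanh(g x_j))\), both terms of which push δ toward zero, so the E cells synchronize from generic initial conditions. A minimal RK4 sketch (step size, horizon, and parameter values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
N, f, mu_E, g = 20, 0.8, 0.7, 3.0
nE = int(f * N); nI = N - nE
alpha = f / (1 - f)
col = np.concatenate([np.full(nE, mu_E), np.full(nI, -alpha * mu_E)])
H = np.tile(col, (N, 1)); np.fill_diagonal(H, 0.0)

F = lambda x: -x + H @ np.tanh(g * x) / np.sqrt(N)
x, dt = rng.normal(size=N), 0.01
for _ in range(5000):                 # integrate to t = 50
    k1 = F(x); k2 = F(x + dt/2*k1); k3 = F(x + dt/2*k2); k4 = F(x + dt*k3)
    x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)

assert np.ptp(x[:nE]) < 1e-8          # E cells synchronized
```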
4.1 A Larger System: \(n_{I} = 10\)
4.2 Reduced System: 3Cluster \((x_{E}, x_{I_{1}}, x_{I_{2}})\)
4.3 Scaling with System Size
We can use this reduced system to explore how the system behaves as N increases. The system in Eqs. (23)–(25) allows N to be a continuously varying parameter; therefore, we can vary N while holding all other parameters fixed. Notably, we keep β fixed; thus, we track the behavior of a specific partition ratio of inhibitory cells (such as 1to1 or 3to1) as N increases. When N, \(\frac{N}{\alpha+1}\), and \(\frac{N}{(\beta+1)(\alpha +1)}\) are all positive integers, the reduced system lifts onto an Ncell network; at each such N, we can track the \(I_{1}/I_{2}\) fixed point branch from the known bifurcation point \(g^{\ast} = \sqrt {N}/\alpha\mu_{E}\).
The \(\sqrt{N}\) scaling of \(g^{\ast}\), \(g^{H}\), and \(\omega(g^{H})\) yields insight into the expected behavior of these solutions. First, we should expect these oscillations to become less observable as N increases; \(g^{\ast}\) will eventually reach unphysical values. Second, we should expect the oscillations to become faster as N increases, also eventually reaching an unphysical frequency. Thus, we expect the phenomenon we describe here to be most relevant for smalltomedium N. In the next section, we show that we can easily find an example for \(N=200\); the oscillation period in that example is comparable to the membrane time constant, which is a reasonable upper bound for frequency.
5 Demonstration of Relevance to Random Networks (\(\epsilon> 0\))
We next demonstrate that the bifurcation structure we have described can explain lowdimensional dynamics in example random networks. We return to Eqs. (1) and (2) but now let \(\epsilon> 0\). The righthand side of Eq. (1) can be readily shown to be locally Lipschitz continuous in \(\mathbb{R}^{N}\); thus, solutions will vary continuously as a function of parameters (such as ϵ). In particular, we can expect a hyperbolic periodic orbit at \(\epsilon= 0\) to persist for some range of \(\epsilon \in[0, \epsilon_{0})\); here we numerically demonstrate this persistence.
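Continuous dependence on ϵ can be illustrated with a short simulation: trajectories from the same initial condition at \(\epsilon = 0\) and at small \(\epsilon > 0\) remain close over a fixed time horizon (a sketch with assumed illustrative parameters; A is drawn here as a generic zero-mean Gaussian disorder matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
N, f, mu_E, g, T, dt = 20, 0.8, 0.7, 2.0, 5.0, 0.01
nE = int(f * N); nI = N - nE
alpha = f / (1 - f)
col = np.concatenate([np.full(nE, mu_E), np.full(nI, -alpha * mu_E)])
H = np.tile(col, (N, 1)); np.fill_diagonal(H, 0.0)
A = rng.normal(size=(N, N))           # generic disorder matrix
x0 = rng.normal(size=N)

def trajectory(eps):
    G = (H + eps * A) / np.sqrt(N)
    F = lambda x: -x + G @ np.tanh(g * x)
    x = x0.copy()
    for _ in range(int(T / dt)):      # simple RK4 steps
        k1 = F(x); k2 = F(x + dt/2*k1); k3 = F(x + dt/2*k2); k4 = F(x + dt*k3)
        x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return x

base = trajectory(0.0)
d_small = np.linalg.norm(trajectory(1e-4) - base)
d_large = np.linalg.norm(trajectory(0.5) - base)
assert d_small < 0.1 and d_small < d_large   # deviation shrinks with eps
```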
We chose parameters \(\mu_{E} = 0.7\), \(\sigma_{E}^{2} = 0.625\), and \(\sigma _{I}^{2} = 2.5\). (For \(\epsilon= 1\), the off-diagonal entries of the resulting random matrices are chosen with the same means and variances as in [11].) We performed a series of simulations in which we fixed A and computed solution trajectories for a range of ϵ between 1 and 0. As ϵ decreases, the network connectivity matrix transitions from full heterogeneity (similar to [11]) to the deterministic case studied earlier.
In Fig. 8C, we show an example from a larger system with \(N=200\). Here \(g=6\); note that a larger coupling value is needed to exceed the bifurcation of the origin at \(g^{\ast} = \sqrt {N}/\alpha\mu_{E}\). A periodic trajectory is evident in all panels. As in the smaller examples, the period of oscillations increases with ϵ.
To highlight that this structure is caused by the mean connectivity H, we repeat the sequence of simulations, but integrating the system without the mean matrix H. The results are shown in Fig. 8D: here the same A, initial condition \(\mathbf {x}_{0}\), values of ϵ, and coupling parameter g were used; therefore the only difference between each panel in Fig. 8D vs. its counterpart in Fig. 8C is the absence of the mean connectivity matrix H. Without H, the origin is stable for ϵ sufficiently small (for \(g=6\), \({\epsilon^{2} < 1/36}\)); hence the zero solutions in the bottom two panels. As \(\epsilon^{2}\) increases beyond that value, we see a fixed point, followed by periodic and then apparently chaotic solutions (for \(\epsilon^{2} > 2^{2}\), a decomposition of the trajectories in terms of principal components requires a large number of orthogonal modes, here in excess of 25). In addition, the characteristic timescale is much longer than in Fig. 8C (note the difference in the time axes).
Finally, we can contrast the nonlinear behavior with the predicted linear behavior by examining the spectra of the connectivity weight matrices. In Figs. 8E and 8F, we plot the eigenvalues of \((\mathbf {H}+ \epsilon \mathbf {A})/\sqrt{N}\) and \(\epsilon \mathbf {A}/\sqrt{N}\), respectively, for the specific networks shown in Figs. 8C–D and for several values of ϵ. When \(\epsilon= 0\), the eigenvalues in Fig. 8E cluster into two locations on the real axis, with the exception of one complex pair, as discussed in Example 1. In contrast, the eigenvalues in Fig. 8F all lie at zero for \(\epsilon= 0\). As ϵ increases, the eigenvalues "fan out" from their point locations until they fill a disc of radius g (here, \(g=6\)). At \(\epsilon= 1\), both matrices have dozens of eigenvalues in the right-half plane.
The similarity in appearance between the spectra illustrated in Figs. 8E and 8F is partially the result of network balance (i.e. that \(\alpha= \frac{f}{1-f}\)). The stochastic part of the connectivity matrix G is scaled so that its spectral radius is constant with N; however, as we noted after Example 1.2, the real parts of the eigenvalues of the deterministic matrix \(\mathbf {H}/\sqrt{N}\) scale like \(O(1/\sqrt{N})\). Therefore, we expect the stochastic part of the connectivity matrix to dominate the deterministic part as \(N \rightarrow\infty\) when the network is balanced.
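The scaling of the stochastic part can be checked numerically: with the variance convention of Sect. 2 (so that \(f\sigma_E^2 + (1-f)\sigma_I^2 = 1\) for the parameter values used here), the spectral radius of \(\mathbf{A}/\sqrt{N}\) stays near 1 as N grows (a sketch; the column-wise variance construction is our assumption about how A is drawn):

```python
import numpy as np

rng = np.random.default_rng(4)
N, f = 400, 0.8
nE = int(f * N); nI = N - nE
sigma_E2, sigma_I2 = 0.625, 2.5           # parameter values from the text
assert np.isclose(f * sigma_E2 + (1 - f) * sigma_I2, 1.0)

# Zero-mean Gaussian entries with column-dependent variances
std = np.concatenate([np.full(nE, np.sqrt(sigma_E2)),
                      np.full(nI, np.sqrt(sigma_I2))])
A = rng.normal(size=(N, N)) * std[None, :]
radius = np.max(np.abs(np.linalg.eigvals(A / np.sqrt(N))))
assert 0.8 < radius < 1.2                 # near 1, up to finite-N effects
```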
6 Discussion
In summary, we studied a family of balanced excitatory–inhibitory firing-rate networks that satisfy Dale’s law for arbitrary network size N. When there is no variance in synaptic connections—each excitatory connection has strength \(\frac{\mu_{E}}{\sqrt{N}}\), and each inhibitory connection has strength \(\frac{\mu_{I}}{\sqrt{N}}\)—we find a family of deterministic solutions whose existence can be inferred from the underlying symmetry structure of the network. These solutions persist in the dynamics of networks with quenched variability—that is, variance in the connection strengths—even when the variance is large enough that the envelope of the spectrum of the connectivity matrix approaches that of a Gaussian matrix. This is a striking example in which linear stability theory fails to predict transitions between dynamical regimes. Given the increasing interest in network science and, in particular, networked dynamical systems, such observations concerning the impact of connectivity symmetries can be extremely valuable for studying stability, bifurcations, and reduced-order models.
6.1 Role of the Deterministic Perturbation H
Gaussian matrices are a familiar object of study in the random matrix community, where Hermitian random matrices are motivated by questions from quantum physics. Rajan and Abbott [11] studied balanced rank-one perturbations of Gaussian matrices and found that the spectrum is largely unchanged. These results have since been extended to more general low-rank perturbations [12, 13]. More recently, Ahmadian et al. [15] studied general deterministic perturbations in the context of neural networks, and related work has studied extremal values of the spectrum of matrices with modular structure similar to that found here [14]. Our system is not low-rank: in fact, the (seemingly trivial) change in self-coupling makes the deterministic weight matrix full rank, as we see from Lemma 2. Using the procedure developed by Ahmadian et al. [15], we can numerically compute the support of the spectrum for \(\epsilon> 0\) (not shown): as ϵ grows, this spectral support appears to approach that predicted for a Gaussian matrix or a low-rank perturbation thereof.
However, the more fundamental issue is that—except for predicting when the origin becomes unstable—the spectrum of the full connectivity matrix is not particularly informative about the nonlinear dynamics. Instead, it is the spectrum of the deterministic perturbation that emerges as crucial: the locations of the eigenvalues of this matrix can be used to predict the existence of a family of steady states and limit cycles with very specific structure. In the examples presented here (Fig. 8), these low-dimensional solutions persist even when ϵ is large enough that the spectrum of G is visually indistinguishable from that of a Gaussian matrix.
It is instructive to compare our findings with the recent results of del Molino et al. [20], who consider a balanced excitatory–inhibitory system with a similar \(1/\sqrt{N}\) scaling of the mean weights. The authors find a slow noise-induced oscillation; as in our results, this oscillation arises despite an unstable connectivity matrix. The two systems differ in the deterministic perturbation: del Molino et al. include self-coupling (their deterministic matrix is rank one), which yields trivial deterministic dynamics without a driving current (in the sense of Example 1.2); thus, they do not see the dynamics described here. Conversely, we do not enforce the “perfect balance” condition \(\sum_{j} \mathbf {G}_{ij} = 0\), which they find to be necessary for the slow oscillation to exist; thus we do not see the oscillations described in that paper. In sum, del Molino et al. [20] and the current work present two distinct examples of dynamics arising in an excitatory–inhibitory system with \(1/\sqrt{N}\) scaling of the mean weights, in which linear stability of the connectivity matrix is not informative about the nonlinear dynamics.
6.2 Relationship to Other Work
The reduced system described in Sect. 4.2 is similar to a simple version of the Wilson–Cowan equations [31, 32] (recently reviewed in [33, 34]). These equations can be interpreted in terms of coupled neural populations and can be derived as a mean-field limit of large networks. A bifurcation analysis of such a mean-field model was performed recently by Hermann and Touboul [17]. Our system differs in two important ways. First, the strong coupling (\(1/\sqrt{N}\)) means that a factor of \(\sqrt{N}\) remains in the reduced equations. Hermann and Touboul, in contrast, pick \(J_{ij} \sim N (\frac{\bar{J}}{N}, \frac{\sigma}{\sqrt{N}} )\); therefore the mean connection strength (\(\frac{\bar{J}}{N}\)) goes to zero faster than the typical deviation from this mean (\(\frac{\sigma}{\sqrt{N}}\)): as N becomes large, outgoing synapses are no longer single-signed, in violation of Dale’s law. Similarly, Kadmon and Sompolinsky [19] analyze randomly diluted networks; they show the equivalence to all-to-all Gaussian networks with nonzero mean connections that scale like \(\frac{\bar{J}}{N}\). If the number of synaptic connections per population is held constant, then dynamic mean-field theory yields predictions for stability valid as \(N \rightarrow\infty\).
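The violation of Dale’s law under the weaker \(1/N\) mean scaling can be made concrete: if \(J_{ij} \sim N(\bar{J}/N, (\sigma/\sqrt{N})^{2})\), the probability that a nominally excitatory weight is negative is \(\Phi(-\bar{J}/(\sigma\sqrt{N}))\), which tends to 1/2 as \(N \to \infty\). A quick numerical check, with hypothetical values \(\bar{J} = \sigma = 1\):

```python
import math

def p_wrong_sign(N, J_bar=1.0, sigma=1.0):
    """P(J_ij < 0) when J_ij ~ Normal(J_bar/N, (sigma/sqrt(N))^2).

    J_bar and sigma are hypothetical values; the standard-normal CDF is
    computed via the error function.
    """
    z = -J_bar / (sigma * math.sqrt(N))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# The fraction of wrong-signed outgoing weights grows toward 1/2 with N:
for N in (10, 100, 10000):
    print(N, p_wrong_sign(N))
```

Under the \(1/\sqrt{N}\) mean scaling used here, by contrast, the corresponding probability is independent of N, so single-signedness can be enforced uniformly in the network size.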
In contrast, the reduced system in Sect. 4.2 has no nontrivial limit as \(N \rightarrow\infty\) and is neither a limit nor a system average; rather, it simply gives the reduced dynamics on a specific invariant subspace. Consequently, every solution of the reduced system is also an exact solution of the original system. The parameter β allows a single equation to capture arbitrary bisections of the inhibitory population; in principle, adding more equations would allow further branches to be captured. As another consequence of this scaling, the locations of the bifurcations \(g^{\ast}\) and \(g^{H}\) and the expected frequency of oscillations \(\omega(g^{H})\) scale like \(\sqrt{N}\); arguably, \(g^{\ast}\) and \(\omega(g^{H})\) will reach unphysical levels as N becomes large.
Finally, the stronger mean scaling may underlie another difference from previous work: analyzing networks with \(1/N\) scaling, other authors have found population-level oscillations via Hopf bifurcations in reduced equations for the mean activity [35, 36]. However, in those works the oscillations are not necessarily observable at the level of individual cell activity (e.g., Fig. 3 in [36]); here, we have distinct cell-level and population-level oscillations.
Analysis of spontaneous symmetry breaking enjoys a rich history in mathematical biology and, in particular, in mathematical neuroscience. However, the literature we are aware of identifies symmetry breaking in structured networks dominated by deterministic behavior. For example, symmetry breaking has been hypothesized to underlie the dynamics of visual hallucinations [37] and ambiguous visual percepts [38], central pattern generators that govern rhythmic behaviors such as breathing, eating, and swimming [39–41], and periodic head/limb motions [42–44]. Most recently, Kriener et al. [45] investigated a Dale’s-law-conforming orientation model and found that the dynamics are affected by a translation symmetry imposed by the regularity of the cell grid. In contrast, the present paper identifies an important role for symmetry in a family of networks usually thought of as dominated by randomness.
6.3 Future Directions
In this paper, we have focused on analyzing the deterministic system underlying a family of Dale’s-law-conforming networks. However, our ultimate interest is in the perturbation away from this system: a full characterization of the dynamics remains to be completed. Thus far, we have observed more variable behavior in constrained than in Gaussian networks: at the same coupling parameter g, individual networks display behavior ranging from periodic (as in Fig. 8C) to chaotic, suggesting that this task will be more subtle than for Gaussian networks (see also [20]). Future work will examine this in more detail.
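Explorations of this kind can be reproduced with a minimal forward-Euler integration of the classic rate model of [5], \(\dot{\mathbf{x}} = -\mathbf{x} + \mathbf{J}\tanh(\mathbf{x})\), a close relative of the system studied here. The sketch below uses a pure Gaussian network; the network size, step size, horizon, and the choice g = 6 are illustrative assumptions.

```python
import numpy as np

def simulate_rate_network(J, T=100.0, dt=0.05, seed=1):
    """Forward-Euler integration of dx/dt = -x + J tanh(x).

    A minimal sketch of the classic rate model of [5]; the step size,
    horizon, and initial-condition scale are arbitrary choices.
    """
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    x = 0.1 * rng.standard_normal(N)       # small random initial condition
    traj = np.empty((int(T / dt), N))
    for k in range(traj.shape[0]):
        x = x + dt * (-x + J @ np.tanh(x))
        traj[k] = x
    return traj

# A Gaussian network above the g = 1 instability typically leaves the
# origin and settles into non-trivial (often chaotic) dynamics.
N, g = 200, 6.0
rng = np.random.default_rng(0)
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
traj = simulate_rate_network(J)
print(traj[-1].std())
```

Swapping in a constrained matrix (mean structure H plus disorder, as in Sect. 2) in place of J allows the periodic-vs.-chaotic variability described above to be observed directly.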
Recent research has focused on the computational power of random networks in the (nominally unpredictable) chaotic regime. Such networks enjoy high computational power because their chaotic dynamics give them access to a rich, complicated phase space, which can be exploited during training to perform complex tasks [21, 46]. It is an open question whether the structure of the networks examined here affects their computational performance on tasks that have previously been studied in Gaussian or other random ensembles. One preliminary study has yielded intriguing results [47]: we integrated networks with one of two oscillatory forcing terms \(I_{1}(t)\) and \(I_{2}(t)\), as described in [23], and compared the performance of these networks on two computational tasks: encoding the network-averaged firing rate with a subpopulation, and discriminating the two inputs in phase space. As expected, Gaussian networks performed worse than constrained networks at encoding population firing rates (similarly to what was observed in the balanced networks studied by [23]). However, this difference cannot be explained solely by the dimensionality of the solution trajectories: constrained networks performed better even though both network types required an equal number of principal components to explain their solution trajectories. For the second task, we observed that for constrained networks the trajectories associated with \(I_{1}\) and \(I_{2}\) appeared to cluster in distinct regions of principal-component phase space; this clustering was not observed for Gaussian networks.
Finally, the ideas explored here can be applied to more general network symmetries: for example, a network with several excitatory clusters and global inhibition, or several weakly connected balanced networks [48]. This will both add realism and allow us to explore whether broad features of realistic neural network symmetries (cortex-like excitatory/inhibitory ratios, the distinct spatial specificity of excitatory vs. inhibitory connections, and so forth) imply universal dynamical features. We look forward to reporting on this in future work.
This last direction, in particular, promises further insight into the study of stability and bifurcations in reduced-order models. The work in this paper has highlighted how low-dimensional models of high-dimensional networks can be used to understand the bifurcation structures resulting from network connectivity. Such studies are directly relevant to neuroscience, where the input–output functionality of extremely high-dimensional networks has been demonstrated to be encoded dynamically in low-dimensional subspaces [49–55]. We hope that studies such as this one help highlight both methods for characterizing the collective behavior of networked neurons and the limits of traditional mathematical methods in determining the stability of such systems. In either case, the results suggest that further study is needed to understand the role of connectivity in driving network-level dynamics.
For example, consider \(k \le n_{E}\); then \(F_{k}(\mathbf {x},g) = x_{k} - \frac{\mu_{E}}{\sqrt{N}} \tanh(g x_{k}) + \sum_{j \le n_{E}} \frac{\mu_{E}}{\sqrt{N}} \tanh(g x_{j}) - \sum_{j > n_{E}} \frac{\alpha\mu_{E}}{\sqrt{N}} \tanh(g x_{j}) = x_{k} - \frac{\mu_{E}}{\sqrt{N}} \tanh(g x_{k}) + C\), where C is the same for any cell. C is clearly unchanged under any permutation of the labels of the excitatory cells or under any permutation of the inhibitory cells.
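This permutation invariance is easy to check numerically. The sketch below implements F as written above, extended to inhibitory indices in the natural way (a labeled assumption, since the text gives only the case \(k \le n_{E}\)), and verifies that F commutes with any permutation that maps excitatory labels to excitatory labels and inhibitory labels to inhibitory labels; the sizes and parameter values are arbitrary.

```python
import numpy as np

def F(x, g, nE, mu_E=1.0, alpha=4.0):
    """F_k = x_k - (c_k/sqrt(N)) tanh(g x_k) + sum_j (c_j/sqrt(N)) tanh(g x_j),
    with c_j = mu_E for excitatory j and c_j = -alpha*mu_E for inhibitory j.
    (The text gives the case k <= nE; the extension to all k is our assumption.)
    """
    N = x.size
    c = np.concatenate([np.full(nE, mu_E),
                        np.full(N - nE, -alpha * mu_E)]) / np.sqrt(N)
    t = np.tanh(g * x)
    C = (c * t).sum()  # the constant C: identical for every cell
    return x - c * t + C

rng = np.random.default_rng(0)
N, nE, g = 10, 6, 2.0
x = rng.standard_normal(N)
# A permutation that shuffles excitatory and inhibitory labels separately:
perm = np.concatenate([rng.permutation(nE), nE + rng.permutation(N - nE)])
print(np.allclose(F(x[perm], g, nE), F(x, g, nE)[perm]))
```

Because C is a sum over all cells with population-dependent coefficients, it is invariant under any such block-preserving permutation, and the remaining terms act componentwise, so the check succeeds.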
Although we do not need this theorem to tell us that a Hopf bifurcation occurs, as the eigenvalue pair is simple, it does guarantee that the resulting solutions have the same symmetry group.
Declarations
Acknowledgements
Not applicable.
Availability of data and materials
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Funding
AKB was supported by a Mathematical Biosciences Institute Early Career Award. ES was supported by NSF/NIGMS DMS1361145 and Washington Research Foundation Fund for Innovation in DataIntensive Discovery. These funding bodies had no role in the design of the study; collection, analysis, and interpretation of computational results; or in writing the manuscript.
Authors’ contributions
AKB, JNK, and ES designed the project. AKB and ES wrote software. AKB performed simulations and bifurcation analyses. AKB designed figures. AKB, JNK, and ES wrote the paper. All authors read and approved the final manuscript.
Ethics approval and consent to participate
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Not applicable.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
 Watts DJ, Strogatz SH. Collective dynamics of ‘small-world’ networks. Nature. 1998;393(6684):440–2.
 Park HJ, Friston K. Structural and functional brain networks: from connections to cognition. Science. 2013. doi:10.1126/science.1238411.
 Hu Y, Brunton SL, Cain N, Mihalas S, Kutz JN, Shea-Brown E. Feedback through graph motifs relates structure and function in complex networks. arXiv:1605.09073. 2016.
 Tao T, Vu V, Krishnapur M. Random matrices: universality of ESDs and the circular law. Ann Probab. 2010;38(5):2023–65.
 Sompolinsky H, Crisanti A, Sommers HJ. Chaos in random neural networks. Phys Rev Lett. 1988;61(3):259–62.
 Girko V. Circular law. Theory Probab Appl. 1985;29:694–706.
 Sommers HJ, Crisanti A, Sompolinsky H, Stein Y. Spectrum of large random asymmetric matrices. Phys Rev Lett. 1988;60(19):1895–8.
 Bai ZD. Circular law. Ann Probab. 1997;25(1):494–529.
 van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science. 1996;274(5293):1724–6.
 Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, Harris KD. The asynchronous state in cortical circuits. Science. 2010;327:587–90.
 Rajan K, Abbott LF. Eigenvalue spectra of random matrices for neural networks. Phys Rev Lett. 2006;97:188104.
 Wei Y. Eigenvalue spectra of asymmetric random matrices for multicomponent neural networks. Phys Rev E. 2012;85:066116.
 Tao T. Outliers in the spectrum of iid matrices with bounded rank perturbations. Probab Theory Relat Fields. 2013;155:231–63.
 Muir DR, Mrsic-Flogel T. Eigenspectrum bounds for semirandom matrices with modular and spatial structure for neural networks. Phys Rev E. 2015;91:042808.
 Ahmadian Y, Fumarola F, Miller KD. Properties of networks with partially structured and partially random connectivity. Phys Rev E. 2015;91:012820.
 Aljadeff J, Stern M, Sharpee T. Transition to chaos in random networks with cell-type-specific connectivity. Phys Rev Lett. 2015;114:088101.
 Hermann G, Touboul J. Heterogeneous connections induce oscillations in large-scale networks. Phys Rev Lett. 2012;109:018702.
 Cabana T, Touboul J. Large deviations, dynamics and phase transitions in large stochastic and disordered neural networks. J Stat Phys. 2013;153:211–69.
 Kadmon J, Sompolinsky H. Transition to chaos in random neuronal networks. Phys Rev X. 2015;5:041030.
 Garcia del Molino LC, Pakdaman K, Touboul J, Wainrib G. Synchronization in random balanced networks. Phys Rev E. 2013;88:042824.
 Sussillo D, Abbott LF. Generating coherent patterns of activity from chaotic neural networks. Neuron. 2009;63:544–57.
 Rajan K, Abbott LF, Sompolinsky H. Stimulus-dependent suppression of chaos in recurrent neural networks. Phys Rev E. 2010;82:011903.
 Ostojic S. Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nat Neurosci. 2014;17:594–600.
 Connelly WM, Lees G. Modulation and function of the autaptic connections of layer V fast spiking interneurons in the rat neocortex. J Physiol. 2010;588:2047–63.
 Cicogna G. Symmetry breakdown from bifurcation. Lett Nuovo Cimento (2). 1981;31(17):600–2.
 Golubitsky M, Stewart I, Schaeffer DG. Singularities and groups in bifurcation theory. Vol. II. New York: Springer; 1988.
 Hoyle RB. Pattern formation: an introduction to methods. Cambridge: Cambridge University Press; 2006. (Cambridge Texts in Applied Mathematics).
 Dhooge A, Govaerts W, Kuznetsov YA. MATCONT: a MATLAB package for numerical bifurcation analysis of ODEs. ACM Trans Math Softw. 2003;29(3):141–64.
 Lauterbach R, Matthews P. Do absolutely irreducible group actions have odd dimensional fixed point spaces? arXiv:1011.3986v1. 2010.
 Lauterbach R, Schwenker S. Equivariant bifurcations in 4-dimensional fixed point spaces. Dyn Syst. 2017;32(1):117–47.
 Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972;12:1–23.
 Wilson HR, Cowan JD. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik. 1973;13:55–80.
 Ermentrout GB, Terman D. Foundations of mathematical neuroscience. Berlin: Springer; 2010.
 Bressloff P. Spatiotemporal dynamics of continuum neural fields. J Phys A, Math Theor. 2012;45:033001.
 Ginzburg I, Sompolinsky H. Theory of correlations in stochastic neural networks. Phys Rev E. 1994;50(4):3171–91.
 Brunel N, Hakim V. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput. 1999;11:1621–71.
 Ermentrout GB, Cowan JD. A mathematical theory of visual hallucination patterns. Biol Cybern. 1979;34:137–50. doi:10.1007/BF00336965.
 Diekman CO, Golubitsky M. Network symmetry and binocular rivalry experiments. J Math Neurosci. 2014;4:12.
 Butera RJ Jr., Rinzel J, Smith JC. Models of respiratory rhythm generation in the pre-Bötzinger complex. I. Bursting pacemaker neurons. J Neurophysiol. 1999;82(1):382–97.
 Marder E, Bucher D. Central pattern generators and the control of rhythmic movements. Curr Biol. 2001;11(23):986–96.
 Pearson K. Common principles of motor control in vertebrates and invertebrates. Annu Rev Neurosci. 1993;16:265–97.
 Golubitsky M, Stewart I, Buono PL, Collins JJ. A modular network for legged locomotion. Physica D. 1998;115:56–72.
 Buono PL, Golubitsky M. Models of central pattern generators for quadruped locomotion. I. Primary gaits. J Math Biol. 2001;42:291–326.
 Golubitsky M, Shiau LJ, Stewart I. Spatiotemporal symmetries in the disynaptic canal-neck projection. SIAM J Appl Math. 2007;67(5):1396–417.
 Kriener B, Helias M, Rotter S, Diesmann M, Einevoll GT. How pattern formation in ring networks of excitatory and inhibitory spiking neurons depends on the input current regime. Front Comput Neurosci. 2014;7:187. doi:10.3389/fncom.2013.00187.
 Sussillo D. Neural circuits as computational dynamical systems. Curr Opin Neurobiol. 2014;25:156–63.
 Barreiro AK, Kutz JN, Shlizerman E. Symmetries constrain the transition to heterogeneous chaos in balanced networks. BMC Neurosci. 2015;16(1):229.
 Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat Neurosci. 2012;15(11):1498–505.
 Laurent G. Olfactory network dynamics and the coding of multidimensional signals. Nat Rev Neurosci. 2002;3:884–95. doi:10.1038/nrn964.
 Broome BM, Jayaraman V, Laurent G. Encoding and decoding of overlapping odor sequences. Neuron. 2006;51(4):467–82.
 Yu BM, Cunningham JP, Santhanam G, Ryu SI, Shenoy KV, Sahani M. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J Neurophysiol. 2009;102:614–35.
 Machens CK, Romo R, Brody CD. Functional, but not anatomical, separation of “what” and “when” in prefrontal cortex. J Neurosci. 2010;30(1):350–60.
 Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV. Neural population dynamics during reaching. Nature. 2012;487(7405):51–6.
 Shlizerman E, Schroder K, Kutz J. Neural activity measures and their dynamics. SIAM J Appl Math. 2012;72(4):1260–91.
 Shlizerman E, Riffell J, Kutz J. Data-driven inference of network connectivity for modeling the dynamics of neural codes in the insect antennal lobe. Front Comput Neurosci. 2014;8:70. doi:10.3389/fncom.2014.00070.