Symmetries constrain dynamics in a family of balanced neural networks

We examine a family of random firing-rate neural networks in which we enforce the neurobiological constraint of Dale's Law --- each neuron makes either excitatory or inhibitory connections onto its post-synaptic targets. We find that this constrained system may be described as a perturbation from a system with non-trivial symmetries. We analyze the symmetric system using the tools of equivariant bifurcation theory, and demonstrate that the symmetry-implied structures remain evident in the perturbed system. In comparison, spectral characteristics of the network coupling matrix are relatively uninformative about the behavior of the constrained system.


Introduction
Networked dynamical systems are of growing importance across the physical, engineering, biological, and social sciences. Indeed, understanding how network connectivity drives network functionality is critical for understanding a broad range of modern-day systems including the power grid, communications networks, the nervous system, and social networking sites. All of these systems are characterized by a large and complex graph connecting many individual units, or nodes, each with its own input-output dynamics. In addition to the node dynamics, how such a system operates as a whole depends on the structure of its connectivity graph [1][2][3], but the connectivity is often so complicated that this structure-function problem is difficult to solve.
Regardless, a ubiquitous observation across the sciences is that meaningful input/output signals in high-dimensional networks are often encoded in low-dimensional patterns of dynamic activity. This suggests that a central role of the network structure is to produce low-dimensional representations of meaningful activity. Furthermore, since connectivity also drives the underlying bifurcation structure of the network-scale activity, and because both this activity and the relevant features of the connectivity graph are low-dimensional, such networks may admit a tractable structure-function relationship. Interestingly, the presence of low-dimensional structure may run counter to the intuition provided by the insights of random network theory, which has otherwise proven to be a valuable tool in analyzing large networks.
In considering an excitatory-inhibitory network inspired by neuroscience, we find a novel family of periodic solutions that restrict dynamics to a low-dimensional attractor within a high-dimensional phase space. These solutions arise as a consequence of an underlying symmetry in the mean connectivity structure and can be predicted and analyzed using equivariant bifurcation theory. We then show that low-dimensional models of the high-dimensional network, which are more tractable for computational bifurcation studies, preserve all the key features of the bifurcation structure. Finally, we demonstrate that these dynamics differ strikingly from the predictions made by random network theory in a similar setting.
Random network theory, in which one seeks to draw conclusions about an ensemble of randomly chosen networks rather than a specific instance of a network, is particularly relevant to neural networks because such networks are large, underspecified (most connections cannot be measured), and heterogeneous (connections are variable both within, and between, organisms). It is particularly tempting to apply the tools of random matrix theory to the connectivity graph, as the spectra of certain classes of random matrices display universal behavior as network size N → ∞ [4]. The seminal work of Sompolinsky et al. [5] analyzes a family of single-population firing-rate networks in which connections are chosen from a mean-zero Gaussian distribution: in the limit of large network size (N → ∞), they find that the network transitions from quiescence to chaos as a global coupling parameter passes a bifurcation value g* = 1. This value coincides with the point at which the spectrum of the random connectivity matrix exits the unit circle [6][7][8], thereby connecting linear stability theory with the full nonlinear dynamics.
Developing similar results for structured, multipopulation networks has proven more challenging. One natural constraint to introduce is that of Dale's Law: each neuron makes either excitatory or inhibitory connections onto its postsynaptic targets. For a neural network, this constraint is manifested in a synaptic weight matrix with single-signed columns. If weights are tuned so that incoming excitatory and inhibitory currents approximately cancel (i.e., Σ_j G_ij ≈ 0), then such a network may be called balanced (we note that our use of the word "balanced" is distinct from the dynamic balance that arises in random networks when excitatory and inhibitory synaptic currents approximately cancel, as studied by [9,10] and others). Rajan and Abbott [11] studied balanced rank-one perturbations of Gaussian matrices and found that, remarkably, the spectrum is unchanged. More recent papers have addressed the spectra of more general low-rank perturbations [12][13][14], general deterministic perturbations [15], and block-structured matrices [16].
However, the relationship between linear/spectral and nonlinear dynamics appears to be more complicated than in the unstructured case. Aljadeff et al. [16] do find that the spectral radius is a good predictor of qualitative dynamics and learning capacity in networks with block-structured variances. Others have studied the large-network limit, but with mean connectivity scaling like 1/N (smaller than the standard deviation 1/√N); therefore, as N → ∞, the columns cease to be single-signed [17][18][19]. In a recent paper, which studies a balanced network with mean connectivity 1/√N, the authors find a slow noise-induced synchronized oscillation that emerges when a special condition (perfect balance) is imposed on the connectivity matrix [20]. As a growing body of work continues to connect qualitative features of nonlinear dynamics and learning capacity [21][22][23], it is crucial to further develop our understanding of how complex nonlinear dynamics emerge in structured heterogeneous networks.
In this paper, we study a family of excitatory-inhibitory networks in which both the mean and variability of connection strengths scale like 1/ √ N . In a small but crucial difference from other recent work [11,20], we reduce self-coupling. We show that with this change, these networks exhibit a (heretofore unreported) family of periodic solutions. These solutions arise as a consequence of an underlying symmetry in the mean connectivity structure and can be predicted and analyzed using equivariant bifurcation theory. We show through concrete examples that these periodic orbits can persist in heterogeneous networks, even for large perturbations. Moreover, we demonstrate that low-dimensional models (reduced-order models) can be generated to characterize the high-dimensional system and its underlying bifurcation structure; we use the reduced model to study these oscillations as a function of system size N . Thus the work suggests both how biophysically relevant symmetries may play a crucial role in the observable dynamics, and also how reduced-order models can be constructed to more easily study the underlying dynamics and bifurcations.

Mathematical Model
We consider a network in which each node represents the firing rate of a single neuron, connected through a random weight matrix via sigmoidal activation functions. This is the model studied in Refs. [5,11,20], with some important modifications that we detail below. Specifically, we analyze the family of random networks

ẋ_i = −x_i + Σ_{j=1}^N G_ij tanh(g x_j), i = 1, …, N, (1)

where

√N G = H + εA, (2)

with an N × N matrix H such that

H_ij = μ_E for j ≤ n_E (i ≠ j), H_ij = μ_I for j > n_E (i ≠ j), H_jj = b_E μ_E for j ≤ n_E, H_jj = b_I μ_I for j > n_E, (3)

and A a random matrix with independent, mean-zero entries whose standard deviation is σ_E in the excitatory columns and σ_I in the inhibitory columns. We assume that μ_I < 0 < μ_E; therefore the first n_E nodes are excitatory, and the remainder are inhibitory. We use the parameter f to denote the fraction of neurons that are excitatory, that is, f = n_E/N. The parameter α characterizes the ratio of inhibitory-to-excitatory synaptic strengths: μ_I = −αμ_E. We refer to the network as balanced if the mean connectivity into any cell is zero, that is, n_E μ_E + n_I μ_I = 0, or equivalently α = n_E/n_I. Further, in all cases, f = 0.8, reflecting the approximately 80%/20% ratio observed in cortex; the corresponding value of α for a balanced network is α = 4. Finally, we choose σ_E and σ_I so that the variances of excitatory and inhibitory connections into each cell are equal.

The matrix H has constant columns, except for the diagonal, which reflects self-coupling from each cell onto itself. The parameters b_E and b_I give the ratios of self- to non-self-connection strengths for excitatory and inhibitory cells, respectively. We assume that the effect of self-coupling is to reduce connection strengths, that is, 0 ≤ b_E, b_I ≤ 1. We note that, as in [5] but in contrast to later work [11,20], self-interactions can differ from interactions with other neurons, that is, G_jj ≠ G_ij. This is a reasonable assumption if we conceptualize each firing-rate unit x_j as corresponding to an individual neuron; whereas neurons can have self-synapses (or autapses [24]), refractory dynamics would tend to suppress self-coupling from influencing the firing rate.
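As a concrete illustration of the model, the following sketch builds the mean matrix H, forms the noiseless coupling G = H/√N (that is, ε = 0), and integrates ẋ = −x + G tanh(gx) with forward Euler. The parameter values (n_E = 16, n_I = 4, μ_E = 1, g = 1) are illustrative choices only; below the first bifurcation value the origin is stable and activity decays.

```python
import numpy as np

def build_H(n_E, n_I, mu_E=1.0, alpha=4.0, b_E=0.0, b_I=0.0):
    """Mean connectivity H: column j is mu_E (excitatory) or mu_I = -alpha*mu_E
    (inhibitory), with the diagonal scaled by the self-coupling factors b_E, b_I."""
    N = n_E + n_I
    mu_I = -alpha * mu_E
    col = np.concatenate([np.full(n_E, mu_E), np.full(n_I, mu_I)])
    H = np.tile(col, (N, 1))                      # H[i, j] = col[j]
    np.fill_diagonal(H, np.concatenate([np.full(n_E, b_E * mu_E),
                                        np.full(n_I, b_I * mu_I)]))
    return H

def simulate(G, g, x0, T=100.0, dt=1e-2):
    """Forward-Euler integration of x' = -x + G @ tanh(g x)."""
    x = x0.copy()
    for _ in range(int(T / dt)):
        x = x + dt * (-x + G @ np.tanh(g * x))
    return x

rng = np.random.default_rng(0)
n_E, n_I = 16, 4                                  # f = 0.8, balanced (alpha = 4)
N = n_E + n_I
H = build_H(n_E, n_I)                             # b_E = b_I = 0: no self-coupling
G = H / np.sqrt(N)                                # noiseless network (eps = 0)
x_end = simulate(G, g=1.0, x0=0.1 * rng.standard_normal(N))
```

With b_E = b_I = 1 (full self-coupling) every row of H sums to zero, which is the balance property discussed above; with reduced self-coupling the rows instead sum to −μ_E or −μ_I.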
We find that many features of the resulting dynamics may be connected to an approximate symmetry of the system. Specifically, if we remove the "noise" from the connectivity matrix G, so that √N G = H, then the subspace in which all E neurons have the same activity and all I neurons have the same activity, that is,

x_1 = ⋯ = x_{n_E}, x_{n_E+1} = ⋯ = x_N,

is invariant under the dynamics ẋ = −x + G tanh(gx). To be precise, the system of equations is equivariant under the group of permutation symmetries Γ = S_{n_E} × S_{n_I}, which contains any permutation of the n_E excitatory neurons and any permutation of the n_I inhibitory neurons.
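Both the equivariance and the invariant-subspace claims can be verified directly in a small example (a sketch; the sizes and the coupling g = 2 are arbitrary illustrative choices): permuting excitatory labels among themselves and inhibitory labels among themselves commutes with the noiseless vector field, and the E-synchronized/I-synchronized subspace is mapped to itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_E, n_I, mu_E, alpha, g = 16, 4, 1.0, 4.0, 2.0
N = n_E + n_I
col = np.concatenate([np.full(n_E, mu_E), np.full(n_I, -alpha * mu_E)])
H = np.tile(col, (N, 1))
np.fill_diagonal(H, 0.0)                       # b_E = b_I = 0

def F(x):
    """Right-hand side of x' = -x + G tanh(g x), with G = H / sqrt(N)."""
    return -x + (H / np.sqrt(N)) @ np.tanh(g * x)

# a permutation that shuffles E labels among themselves and I labels among themselves
perm = np.concatenate([rng.permutation(n_E), n_E + rng.permutation(n_I)])
x = rng.standard_normal(N)
equivariant = np.allclose(F(x[perm]), F(x)[perm])

# the E-synchronized / I-synchronized subspace is invariant: F maps it to itself
x_sync = np.concatenate([np.full(n_E, 0.7), np.full(n_I, -0.2)])
y = F(x_sync)
invariant = np.allclose(y[:n_E], y[0]) and np.allclose(y[n_E:], y[n_E])
```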
We begin by considering the "noiseless" system in (1), where √N G = H. The solutions that arise in this system can be readily identified because of the underlying symmetries of the network. We find that these solutions actually do arise in numerical simulations; furthermore, they persist even when the symmetry is perturbed (√N G = H + εA with ε > 0).

Some Preliminary Analysis: Spectrum of H
To analyze stability and detect bifurcations, we will frequently make reference to the Jacobian of (1), (2); when ε = 0, we will find that this always takes on a column-structured form. We begin by summarizing some facts about the spectra of such matrices.
Let K_N be the matrix with ones everywhere except on the diagonal, that is,

K_N = 1_N 1_N^T − I_N,

where 1_N ∈ R^N is the vector of all ones, and I_N is the N × N identity matrix. Then K_N has the simple eigenvalue N − 1 and the eigenvalue −1 with multiplicity N − 1.
Proof By inspection, 1_N is an eigenvector with corresponding eigenvalue N − 1 (as each row sums to N − 1). The remaining eigenvectors must satisfy K_N v = −v, since each such eigenvector is orthogonal to 1_N and therefore K_N v = 1_N(1_N^T v) − v = −v.
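Both eigenvalue claims are easy to confirm numerically (the size N = 12 is an arbitrary illustrative choice):

```python
import numpy as np

N = 12
K = np.ones((N, N)) - np.eye(N)        # ones off the diagonal, zeros on it
eig = np.sort(np.linalg.eigvalsh(K))   # K is symmetric, so eigvalsh applies
```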
The Jacobian of (1) has the following special structure: except for its diagonal, the entries in column j depend only on the j th coordinate (and are all equal). This leads to a simplification of the spectrum when the cells are divided into synchronized populations. To be precise, we can make the following statement.
Suppose the cells are partitioned into K + 1 populations with index sets I_0, …, I_K, and suppose that

J_ij = a_k for j ∈ I_k, i ≠ j, and J_jj = −1 + b_k for j ∈ I_k;

that is, J has constant columns (except for the diagonal), with the value in each column determined by the population identity. Then the eigenvalues of J are: (1) −1 − a_k + b_k with multiplicity n_{I_k} − 1, for k = 0, …, K; (2) the K + 1 remaining eigenvalues coincide with the eigenvalues of the matrix J̃ with entries

J̃_jk = n_{I_k} a_k for j ≠ k, J̃_jj = −1 + b_j + (n_{I_j} − 1) a_j,

where n_{I_j} is the number of cells in population j. We note that the size of J̃ is set by the number of subpopulations, that is, J̃ is (K + 1) × (K + 1).

Proof This can be checked by direct computation: 1. For k = 0, …, K, there are n_{I_k} − 1 linearly independent eigenvectors given by vectors that (a) have support only on I_k and (b) sum to zero, that is, v_j = 0 if j ∉ I_k and v ⊥ 1_N. 2. The remaining eigenvectors are given by vectors that are constant on each index set: v_j = c_k if j ∈ I_k, where [c_0 c_1 ⋯ c_K] is an eigenvector of J̃.
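The statement can be checked numerically for an arbitrary column-structured matrix (a sketch; the cluster sizes and column values below are arbitrary illustrative choices, with c_k the off-diagonal column value and d_k the diagonal value of population k): the full spectrum should consist of d_k − c_k with multiplicity n_k − 1, together with the eigenvalues of the reduced matrix J̃.

```python
import numpy as np

sizes = [5, 3, 4]                 # cluster sizes n_k
c = [0.7, -1.2, 0.3]              # off-diagonal column value per cluster
d = [-1.0, -0.5, -2.0]            # diagonal value per cluster

# full matrix: column j carries c[k] (k = cluster of j), diagonal carries d[k]
cols = np.concatenate([np.full(n, ck) for n, ck in zip(sizes, c)])
J = np.tile(cols, (sum(sizes), 1))
np.fill_diagonal(J, np.concatenate([np.full(n, dk) for n, dk in zip(sizes, d)]))

# predicted spectrum: d_k - c_k with multiplicity n_k - 1, plus eig(J_tilde)
K = len(sizes)
J_tilde = np.array([[sizes[k] * c[k] + (d[m] - c[m]) * (m == k) for k in range(K)]
                    for m in range(K)])
predicted = np.concatenate([np.full(n - 1, dk - ck)
                            for n, ck, dk in zip(sizes, c, d)]
                           + [np.linalg.eigvals(J_tilde)])
actual = np.linalg.eigvals(J)
match = np.allclose(np.sort_complex(predicted), np.sort_complex(actual))
```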
We now consider specific examples that are of particular importance.

Example 1
We consider a balanced network with (possibly) reduced self-coupling: α = n_E/n_I and 0 ≤ b_E, b_I ≤ 1. The origin x = 0 is a fixed point of (1) for all g. Therefore, we can think of the population as consisting of two synchronized populations, excitatory and inhibitory, that is, n_0 = n_E and n_1 = n_I. Then the Jacobian at the origin has the following eigenvalues: (1) λ_E = −1 − gμ_E(1 − b_E)/√N with multiplicity n_E − 1; (2) λ_I = −1 + gαμ_E(1 − b_I)/√N with multiplicity n_I − 1; (3) two remaining eigenvalues given by the 2 × 2 matrix J̃:

J̃ = −I + (g/√N) [[μ_E(n_E − 1 + b_E), μ_I n_I], [μ_E n_E, μ_I(n_I − 1 + b_I)]].

This is a complex pair as long as n_E > (α(1 − b_I) + (1 − b_E))/4. We note that λ_E < λ ≡ Re(λ_{1,2}) < λ_I. The eigenvalue associated with the excitatory population satisfies λ_E < 0 for any value of g.
The corresponding eigenvectors are: for λ_E, zero-sum vectors supported on the excitatory cells; for λ_I, zero-sum vectors supported on the inhibitory cells; and, for the complex pair, vectors in the two-dimensional subspace in which all E cells are synchronized and all I cells are synchronized.

We pause to consider two special cases of Example 1. The first is no self-coupling, b_E = b_I = 0, which we will examine in detail in the rest of this paper. The second is full self-coupling, b_E = b_I = 1, which has been studied previously by many authors [11,19,20].

Example 1.1 We consider Example 1, but with no self-coupling: b_E = b_I = 0. Then at the origin x = 0, the Jacobian has the following eigenvalues: (1) λ_E = −1 − gμ_E/√N with multiplicity n_E − 1; (2) λ_I = −1 + gαμ_E/√N with multiplicity n_I − 1; (3) two remaining eigenvalues given by the 2 × 2 matrix J̃, which form a complex pair as long as n_E > (α + 1)/4, so λ_{1,2} = λ ± iω, where

λ = −1 + gμ_E(α − 1)/(2√N), ω = (gμ_E/(2√N)) √((α + 1)(4n_E − α − 1)).

We note that λ_E < λ ≡ Re(λ_{1,2}) < λ_I. The eigenvalue associated with the excitatory population satisfies λ_E < 0 for any value of g. In the (un-cortex-like) situation in which the excitatory population is smaller than the inhibitory population (α < 1), the complex pair would also be stable for all g, since then λ < 0.
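A direct numerical check of Example 1.1 (a sketch; the values n_E = 16, n_I = 4, μ_E = 1, α = 4, g = 2 are illustrative): the Jacobian at the origin has λ_E with multiplicity n_E − 1, λ_I with multiplicity n_I − 1, and a complex pair that sits exactly on the imaginary axis when g = 2√N/(μ_E(α − 1)).

```python
import numpy as np

n_E, n_I, mu_E, alpha = 16, 4, 1.0, 4.0
N = n_E + n_I
col = np.concatenate([np.full(n_E, mu_E), np.full(n_I, -alpha * mu_E)])
H = np.tile(col, (N, 1))
np.fill_diagonal(H, 0.0)                          # b_E = b_I = 0

def spectrum_at_origin(g):
    """Eigenvalues of the Jacobian -I + (g / sqrt(N)) H at x = 0."""
    return np.linalg.eigvals(-np.eye(N) + (g / np.sqrt(N)) * H)

g = 2.0
lam = spectrum_at_origin(g)
lam_E = -1 - g * mu_E / np.sqrt(N)                # predicted, multiplicity n_E - 1
lam_I = -1 + g * alpha * mu_E / np.sqrt(N)        # predicted, multiplicity n_I - 1
mult = lambda v: int(np.sum(np.isclose(lam, v, atol=1e-8)))

# the complex pair crosses the imaginary axis at g_H = 2 sqrt(N) / (mu_E (alpha - 1))
g_H = 2 * np.sqrt(N) / (mu_E * (alpha - 1))
pair = spectrum_at_origin(g_H)
pair = pair[np.abs(pair.imag) > 1e-6]
```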
Example 1.2 We consider Example 1, but where self-coupling is not reduced: b_E = b_I = 1. Consider the eigenvalues at the origin x = 0 described in Example 1: (1) since b_E = 1, λ_E = −1 with multiplicity n_E − 1; (2) since b_I = 1, λ_I = −1 with multiplicity n_I − 1; (3) two remaining eigenvalues given by the 2 × 2 matrix J̃, which (using the balance condition n_E μ_E + n_I μ_I = 0) also has the repeated eigenvalue −1.
Thus, every eigenvalue of the Jacobian at the origin is −1; crucially, this does not depend on the coupling parameter g. In Sect. 3, we describe how bifurcations occur at the origin as g is varied; these cannot occur if self-coupling is not reduced, since the eigenvalues of the Jacobian cannot pass through the imaginary axis.
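This can be checked numerically (a sketch with illustrative parameter values; the coupling g = 3.7 is arbitrary): with b_E = b_I = 1, H has genuinely constant columns, is rank one, and, by balance, has zero trace, so every eigenvalue of −I + gH/√N equals −1 for any g.

```python
import numpy as np

n_E, n_I, mu_E, alpha = 16, 4, 1.0, 4.0
N = n_E + n_I
col = np.concatenate([np.full(n_E, mu_E), np.full(n_I, -alpha * mu_E)])
H = np.outer(np.ones(N), col)      # b_E = b_I = 1: columns constant everywhere

rank = np.linalg.matrix_rank(H)    # rank one
trace = np.trace(H)                # = n_E*mu_E + n_I*mu_I = 0 when balanced
lam = np.linalg.eigvals(-np.eye(N) + (3.7 / np.sqrt(N)) * H)   # any g works
```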
(Another way to reach the same conclusion is to notice that H is then a rank-one matrix [11], with at most one nonzero eigenvalue; since μ_E n_E + μ_I n_I = 0, this last eigenvalue is zero as well.) It is worth pointing out that in Example 1, the O(1/√N) scaling of the complex eigenvalue λ ± iω relies on balanced coupling; the N-dependent part of the trace of the matrix in equation (10), namely n_E μ_E + n_I μ_I, vanishes exactly when the network is balanced.

Example 2 Next, suppose that the cells have broken into three synchronized populations: the excitatory cells (n_E cells with activity x_E(t)) and two groups of inhibitory cells (n_{I_1} and n_{I_2} cells with activities x_{I_1} and x_{I_2}, respectively). Then n_0 = n_E, n_1 = n_{I_1}, and n_2 = n_{I_2}. Therefore the Jacobian has the following eigenvalues: (1)–(3) three eigenvalues, with multiplicities n_E − 1, n_{I_1} − 1, and n_{I_2} − 1, corresponding to within-population modes; (4) three remaining eigenvalues given by the 3 × 3 matrix J̃ described earlier.
We note that λ_E < 0 always, as long as b_E ≤ 1. The corresponding eigenvectors are, as before, zero-sum vectors supported on a single population together with vectors that are constant on each population.

Solution Families Found in Deterministic Network (ε = 0)
In this section, we use equivariant bifurcation theory to identify which solutions we expect to arise in system (1), where G = H/√N. We also demonstrate that these solutions actually arise in a small network where it is tractable to perform numerical continuation to verify our calculations. Our main tool is the equivariant branching lemma, which tells us which types of solutions (to the equation ẋ = F(x, g)) arise at bifurcation points when symmetries are present [25][26][27]. For example, we might take F to be the function introduced in Sect. 2:

F(x, g) = −x + (1/√N) H tanh(gx).

Before stating this result, we introduce some terminology. Let Γ be a finite group acting on R^N. Then we say that a mapping F : R^N → R^N is Γ-equivariant if F(γx) = γF(x) for all γ ∈ Γ and all x ∈ R^N. For a group Γ and a vector space V, we define the fixed-point subspace for Γ, denoted Fix(Γ), to be all points in V that are unchanged under any of the members of Γ, that is, Fix(Γ) = {x ∈ V : γx = x for all γ ∈ Γ}. Suppose we have a one-parameter family of mappings F(x, g) and we wish to solve F(x, g) = 0. The implicit function theorem states that we can continue to track a unique solution branch as a function of g as long as the Jacobian remains invertible. When this ceases to be true, that is, when (dF)_{x,g} has a nontrivial kernel, we have the possibility of a bifurcation. At this point the number of zero eigenvalues (whether there are one, or two, etc.) and a menagerie of further conditions determine the qualitative properties of the structural change that occurs. What complicates this situation for Γ-equivariant mappings (that is, mappings F(x, g) that are Γ-equivariant for every value of the parameter g) is that, because of symmetries, multiple eigenvalues may pass through zero at once. However, the structural changes that occur are qualitatively the same as those that occur in a nonsymmetric system with a single zero eigenvalue; what changes is that we now have multiple such solution branches, each corresponding to a subgroup of the original symmetries. The following result formalizes this fact.
Theorem 1 (Equivariant branching lemma; paraphrased from [26], p. 82, see also pp. 67–69) Let F : R^N × R → R^N be a one-parameter family of Γ-equivariant mappings with F(x_0, g_0) = 0. Suppose that (x_0, g_0) is a bifurcation point and that, defining V = ker(dF)_{x_0,g_0}, Γ acts absolutely irreducibly on V. Let Σ be an isotropy subgroup of Γ satisfying

dim Fix(Σ) = 1,

where Fix(Σ) = {v ∈ V : σv = v for all σ ∈ Σ}. Then there exists a unique smooth solution branch to F = 0 such that the isotropy subgroup of each solution is Σ.
A similar statement holds for Hopf bifurcations, which we state here because we will appeal to its conclusions regarding the symmetry of periodic solutions.
Theorem 2 (Equivariant Hopf theorem; paraphrased from [26], p. 275) Let F be a one-parameter family of Γ-equivariant mappings with F(x_0, g_0) = 0. Suppose that (dF)_{x_0,g_0} has one or more pairs of complex eigenvalues ρ ± iω satisfying ρ(g_0) = 0 (i.e., the eigenvalues are purely imaginary at g_0) and ρ′(g_0) ≠ 0. Let V be the corresponding real (i.e., not generalized) eigenspace. Let Σ be an isotropy subgroup of Γ satisfying

dim Fix(Σ) = 2,

where Fix(Σ) = {v ∈ V : σv = v for all σ ∈ Σ}. Then there exists a unique branch of small-amplitude periodic solutions (with period near 2π/ω) having Σ as their group of symmetries.
Here, the family of mappings is the right-hand side of equation (1), and the symmetry group is Γ = S_{n_E} × S_{n_I}, where S_n is the symmetric group on n symbols; that is, we are allowed to permute the labels on the excitatory cells and/or to permute the labels on the inhibitory cells. Each permutation of N objects can be represented as an element of GL(N), the group of invertible N × N matrices; Γ is a finite subgroup of such matrices. It is straightforward to check that F is Γ-equivariant. We note that although Theorem 2 states conditions under which a Hopf bifurcation occurs and identifies the symmetry group of the resulting orbit, it does not indicate whether the bifurcation is subcritical or supercritical. This must be determined by other means; we checked this numerically by computing the first Lyapunov coefficient.
Since our model satisfies the assumptions of the equivariant branching lemma, it remains to identify potential bifurcation points (we concentrate on absent self-coupling, that is, b_E = b_I = 0). From the trivial solution (x = 0) we expect solutions to arise when the eigenvalues of −I + gH/√N cross the imaginary axis. In particular, we expect, in order of increasing g:
• A branch of fixed-point solutions when g* = √N/(αμ_E), where the eigenvalues corresponding to the inhibitory population cross zero: here the I cells break into two groups of size n_{I_1} and n_{I_2}. Along this fixed-point branch, the two groups remain clustered; the excitatory cells also remain clustered, that is, the solution branch can be characterized by (x_E, x_{I_1}, x_{I_2}). We refer to this as the "I_1/I_2 branch."
• A branch of limit cycles emerging from a Hopf bifurcation when g = 2√N/(μ_E(α − 1)): here a complex pair crosses the imaginary axis.
From each I_1/I_2 branch we find:
• A branch of limit cycles from a Hopf bifurcation (at g_H) in which the three-cluster pattern is maintained, that is, activity can be characterized by (x_E, x_{I_1}, x_{I_2}).
• If n_{I_1} = n_{I_2}, then the excitatory activity along this branch is zero: there may be a further branch point at which x_E moves away from the origin, whereas the I cells remain in their distinct clusters.
• (Possibly) other fixed-point branches, in which one inhibitory cluster (x_{I_1}, say) breaks into further clusters.

Branch of Fixed Points (from Trivial Solution)
The first opportunity for a bifurcation from the trivial solution occurs when g* = √N/(αμ_E): at this value of g, n_I − 1 eigenvalues pass through zero; the corresponding eigenspace (from Example 1) is the set of all zero-sum vectors with support on the inhibitory cells only, that is,

V = {v ∈ R^N : v_j = 0 for j ≤ n_E, Σ_j v_j = 0}.

To check that Γ acts irreducibly on V, it is sufficient to show that the subspace spanned by the orbit of a single vector v (defined as the set of all values γv for γ ∈ Γ) is all of V; this can be readily confirmed, for example, for the vector that takes the values 1 and −1 on the first two inhibitory cells and is zero elsewhere.
Suppose we break the inhibitory cells into precisely two clusters; we allow all permutations within each cluster, but no longer allow mixing between the clusters. This describes a subgroup of Γ, Σ = S_{n_E} × S_{n_{I_1}} × S_{n_{I_2}}, with n_{I_1} + n_{I_2} = n_I. Assuming (without loss of generality) that the I_1 neurons have the indices n_E + 1, …, n_E + n_{I_1}, and so forth, Σ has the fixed-point subspace within V spanned by the single vector taking the value n_{I_2} on the I_1 cells, −n_{I_1} on the I_2 cells, and zero on the excitatory cells. We can check that Fix(Σ) is a subspace of V; furthermore, dim Fix(Σ) = 1 because it is the span of a single vector. Thus, the equivariant branching lemma tells us that we can expect a new branch of fixed points in which the inhibitory cells break up into two groups (we therefore refer to this as the "I_1/I_2 branch").
If the clusters are of equal size (n_{I_1} = n_{I_2}), then the solution branch shows the pattern (0, x_{I_1}, −x_{I_1}) (by uniqueness, it suffices to show that such a branch exists). To see this, first observe that, on the three-cluster subspace,

ẋ_E = −x_E + (1/√N)[μ_E(n_E − 1) tanh(gx_E) + μ_I n_{I_1}(tanh(gx_{I_1}) + tanh(gx_{I_2}))].
If x_{I_2} = −x_{I_1}, then tanh(gx_{I_2}) = −tanh(gx_{I_1}), and therefore the inhibitory inputs to the excitatory cells cancel; since x_E = 0 then solves the excitatory equation, the pattern (0, x_{I_1}, −x_{I_1}) is consistent. Returning to the inhibitory degrees of freedom, we see that their equations are now decoupled:

ẋ_{I_1} = −x_{I_1} + (αμ_E/√N) tanh(gx_{I_1}), ẋ_{I_2} = −x_{I_2} + (αμ_E/√N) tanh(gx_{I_2}).

If g > g*, the fixed-point equation has three possible solutions: one is x_{I_1,2} = 0, whereas the others can be found by inverting a simple expression relating g and x_{I_1} along the solution branch:

g = (1/x_{I_1}) artanh(√N x_{I_1}/(αμ_E)). (16)

Thus, we can solve for g as a function of x_{I_1} > 0 and set x_{I_2} = −x_{I_1}; checking the Taylor expansion of Eq. (16) confirms that x_{I_1} → 0 as g → g*.
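The construction above can be verified end to end (a sketch; n_E = 16, n_I = 4, μ_E = 1, and the branch coordinate x = 0.3 are illustrative choices): inverting the branch relation gives a g > g*, and the corresponding clustered state (0, x, −x) is an exact fixed point of the full network.

```python
import numpy as np

n_E, n_I, mu_E, alpha = 16, 4, 1.0, 4.0
N = n_E + n_I
mu_I = -alpha * mu_E
col = np.concatenate([np.full(n_E, mu_E), np.full(n_I, mu_I)])
H = np.tile(col, (N, 1))
np.fill_diagonal(H, 0.0)

g_star = np.sqrt(N) / (alpha * mu_E)

def g_of_x(x):
    """Coupling value at which the equal-split branch passes through x_I1 = x."""
    return np.arctanh(np.sqrt(N) * x / (alpha * mu_E)) / x

# build the full-network state (x_E, x_I1, x_I2) = (0, x, -x) and check F = 0
x = 0.3
g = g_of_x(x)
state = np.concatenate([np.zeros(n_E), np.full(n_I // 2, x), np.full(n_I // 2, -x)])
residual = -state + (H / np.sqrt(N)) @ np.tanh(g * state)
```

The Taylor-expansion claim is also visible numerically: g_of_x(x) → g* as x → 0.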

Hopf Bifurcation (on Trivial Solution) Leading to Limit Cycles
The trivial solution is next expected to have a bifurcation when the complex pair of eigenvalues of −I + gH/√N crosses the imaginary axis, that is, when g = 2√N/(μ_E(α − 1)). This is a simple eigenvalue pair, with real eigenspace (again by Example 1) consisting of vectors with all E cells synchronized and all I cells synchronized. This is a two-dimensional vector space; therefore, we expect a branch of periodic solutions to arise in which the excitatory neurons and inhibitory neurons are each synchronized. Here Σ = Γ = S_{n_E} × S_{n_I}.

Hopf Bifurcation (on I 1 /I 2 Branch) Leading to Limit Cycles
On the branch (x_E, x_{I_1}, x_{I_2}), we find two singularities that lead to new structures. Most significantly, we find a Hopf bifurcation that leads to a branch of limit cycles when a pair of complex eigenvalues crosses the imaginary axis. By Example 2 the corresponding eigenspace is fixed under Σ = S_{n_E} × S_{n_{I_1}} × S_{n_{I_2}}; thus, it is a two-dimensional subspace of Fix(Σ), and therefore, by the equivariant Hopf theorem, the family of periodic solutions that emerges here also has Σ as its group of symmetries. Furthermore, in all examples that we encountered, the Hopf bifurcation was supercritical (this was checked numerically); if in addition all other eigenvalues satisfy Re(λ) < 0, then the resulting periodic orbits are stable.
In general, it is not feasible to solve for g H symbolically: this requires us to solve for the roots of a cubic polynomial involving exponential functions (e.g. tanh(gx E )) of implicitly defined parameters x E , x I 1 , and x I 2 . However, we can identify the bifurcation numerically (all continuations were performed with MATCONT [28]), and we have found this bifurcation on every specific I 1 /I 2 branch in every specific system we have investigated.
We can also track the branch of Hopf points numerically in the reduced system (x E , x I 1 , x I 2 ) (described in Sect. 4.2), which has the added benefit that the complexity of the system does not increase with N (rather, N is a bifurcation parameter). Here again, we can confirm that the Hopf bifurcation is present in the system for any N and have done so for several example n I 1 /n I 2 ratios in Sect. 4.3.

Branch Points (on I 1 /I 2 Branch) Leading to New Fixed Point Branch
We may also find branch points on the (x E , x I 1 , x I 2 ) curve in which one of the inhibitory clusters breaks into a further cluster. This occurs if the eigenspace corresponding to x I 1 , say, has a real eigenvalue going through zero. Because these did not appear to play a significant role in our simulations, we do not consider them further.

Reduced Self-Coupling
For the remainder of the paper, we focus on absent self-coupling (b_E = b_I = 0); here we note how our conclusions would be modified in the more general case. At the origin, the locations, but not the qualitative nature, of the bifurcations change. In Example 1.1, a branch point occurs at g* = √N/(αμ_E); in Example 1, its location becomes g*,b = √N/(αμ_E(1 − b_I)); since b_I ≤ 1, we always have g*,b ≥ g*. Similarly, the Hopf bifurcation that occurs at g_H = 2√N/(μ_E(α − 1)) with no self-coupling will now occur at g_H,b = 2√N/(μ_E((α − 1) + b_E − αb_I)) (see the expression (9) for λ).
The relative ordering of g_H and g_H,b depends on the relative values of b_E and b_I: if b_E − αb_I ≤ 0, then g_H,b ≥ g_H; otherwise, g_H,b < g_H. However, we can check that the branch point almost always occurs at a smaller coupling value than the Hopf point, that is, g*,b ≤ g_H,b, with equality if and only if b_E = 1.

Inhibition-Dominated Networks
In this paper, we have focused on balanced networks (α = n_E/n_I). We briefly summarize how our conclusions change in inhibition-dominated networks (α > ᾱ ≡ n_E/n_I). At the origin, the location of the branch point is still given by g*,b = √N/(αμ_E(1 − b_I)); since α > ᾱ, the critical coupling value decreases, that is, g*,b,in < g*,b.
In Eqs. (9) and (10), the condition n_E = αn_I has been used; to remove this restriction, replace any instance of n_E in the right column of J̃ with αn_I. The condition for a Hopf bifurcation to occur at the origin now becomes (using the trace of J̃ from Eq. (9))

g_H = 2√N / (μ_E[(n_E − 1) − α(n_I − 1)]),

which requires (n_E − 1) − α(n_I − 1) > 0, that is, α − ᾱ < (α − 1)/n_I. Thus the Hopf bifurcation still occurs as long as inhibition is not too strong (as measured by α − ᾱ); however, this condition depends on N.

A Bifurcation-Preserving Reduced-Order Model
In this section, we show that we can construct a reduced-order model that preserves the dynamics and bifurcation structure of the full system, but with a dramatic reduction in the number of degrees of freedom. For a cortex-like ratio of E to I cells, the interesting bifurcations involve the eigenvalues associated with the inhibitory cells or the complex pair. As a result, all the "action" is in the I cells, with the E cells always perfectly synchronized. We can formalize this as follows.

Lemma 3 Any fixed point or periodic solution of (1)–(3) with ε = 0 has a synchronized excitatory population, that is, x_j(t) = x_k(t) for any j, k ≤ n_E.
Proof Consider the activity of two distinct E cells, say x_1 and x_2 (we take b_E = 0 here; for 0 < b_E ≤ 1 the second term below acquires a factor 1 − b_E and the argument is unchanged). Then

d/dt (x_1 − x_2) = −(x_1 − x_2) − (μ_E/√N)[tanh(gx_1) − tanh(gx_2)] (17)
= −(x_1 − x_2) − (μ_E/√N) tanh(g(x_1 − x_2))[1 − tanh(gx_1) tanh(gx_2)]. (18)

The first numbered line, (17), contains so few terms because everything depending on the other variables (x_3, and so forth) cancels out; the second numbered line, (18), uses a sum identity for the tanh function. Then

(1/2) d/dt (x_1 − x_2)² ≤ −(x_1 − x_2)²,

with equality if and only if x_1 = x_2. In the last line, we use the facts that x tanh(gx) ≥ 0 and 1 − tanh(gx) tanh(gy) ≥ 0 for any real numbers x, y and g > 0. Therefore the distance |x_1 − x_2| always decreases along a trajectory, unless already x_1 = x_2.
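The contraction in Lemma 3 is easy to observe in simulation (a sketch; all parameter values are illustrative): from a random initial condition, the spread among the excitatory coordinates collapses at least as fast as e^{−t}, even at a coupling value (g = 2 > g*) where the network does not settle to the origin.

```python
import numpy as np

rng = np.random.default_rng(1)
n_E, n_I, mu_E, alpha, g = 16, 4, 1.0, 4.0, 2.0
N = n_E + n_I
col = np.concatenate([np.full(n_E, mu_E), np.full(n_I, -alpha * mu_E)])
H = np.tile(col, (N, 1))
np.fill_diagonal(H, 0.0)                # b_E = b_I = 0
G = H / np.sqrt(N)

x = rng.standard_normal(N)
spread0 = np.ptp(x[:n_E])               # max - min over the excitatory cells
dt = 1e-2
for _ in range(3000):                   # forward Euler to T = 30
    x = x + dt * (-x + G @ np.tanh(g * x))
spread = np.ptp(x[:n_E])                # E population is now synchronized
```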
As a consequence, any fixed point or periodic solution present in the full system is also present in the following reduced system, where we collapse all of the excitatory degrees of freedom into a single variable x_E (assuming no self-coupling, b_E = b_I = 0):

ẋ_E = −x_E + (1/√N)[μ_E(n_E − 1) tanh(gx_E) + μ_I Σ_{k=1}^{n_I} tanh(gx_{I_k})],
ẋ_{I_k} = −x_{I_k} + (1/√N)[μ_E n_E tanh(gx_E) + μ_I Σ_{j≠k} tanh(gx_{I_j})], k = 1, …, n_I.

Here, n_I is fixed, whereas N = (α + 1)n_I is a parameter of the system. This allows us to explore solutions in an (n_I + 1)-dimensional system rather than an (α + 1)n_I-dimensional system.

We first demonstrate the bifurcation structure on a small network with N = 20, in which we can comfortably confirm our findings on the full system with numerical continuation (all diagrams shown here were computed using MATCONT [28]). We treat the global coupling strength g as our bifurcation parameter; the origin is an equilibrium point for all g. At g* = √N/(αμ_E), n_I − 1 eigenvalues pass through the origin (λ = 0). This is a branch point: because of symmetry, there exists a branch corresponding to each possible split of the I cells into two clusters. In Fig. 1A, we show the solution branches that arise in the N = 20 system (up to symmetry; that is, although there are four possible 3-1 splittings, we display only one here). Because there are four inhibitory cells, there are two possible splits, 3-1 and 2-2. Both have a branch that originates from the branch point on the origin at g* = √N/(αμ_E) (in Fig. 1B, these are labeled "3-1" and "2-2*", respectively). Along the 2-2 branch, the E cells have zero activity (this is generally the case when the I cells split into two equal clusters). Both branches then have a Hopf bifurcation from which a branch of limit cycles emerges, unstable in the 3-1 case and stable in the 2-2 case. The resulting limit cycle respects the clustering, but the E cell activity is no longer zero in the 2-2 case.
The 2-2 branch has a further branch point, at which the new branch breaks the odd symmetry (i.e., x_{I_2} = −x_{I_1}) and the E cell activity moves away from zero. One further branch occurs, in which one of the two cell clusters breaks apart, resulting in a 2-1-1 clustering. Why did the 2-1-1 branch come off the 2-2 branch, rather than the 3-1 branch? At this time, we have no principled answer. Finally, the origin has a Hopf bifurcation in which the E and I cells separately cluster (we will refer to this as the "E/I" limit cycle).
We next perform the same continuation on the corresponding reduced (five-dimensional: x_E and x_{I_1}, …, x_{I_4}) system. The equilibrium branch structure is shown in Fig. 1C. Up to a permutation of the inhibitory coordinate labels (we do not force the same cell cluster identities to be tracked in both continuations), the curves are identical.
Returning to the full system, we now consider the limit cycles that emerge from the three Hopf bifurcations we identified (on the 3-1 branch, the 2-2 branch, and the origin). In Fig. 2A, we plot the period vs. the coupling parameter g. In Fig. 2B, we can see that the 3-1 branch is stable for g ≲ 2.8 and the 2-2 branch for g ≲ 3.

A Larger System: n I = 10
The real power of the reduced-order model becomes evident when we increase the network size. We now show results for n_I = 10; note that, for α = 4, we reduce the dimension of the system from 50 to 11. In Fig. 4A, we show the equilibrium branches found in this system; the same diagram is plotted in the (g, x_{I_10}) plane, with labeled curves, in Fig. 4B. We note that most of these branches are cases where splitting is minimal, that is, a single cluster breaks into two (rather than into three). This confirms our intuition from the equivariant branching lemma, which guarantees the existence of a unique branch of solutions for each subgroup Σ whose fixed-point subspace within the kernel of the Jacobian at the bifurcation point has the right dimension, dim Fix(Σ) = 1. (A more general result extends a version of this statement to cases of odd dimension [29].) At the origin, for example, the kernel at the branch point is (n_I − 1)-dimensional:

v = (0, u), u ∈ R^{n_I}, u^T 1 = 0. (21)

In this case, v = (0, v_1, …, v_10) with v_1 + ⋯ + v_10 = 0.

Fig. 4 Equilibrium solution branches in the "x_I + 1" system. In this computation, n_I = 10; with the chosen parameters, this is equivalent to a full system with N = 50. Two different viewpoints are shown (panels A and B). Markers indicate Hopf bifurcations (red asterisks), branch points (black triangles), and neutral saddles (green crosses).
However, this lemma does not exclude the possibility of other solution types, and little is known in general about fixed-point subspaces of even dimension: such solutions have been found in some systems (e.g., [29]), but there is currently no general theory guaranteeing or ruling out such solutions [30]. In this system, at least one branch, the 2-3-5 branch, corresponds to a subgroup Σ for which dim Fix(Σ) = 2. We next look at the limit cycles that emerge from Hopf bifurcations on each branch emanating from the origin. Period decreases with g (Fig. 5A). As in the N = 20 system, each branch terminates where it collides with the E/I limit cycle branch that emerges from the Hopf point at the origin (Fig. 5B).

Reduced System: 3-Cluster (x_E, x_I1, x_I2)
We can gain additional insight into arbitrarily large systems by reducing (1) under the assumption of a three-cluster grouping into populations of sizes n_E, n_I1, and n_I2, whose activities are given by x_E, x_I1, and x_I2, respectively; this yields the reduced system (23)-(25). We can also parameterize the clustering with α and β such that n_E = (α/(α+1)) N, n_I1 = (β/(β+1)) (1/(α+1)) N, and n_I2 = (1/(β+1)) (1/(α+1)) N; that is, β gives the ratio of n_I1 to n_I2, just as α gives the ratio of n_E to n_I. Here, we can treat N, α, and β as continuously varying bifurcation parameters. When N, N/(α+1), and N/((β+1)(α+1)) are all positive integers, the reduced system (23)-(25) lifts onto an N-cell network.

Fig. 6 Behavior of three-cluster solutions, for equal-size inhibitory clusters (n_I2 = n_I1). A Activity levels on the I_1/I_2 solution branch. Colors cycle through N = 10, 20, 30, 40, 50, 60, 80, 100, 120, 140, 160, 200, 240, 280, 300, 400, 500, 600, 700, 800, and 1000 (note: n_I = N/5 must be a multiple of 2). B Bifurcation values g*, g_H, and Hopf frequency ω(g_H) as functions of N.
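The (N, α, β) parameterization above is easy to make concrete. The helper below (our own illustrative code, using exact rational arithmetic) computes the three cluster sizes and tests the lifting condition, i.e., whether all of them are positive integers.

```python
from fractions import Fraction

def cluster_sizes(N, alpha, beta):
    """Split N cells into (n_E, n_I1, n_I2) with n_E/n_I = alpha
    and n_I1/n_I2 = beta, following the parameterization in the text."""
    n_E  = Fraction(alpha, alpha + 1) * N
    n_I1 = Fraction(beta, beta + 1) * Fraction(1, alpha + 1) * N
    n_I2 = Fraction(1, beta + 1) * Fraction(1, alpha + 1) * N
    return n_E, n_I1, n_I2

def lifts(N, alpha, beta):
    """The reduced system lifts onto an N-cell network only when all
    three cluster sizes are positive integers."""
    return all(s.denominator == 1 and s > 0
               for s in cluster_sizes(N, alpha, beta))

# alpha = 4, beta = 1: equal inhibitory clusters, n_I = N/5 even
print(cluster_sizes(20, 4, 1))          # (16, 2, 2)
print(lifts(20, 4, 1), lifts(25, 4, 1)) # True False
```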

Scaling with System Size
We can use this reduced system to explore how the system behaves as N increases. The system in Eqs. (23)-(25) allows N to be a continuously varying parameter; therefore, we can vary N while holding all other parameters fixed. Notably, we keep β fixed; thus, we track the behavior of a specific partition ratio of inhibitory cells (such as 1-to-1 or 3-to-1) as N increases. When N, N/(α+1), and N/((β+1)(α+1)) are all positive integers, the reduced system lifts onto an N-cell network; at each such N, we can track the I_1/I_2 fixed-point branch from the known bifurcation point g* = √N μ_E/α. In Fig. 6A, we show (x_E, x_I1, x_I2) as a function of g for the partition n_I1 = n_I2. Colors cycle through N; for each N, the curves from top to bottom indicate x_I1, x_E, and x_I2. We can also locate the Hopf bifurcation along this branch at g_H and measure the frequency of the periodic solutions that emerge at that point. We plot these quantities in Fig. 6B: we can see that each scales like √N. In Fig. 7, we show the same quantities computed for two more examples, 1-to-4 and 2-to-3 partitions, respectively; the √N scaling of both g_H and ω(g_H) persists for these different partitions.
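The √N scaling can be made explicit in a few lines of code. The grouping of the bifurcation-point formula as g* = (√N/α) μ_E is our reading of the expression quoted in the text; either grouping of the factors leaves the √N scaling unchanged.

```python
import math

mu_E, alpha = 0.7, 4   # parameters used in the text's examples

def g_star(N):
    # Assumed grouping of the quoted expression g* = sqrt(N)/alpha mu_E
    return math.sqrt(N) / alpha * mu_E

for N in (50, 200, 800):
    print(N, g_star(N))

# Quadrupling N doubles g*: the sqrt(N) scaling made explicit
assert abs(g_star(800) / g_star(200) - 2.0) < 1e-12
```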
The √N scaling of g*, g_H, and ω(g_H) yields insight into the expected behavior of these solutions. First, we should expect these oscillations to become less observable as N increases: g* will eventually reach unphysical values. Second, we should expect the oscillations to become faster as N increases, also eventually reaching an unphysical frequency. Thus, we expect the phenomenon we describe here to be most relevant for small-to-medium N. In the next section, we show that we can easily find an example for N = 200; the oscillation period in that example is comparable to the membrane time constant, which is a reasonable upper bound for the attainable frequency.

Demonstration of Relevance to Random Networks (ε > 0)
We next demonstrate that the bifurcation structure we have described can explain low-dimensional dynamics in example random networks. We return to Eqs. (1) and (2) but now let ε > 0. The right-hand side of Eq. (1) can readily be shown to be locally Lipschitz continuous in R^N; thus, solutions will vary continuously as a function of parameters (such as ε). In particular, we can expect a hyperbolic periodic orbit at ε = 0 to persist for some range of ε ∈ [0, ε_0); here we numerically demonstrate this persistence.

Fig. 7 Behavior of three-cluster solutions as system size N increases. A-B Clusters in which inhibitory cells break into groups with size ratio 1 to 4 (n_I2 = 4 n_I1). A Activity levels on the I_1/I_2 solution branch. Colors cycle through N = 25, 50, 75, 100, 125, 150, 200, 250, 300, 400, 500, 600, 700, 800, and 1000 (n_I = N/5 must be a multiple of 5). B Bifurcation values g*, g_H, and Hopf frequency ω(g_H) as functions of N. C-D Solution branch in which inhibitory cells break into groups with size ratio 2 to 3 (n_I2 = (3/2) n_I1). C Activity levels on the I_1/I_2 solution branch. Colors cycle through the same values of N. D Bifurcation values g*, g_H, and Hopf frequency ω(g_H) as functions of N.
We chose parameters μ_E = 0.7, σ_E² = 0.625, and σ_I² = 2.5. (For ε = 1, the off-diagonal entries of the resulting random matrices are chosen with the same means and variances as in [11].) We performed a series of simulations in which we fixed A and computed solution trajectories for a range of ε between 1 and 0. As ε decreases, the network connectivity matrix transitions from full heterogeneity (similar to [11]) to the deterministic case studied earlier.
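The simulations can be sketched as follows. The firing-rate form dx/dt = −x + (g/√N)(H + εA) tanh(x) is an assumed stand-in for Eqs. (1)-(2), which are not reproduced in this section, and the rank-one balanced mean matrix H below omits the self-coupling adjustment discussed later; the variances follow the text.

```python
import numpy as np

# Assumed stand-in dynamics: dx/dt = -x + (g/sqrt(N)) (H + eps*A) tanh(x).
rng = np.random.default_rng(0)
N, g, n_E = 20, 3.0, 16            # alpha = 4  ->  f = n_E/N = 0.8
mu_E = 0.7
mu_I = 4 * mu_E                    # balance: n_E * mu_E = n_I * mu_I
s2 = np.r_[np.full(n_E, 0.625), np.full(N - n_E, 2.5)]  # sigma_E^2, sigma_I^2

# Rank-one mean matrix (simplified H) and quenched random part A
H = np.tile(np.r_[np.full(n_E, mu_E), np.full(N - n_E, -mu_I)], (N, 1))
A = rng.normal(0.0, np.sqrt(s2), size=(N, N))           # column-wise std

def simulate(eps, T=100.0, dt=0.01):
    """Forward-Euler trajectory from a fixed initial condition."""
    W = (g / np.sqrt(N)) * (H + eps * A)
    x = 0.1 * np.ones(N)
    for _ in range(int(T / dt)):
        x = x + dt * (-x + W @ np.tanh(x))
    return x

x_det, x_full = simulate(0.0), simulate(1.0)   # eps = 0 vs. eps = 1
print(np.abs(x_det).max(), np.abs(x_full).max())
```

Because tanh is bounded, trajectories remain bounded for any ε, which makes it straightforward to compare the ε = 0 and ε = 1 solutions from the same A and initial condition.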
In Figs. 8A-B, we show two examples of random networks with size N = 20 and g = 3. For ε = 0 (bottom panel), we indeed see a three-cluster solution as predicted. Consistent with our earlier results on limit cycle stability, we see the 3-1 clustering rather than the 2-2 clustering (in both examples, n_I1 = 1 and n_I2 = 3). The same periodic solution persists as ε increases and is still recognizable at ε = 1, illustrated in the top panel (we note that because of the odd symmetry of the governing equations, −x is also a valid trajectory and appears as a reflection across the time axis). In Fig. 8B, the period of the oscillations discernibly increases with ε.
In Fig. 8C, we show an example from a larger system with N = 200. Here g = 6; note that a larger coupling value is needed to exceed the bifurcation of the origin at g* = √N μ_E/α. A periodic trajectory is evident in all panels. As in the smaller examples, the period of oscillations increases with ε. To highlight that this structure is caused by the mean connectivity H, we repeat the sequence of simulations, integrating the system without the mean matrix H. The results are shown in Fig. 8D: the same A, initial condition x_0, values of ε, and coupling parameter g were used; therefore the only difference between each panel in Fig. 8D and its counterpart in Fig. 8C is the absence of the mean connectivity matrix H. Without H, the origin is stable for sufficiently small ε (for g = 6, ε² < 1/36); hence the zero solutions in the bottom two panels. As ε² increases beyond that value, we see a fixed point, followed by periodic and then apparently chaotic solutions (for ε² > 2⁻², a decomposition of the trajectories in terms of principal components requires a large number of orthogonal modes, here in excess of 25). In addition, the characteristic timescale is much longer than in Fig. 8C (note the difference in the time axes).
Finally, we can contrast the nonlinear behavior with the predicted linear behavior by examining the spectra of the connectivity weight matrices. In Figs. 8E and 8F, we plot the eigenvalues of (H + εA)/√N and εA/√N, respectively, for the specific networks shown in Figs. 8C-D and for several values of ε. When ε = 0, the eigenvalues in Fig. 8E cluster into two locations on the real axis, with the exception of one complex pair, as discussed in Example 1. In contrast, the eigenvalues in Fig. 8F all lie at zero for ε = 0. As ε increases, the eigenvalues "fan out" from their point locations until they fill a disc of radius g (here, g = 6). At ε = 1, both matrices have dozens of eigenvalues in the right-half plane.
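The "fan-out" of the purely stochastic spectrum (as in Fig. 8F) can be reproduced in a few lines. With the text's variances (σ_E² = 0.625, σ_I² = 2.5, f = 0.8), the mean column variance is exactly 1, so by the circular law the eigenvalues of gεA/√N fill a disc of radius approximately gε; the matrix construction is our own minimal sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_E, g = 200, 160, 6.0        # f = 0.8, alpha = 4, as in Fig. 8C-D
# Column-wise variances: excitatory 0.625, inhibitory 2.5
# (mean variance = 0.8*0.625 + 0.2*2.5 = 1, giving circular-law radius ~1)
s2 = np.r_[np.full(n_E, 0.625), np.full(N - n_E, 2.5)]
A = rng.normal(0.0, np.sqrt(s2), size=(N, N))

for eps in (0.25, 0.5, 1.0):
    ev = np.linalg.eigvals(g * eps * A / np.sqrt(N))
    # Spectral radius grows linearly in eps, approaching g at eps = 1
    print(eps, np.abs(ev).max())
```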
The similarity in appearance between the spectra illustrated in Figs. 8E and 8F is partially the result of network balance (i.e., that α = f/(1−f)). The stochastic part of the connectivity matrix G is scaled so that its spectral radius is constant with N; however, as we noted after Example 1.2, the real parts of the eigenvalues of the deterministic matrix H/√N scale like O(1/√N). Therefore, we expect the stochastic part of the connectivity matrix to dominate the deterministic part as N → ∞ when the network is balanced.

Discussion
In summary, we studied a family of balanced excitatory-inhibitory firing-rate networks that satisfy Dale's law for arbitrary network size N. When there is no variance in synaptic connections, so that each excitatory connection has strength μ_E/√N and each inhibitory connection has strength μ_I/√N, we find a family of deterministic solutions whose existence can be inferred from the underlying symmetry structure of the network. These solutions persist in the dynamics of networks with quenched variability (that is, variance in the connection strengths), even when the variance is large enough that the envelope of the spectrum of the connectivity matrix approaches that of a Gaussian matrix. This offers a striking example in which linear stability theory is not useful in predicting transitions between dynamical regimes. Given the increasing interest in network science and, in particular, networked dynamical systems, such observations concerning the impact of connectivity symmetries can be extremely valuable for studying stability, bifurcations, and reduced-order models.

Role of the Deterministic Perturbation H
Gaussian matrices are a familiar object of study in the random-matrix community, where Hermitian random matrices are motivated by questions from quantum physics. Rajan and Abbott [11] studied balanced rank-one perturbations of Gaussian matrices and found that the spectrum is largely unchanged. These results have since been extended to more general low-rank perturbations [12,13]. More recently, Ahmadian et al. [15] studied general deterministic perturbations in the context of neural networks. Similarly, recent work has studied extremal values of the spectrum of matrices with modular structure similar to that found here [14]. Our system is not low-rank: in fact, the (seemingly trivial) change in self-coupling makes the deterministic weight matrix full-rank, as we see from Lemma 2. Using the procedure developed by Ahmadian et al. [15], we can numerically compute the support of the spectrum for ε > 0 (not shown): as ε grows, this spectral support appears to approach that predicted for a Gaussian matrix or a low-rank perturbation thereof.
However, the more fundamental issue here is that, except for predicting when the origin becomes unstable, the spectrum of the full connectivity matrix is not particularly informative about the nonlinear dynamics. Instead, it is the spectrum of the deterministic perturbation that emerges as crucial: the locations of the eigenvalues of this matrix can be used to predict the existence of a family of steady states and limit cycles with very specific structure. In the examples presented here (Fig. 8), these low-dimensional solutions persist even when ε is large enough that the spectrum of G is visually indistinguishable from that of a Gaussian matrix.
It is instructive to compare our findings with the recent results of del Molino et al. [20], who consider a balanced excitatory-inhibitory system with a similar 1/√N scaling of the mean weights. Those authors find a slow noise-induced oscillation; as in our results, this oscillation arises despite an unstable connectivity matrix. The two systems differ in the deterministic perturbation: del Molino et al. include self-coupling (their deterministic matrix is rank one), which yields trivial deterministic dynamics without a driving current (in the sense of Example 1.2); thus, they do not see the dynamics described here. Conversely, we do not enforce "perfect balance" Σ_j G_ij = 0, which they find to be a necessary condition for the slow oscillation to exist; thus, we do not see the oscillations described in that paper. The two works therefore present distinct examples of dynamics arising in excitatory-inhibitory systems with 1/√N mean-weight scaling in which linear stability of the connectivity matrix is not informative about the nonlinear dynamics.

Relationship to Other Work
The reduced system described in Sect. 4.2 is similar to a simple version of the Wilson-Cowan equations [31,32] (recently reviewed in [33,34]). These equations can be interpreted in terms of coupled neural populations and can be derived as a mean-field limit of large networks. A bifurcation analysis of such a mean-field model was performed recently by Hermann and Touboul [17]. Our system differs in two important ways. First, the strong coupling (1/√N) means that a factor of √N remains in the reduced equations. Hermann and Touboul, in contrast, pick J_ij ∼ N(J/N, σ/√N); therefore the mean connection strength (J/N) goes to zero faster than the typical deviation from this mean (σ/√N): as N becomes large, outgoing synapses are no longer single-signed, in violation of Dale's law. Similarly, Kadmon and Sompolinsky [19] analyze random diluted networks; they show the equivalence to all-to-all Gaussian networks with nonzero mean connections that scale like J/N. If the number of synaptic connections per population is held constant, then dynamic mean-field theory yields predictions for stability valid as N → ∞.
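The scaling contrast can be made explicit. Under our reading of the Hermann-Touboul distribution as J_ij ∼ N(J/N, σ/√N) (mean J/N, standard deviation σ/√N), the mean-to-fluctuation ratio of a single weight vanishes as N grows, whereas with the 1/√N mean scaling used here (mean μ_E/√N, fluctuation scale εσ_E/√N per weight, our notation) it stays order one:

```latex
\frac{\mathbb{E}[J_{ij}]}{\sqrt{\operatorname{Var}(J_{ij})}}
  = \frac{J/N}{\sigma/\sqrt{N}}
  = \frac{J}{\sigma\sqrt{N}}
  \;\xrightarrow{\;N\to\infty\;}\; 0,
\qquad\text{versus}\qquad
\frac{\mu_E/\sqrt{N}}{\epsilon\,\sigma_E/\sqrt{N}}
  = \frac{\mu_E}{\epsilon\,\sigma_E} = O(1).
```

This is why, for large N, almost every outgoing synapse in the Hermann-Touboul setting changes sign relative to its mean, violating Dale's law, while in our setting the sign of each connection is preserved with fixed probability.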
In contrast, the reduced system in Sect. 4.2 has no nontrivial limit as N → ∞ and is not a limit or a system average; rather, it simply gives the reduced dynamics on a specific invariant subspace. Ultimately, every solution of the reduced system is also an exact solution of the original system. The parameter β allows a single set of equations to capture arbitrary bisections of the inhibitory population; in principle, adding more equations would allow further branches to be captured. As another consequence of this scaling, the locations of the bifurcations g* and g_H and the expected frequency of oscillations ω(g_H) scale like √N; arguably, g* and ω(g_H) will reach unphysical levels as N becomes large.
Finally, the stronger mean scaling may underlie another difference from previous work: analyzing networks with 1/N scaling, other authors have found population-level oscillations via Hopf bifurcations in reduced equations for the mean activity [35,36]. However, in those works the oscillations are not necessarily observable at the level of individual-cell activity (e.g., Fig. 3 in [36]); here, we have distinct cell-level and population-level oscillations.
Analysis of spontaneous symmetry breaking enjoys a rich history in mathematical biology and, in particular, in mathematical neuroscience. However, the literature we are aware of identifies symmetry-breaking in structured networks dominated by deterministic behavior. For example, symmetry breaking has been hypothesized to underlie the dynamics of visual hallucinations [37] and ambiguous visual percepts [38], central pattern generators that govern rhythmic behaviors of breathing, eating, and swimming [39][40][41], and periodic head/limb motions [42][43][44]. Most recently, Kriener et al. [45] investigate a Dale's law-conforming orientation model and find that the dynamics are affected by a translation symmetry imposed by the regularity of the cell grid. In contrast, the present paper identifies an important role for symmetry in a family of networks usually thought of as dominated by randomness.

Future Directions
In this paper, we have focused on analyzing the deterministic system underlying a family of Dale's-law-conforming networks. However, our ultimate interest is in the perturbation away from this system: a full characterization of the dynamics remains to be completed. Thus far, we have observed more variable behavior in constrained than in Gaussian networks: at the same coupling parameter g, individual networks display behavior ranging from periodic (as in Fig. 8C) to chaotic, suggesting that this task will be more subtle than for Gaussian networks (see also [20]). Future work will examine this in more detail.
Recent research has focused on the computational power of random networks in the (nominally unpredictable) chaotic regime. Such networks enjoy high computational power because their chaotic dynamics give them access to a rich, complicated phase space, which can be exploited during training to perform complex tasks [21,46]. It is an open question whether the structure of the networks examined here affects their computational performance on tasks that have previously been studied in Gaussian or other random ensembles. One preliminary study has yielded intriguing results [47]: we integrated networks with one of two oscillatory forcing terms, I_1(t) and I_2(t), as described in [23], and compared the performance of these networks on two computational tasks: encoding the network-averaged firing rate with a subpopulation, and discriminating the two inputs in phase space. As expected, Gaussian networks performed worse than constrained networks at encoding population firing rates (similarly to what was observed in the balanced networks studied by [23]). However, this difference could not be explained solely by the dimensionality of the solution trajectories (as measured by principal component analysis): constrained networks performed better even though they required as many principal components as Gaussian networks to explain their solution trajectories. For the second task, we observed that for constrained networks the trajectories associated with I_1 and I_2 clustered in distinct regions of principal-component phase space; no such clustering was observed for Gaussian networks.
Finally, the ideas explored here can be applied to more general network symmetries: for example, a network with several excitatory clusters and global inhibition, or several weakly connected balanced networks [48]. This will both introduce realism and allow us to explore whether there are universal features implied by the broad characteristics of realistic neural-network symmetries, such as cortex-like excitatory/inhibitory ratios, the spatial specificity of excitatory vs. inhibitory connections, and so forth. We look forward to reporting on this in future work.
This last direction, in particular, promises to provide further insight into the study of stability and bifurcations in reduced-order models. The work in this paper has highlighted how low-dimensional models of high-dimensional networks can be used to understand the underlying bifurcation structures resulting from network connectivity. Such studies are directly relevant to neuroscience, where the input-output functionality of extremely high-dimensional networks has been demonstrated to be encoded dynamically in low-dimensional subspaces [49][50][51][52][53][54][55]. We hope that studies such as this can help highlight both methods for characterizing the collective behavior of networked neurons and the limits of traditional mathematical methods in determining the stability of such systems. In either case, the results suggest that further study is needed to understand the role of connectivity in driving network-level dynamics.