Spatially extended balanced networks without translationally invariant connectivity

Networks of neurons in the cerebral cortex exhibit a balance between excitation (positive input current) and inhibition (negative input current). Balanced network theory provides a parsimonious mathematical model of this excitatory-inhibitory balance using randomly connected networks of model neurons in which balance is realized as a stable fixed point of network dynamics in the limit of large network size. Balanced network theory reproduces many salient features of cortical network dynamics such as asynchronous-irregular spiking activity. Early studies of balanced networks did not account for the spatial topology of cortical networks. Later works introduced spatial connectivity structure, but were restricted to networks with translationally invariant connectivity structure in which connection probability depends on distance alone and boundaries are assumed to be periodic. Spatial connectivity structure in cortical networks does not always satisfy these assumptions. We use the mathematical theory of integral equations to extend the mean-field theory of balanced networks to account for a more general dependence of connection probability on the spatial location of pre- and postsynaptic neurons. We compare our mathematical derivations to simulations of large networks of recurrently connected spiking neuron models.


Introduction
Balanced networks [1,2] offer a parsimonious computational and mathematical model of the asynchronous-irregular spiking activity and excitatory-inhibitory balance that are ubiquitous in cortical neuronal networks [3][4][5][6][7][8][9][10]. Balanced networks can produce asynchronous and irregular activity through chaotic or chaos-like spike timing dynamics [2,11]. Mean-field analysis of balanced networks reveals a stable fixed point that naturally produces excitatory-inhibitory balance and weak pairwise spike train correlations without fine tuning of model parameters. In early studies, this mean-field analysis was performed in networks in which connection probabilities are homogeneous across the excitatory and inhibitory populations [1]. Later work extended the mean-field analysis to networks with multiple sub-populations and to spatially extended networks with distance-dependent connection probabilities [12][13][14][15][16], including models that combine physical and tuning space [17]. Previous work on spatially extended balanced networks assumed that connection probabilities depended only on the distance between neurons measured with periodic boundaries, rendering connection probabilities translationally invariant. This assumption allows the Fourier modes of network activity to decouple, so the mean-field equations can be easily solved in the Fourier domain. However, connectivity in cortical neuronal networks is not so simple. While the use of periodic boundaries is justified for modeling naturally periodic spaces like orientation tuning space, it is not necessarily realistic for models of physical space. Moreover, connection probabilities in cortical neuronal networks can depend on neuron location in more complicated ways than a pure distance dependence [18][19][20].
We use the mathematical theory of integral equations [21] to extend the mean-field theory of firing rates in balanced networks, permitting a more general dependence of connection probability on the spatial location of pre- and postsynaptic neurons. We derive conditions on the spatial structure of the connectivity and external input under which networks can maintain balance in the limit of large network size, derive the spatial profile of firing rates in this limit when balance is maintained, and derive a linear approximation to firing rates when balance is broken. We demonstrate our findings with simulations of large spiking networks under a simple spatial connectivity structure that violates the translational invariance of connection probabilities assumed by previous work.

Model and background
We consider recurrent networks of N model neurons, N e = 0.8N of which are excitatory and N i = 0.2N inhibitory. The membrane potential of neuron j in population a = e, i obeys the exponential integrate-and-fire dynamics with the added condition that each time V a j (t) exceeds V th , it is reset to V re , held for a refractory period of τ ref , and a spike is recorded at the threshold crossing time. The synaptic input current to neuron j in population a is given by where t a,j n is the nth spike time of neuron j in population a = e, i and α b (t) = (1/τ b )e -t/τ b H(t) is a postsynaptic current waveform, and H(t) is the Heaviside step function.
We consider a network on the compact domain [0, 1] with neuron j = 1, . . . , N a in population a = e, i located at x = j/N a . Connection strengths are defined by where x = j/N a ∈ Ω is the location of (postsynaptic) neuron j in population a and y = k/N b ∈ Ω is the location of (presynaptic) neuron k in population b. We assume that j ae > 0 and j ai < 0 with all j ab constant with respect to N . The model and mean-field theory are easily extended to the case where connection strength and neuron density are spatially inhomogeneous (see below). The 1/ √ N scaling of synaptic weights is a defining feature of balanced networks which naturally captures excitatory-inhibitory balance and asynchronous-irregular spiking activity observed in cortical recordings [1,2,22]. Recent work in cultured cortical populations shows that similar scaling laws emerge naturally and produce network dynamics consistent with the balanced state [23]. Feedforward external input to the network is modeled by where x ∈ Ω is the location of neuron j in population a. This models the input from O(N) neurons with synaptic weights that scale like O(1/ √ N). Indeed, external input can be replaced by a population of generated spike trains without affecting the mean-field theory [17].
We consider a simple example in which Ω = [0, 1] and where p̄ ab = ∫ Ω ∫ Ω p ab (x, y) dx dy is the network-averaged connection probability from b to a. Note that we use the convention that p ab , j ab , etc. refer to connection probabilities and strengths from presynaptic population b to postsynaptic population a. Note that, unlike spatial balanced networks considered in previous work [12,17], p ab (x, y) is not translationally invariant and cannot be written in terms of x − y. Specifically, neurons near the center of the domain send and receive more connections than neurons near the edges. We first simulated this network with N = 5000 neurons and where F a > 0. Our simulation produced asynchronous, irregular spiking activity (Fig. 1(A)) and excitatory-inhibitory balance (Fig. 1(B)) that are characteristic of the balanced network state [1,2] and of cortical neuronal networks [3][4][5][6][7][8][9][10]. Note that the simulations are entirely deterministic once the random network connectivity and initial conditions are specified, so irregular spiking is driven by chaotic or chaos-like dynamics [2,11], not noise. Firing rates were peaked near the center of the domain and decayed toward zero near the edge for various values of N (Fig. 1(C)-(E)). Firing rates that are peaked at the center of the spatial domain are not unexpected, given the structure of our connectivity kernel and input, and are common across many models of spatially extended networks [12,24,25]. We next derive a mean-field theory for computing firing rates in spatially extended balanced networks like this one.
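As a concrete illustration, this kind of connectivity can be sampled directly. The sketch below assumes a connection probability of the form p(x, y) = 12 p̄ (min(x, y) − xy), consistent with the kernel used in our analysis below; the value of p̄ and the population size are arbitrary illustrative choices. It confirms that neurons near the center of the domain receive more connections than neurons near the edges:

```python
import numpy as np

# Sample connectivity under the assumed probability
# p(x, y) = 12 * pbar * (min(x, y) - x*y); the factor 12 normalizes the
# kernel so its average over the unit square is pbar (the integral of
# min(x, y) - x*y over [0, 1]^2 equals 1/12). pbar and N are illustrative.
rng = np.random.default_rng(0)
N = 2000
pbar = 0.1
x = np.arange(1, N + 1) / N                   # neuron locations on [0, 1]

P = 12 * pbar * (np.minimum.outer(x, x) - np.outer(x, x))
A = rng.random((N, N)) < P                    # sampled adjacency (post x pre)
indeg = A.sum(axis=1)

center = indeg[(x > 0.4) & (x < 0.6)].mean()  # ~300 expected under these values
edge = indeg[(x < 0.1) | (x > 0.9)].mean()    # far fewer near the boundary
print(center > edge)  # True
```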

Mean field theory of balance in spatially extended networks without translational invariance
The mean-field theory is developed by considering the large N limit and defining the mean-field firing rates where r a (x) is the expected firing rate of a neuron in population a at location x ∈ Ω. The mean total input I(x) = [I e (x), I i (x)] T and the feedforward external input F(x) = [F e (x), F i (x)] T are defined analogously. We assume that F a ∈ L 2 (Ω).
The following equation gives a heuristic approximation to neurons' input as a function of firing rates in the network [12,17]: I(x) = √N [Wr(x) + F(x)]. (5) Here, W is an integral operator defined by [Wr] a (x) = Σ b [W ab r b ](x) where [W ab r b ](x) = ∫ Ω w ab (x, y)r b (y) dy (6) and w ab (x, y) = j ab p ab (x, y)q b (7) is the mean-field connectivity kernel with q e = N e /N = 0.8, and q i = N i /N = 0.2. We assume that w ab ∈ L 2 (Ω 2 ) so that W ab and W are Hilbert-Schmidt integral operators and are, therefore, compact [21]. We further assume that w ab (x, y) = w ab (y, x) so that W ab is a Hermitian integral operator (see Discussion for comments on relaxing some of these assumptions).
Note that W and F do not depend on N , but r and I do depend on N . In the balanced network theory below, we derive expressions for r that must be satisfied for ‖r‖ and ‖I‖ to remain finite in the N → ∞ limit (where ‖ · ‖ is the L 2 norm).
Equation (5) quantifies how firing rates are mapped to mean synaptic input in the network. A closed form approximation in Eq. (5) is possible in this case because the mapping is linear. However, the mapping from synaptic input to firing rates is necessarily nonlinear, depends on the details of the neuron model, can depend on higher moments of the input currents, and is generally not amenable to a closed-form mathematical approximation for the spiking network model considered here. Some studies use a diffusion approximation and Fokker-Planck formalism to account for the dependence of neurons' firing rates on the first two moments of their input currents [26][27][28]. This approach can yield accurate approximations in practice, but makes the assumption that synaptic input is accurately approximated by Gaussian white noise, which may be inaccurate in some settings.
The mean-field theory of balanced networks offers an alternative approach to analyzing firing rates in which the mapping from synaptic input statistics to rates does not need to be known. This theory is developed and applied by analyzing Eq. (5) and the integral equations implied by it, which serves as a heuristic approximation to our spiking network model. We then compare the results of our analysis to simulations of the spiking network model. For balanced network theory, we do not need to specify the exact mapping from I to r. Instead, the only necessary condition for the analysis of balanced networks is that the mapping does not converge to zero or diverge with N ; more specifically, that r, I ∼ O(1) (8) as N → ∞, where all orders of magnitude expressions (expressions of the form U ∼ O(f(N))) should be interpreted in the N → ∞ limit under the L 2 norm, so the expression above means lim N→∞ ‖r‖, ‖I‖ < ∞, where ‖ · ‖ is the L 2 norm on Ω. From Eq. (5), one can see that satisfying Eq. (8) implies that the network produces large, O( √ N) excitatory and inhibitory synaptic currents that cancel or "balance" each other to produce O(1) total synaptic input. Hence, networks that satisfy Eq. (8) are said to be "balanced networks" or to operate in a "balanced state." Remarkably, this condition alone is enough to derive a linear, closed form expression for firing rates in the N → ∞ limit even if the mapping from I to r is unknown. To see this, note that Eqs. (5) and (8) can only be realized when firing rates satisfy Wr + F = 0 (9) in the N → ∞ limit. In other words, if r ∞ = lim N→∞ r exists and is finite, then it must satisfy Wr ∞ + F = 0. Note that W and F do not depend on N . Hence, firing rates in the N → ∞ limit in balanced networks are determined by a linear integral equation of the first kind [21] despite the nonlinearity of the mapping from I to r.
To better understand how this works, consider a commonly used integro-differential equation model for firing rates [24,25] where g is a monotonically increasing function. Any fixed point of this system satisfies Wr + F = g -1 (r)/ √ N . As long as g -1 (r)/ √ N → 0 as N → ∞ at the fixed point, any fixed point approaches the solution to Eq. (9) as N → ∞.
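This argument can be illustrated numerically in a toy, non-spatial two-population setting. The gain function (a softplus), the weight matrix W, and the input F below are illustrative assumptions chosen so that the balanced solution −W⁻¹F is positive and the eigenvalues of W have negative real part; the fixed point of the rate dynamics then approaches the balanced solution as N grows:

```python
import numpy as np

# Toy, non-spatial two-population version of the fixed point argument.
# W, F, and the softplus gain g are assumed, illustrative choices with
# balanced solution -W^{-1} F = [1, 1] > 0 and stable eigenvalues of W.
def g(x):
    return np.logaddexp(0.0, x)   # softplus: smooth, increasing, overflow-safe

W = np.array([[1.0, -2.0],
              [1.5, -2.0]])       # [[w_ee, w_ei], [w_ie, w_ii]]
F = np.array([1.0, 0.5])
r_inf = -np.linalg.solve(W, F)    # balanced-state rates in the N -> infinity limit

errs = []
for N in [1e2, 1e4, 1e6]:
    s = np.sqrt(N)
    dt = 0.5 / s                  # small step: the dynamics are stiff for large N
    r = np.array([0.5, 0.5])
    for _ in range(20000):        # forward Euler on dr/dt = -r + g(sqrt(N)(Wr + F))
        r = r + dt * (-r + g(s * (W @ r + F)))
    errs.append(np.linalg.norm(r - r_inf))

print(errs)  # errors shrink as the network grows
```

At the fixed point, Wr + F = g⁻¹(r)/√N, so the deviation from −W⁻¹F vanishes like 1/√N, mirroring the argument above.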
Since Eq. (9) is a Fredholm integral equation of the first kind, it does not generically admit a solution. More specifically, for any integral operator W, there necessarily exist external input profiles F(x), for which there is no solution r(x) to Eq. (9). When this occurs, the network cannot satisfy Eq. (8). Therefore, any spatially extended network can be imbalanced by some external input profiles. Moreover, solutions to Eq. (9) must be nonnegative and stable for the balanced state to be realized (see Discussion).
When Eq. (9) does not admit a solution, a linear approximation provides an approximation to firing rates at finite N [12,17]. This approximation is obtained by making a rectified linear approximation to the mapping from mean input to mean firing rates, r a (x) = g a [I a (x)] + for a = e, i, where g a > 0 is the neurons' gain and [·] + denotes the positive part. For spiking network models, g a can be approximated heuristically [12] or by fitting simulation results to a rectified linear function [17]. In the examples considered below, we use the latter approach. When firing rates are positive, this gives the following integral equation for firing rates: εDr − Wr = F, (10) where ε = 1/ √ N is a small, positive number and D = diag(1/g e , 1/g i ). This equation is a finite N correction to Eq. (9). Since it is an integral equation of the second kind, it generically admits a unique solution for any F (unless ε is such that εD − W is singular). Therefore, even when Eq. (9) does not admit a solution, Eq. (10) will generally admit a solution. However, the solution to Eq. (10) can diverge as N → ∞ (i.e., ε → 0), indicating networks for which Eq. (9) does not admit a solution.
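A scalar (single-population) caricature of Eqs. (9) and (10) makes this behavior easy to see numerically. The kernel below is the one used in our example network; the net-inhibitory weight, gain constant, and the two input profiles are illustrative assumptions. The second-kind equation is solvable for every ε = 1/√N, but its solution diverges as ε → 0 when the input profile is incompatible with balance:

```python
import numpy as np

# Scalar caricature: solve (eps*d - w*K) r = F on [0, 1] with the kernel
# k(x, y) = min(x, y) - x*y. The scalar weight w < 0 (net inhibition),
# gain constant d, and input profiles are assumed, illustrative choices.
M = 400
x = (np.arange(M) + 0.5) / M          # midpoint quadrature nodes
h = 1.0 / M
K = (np.minimum.outer(x, x) - np.outer(x, x)) * h   # Nystrom discretization
w, d = -1.0, 1.0

results = {}
for name, F in [("sin(pi x)", np.sin(np.pi * x)), ("constant", np.ones(M))]:
    norms = []
    for eps in [1e-1, 1e-2, 1e-3, 1e-4]:            # eps = 1/sqrt(N)
        r = np.linalg.solve(eps * d * np.eye(M) - w * K, F)
        norms.append(np.sqrt(h * np.sum(r ** 2)))   # discrete L2 norm
    results[name] = norms
    print(name, [round(v, 2) for v in norms])

# For F = sin(pi x), an eigenfunction of K, the norms approach a finite limit
# (balance survives as eps -> 0); for constant F they keep growing, so the
# first-kind equation has no solution and balance is broken.
```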
A common approach to solving Fredholm integral equations like Eqs. (9) and (10) is to expand the equations using an orthonormal basis of eigenfunctions for the integral operator [21]. However, the integral operator is not guaranteed to have orthogonal eigenfunctions if it is not Hermitian. Because we assume w ab (x, y) = w ab (y, x), the integral operators W ab are Hermitian. However, the integral operator W that comprises our integral equations is not Hermitian because w ei ≠ w ie . Hence, even though the W ab have orthogonal eigenfunctions, W does not. We extended the standard theory of integral equations to account for this case in which a non-Hermitian integral operator is composed of multiple Hermitian operators. Our extension is summarized by the following theorem. All convergences in the theorem and proof should be interpreted in an L 2 sense.

Theorem 1
Suppose that W is defined as in Eqs. (6) and (7) where w ab (x, y) = w ab (y, x), w ab ∈ L 2 (Ω 2 ), and all four operators W ab share the same orthonormal basis of eigenfunctions {φ m ∈ L 2 (Ω)} m with associated eigenvalues {μ ab m } m . When ε ≠ 0 and εD − W m is nonsingular for all m, the series r = Σ m [εD − W m ] −1 F m φ m (11) converges to a solution to Eq. (10), where W m is the 2 × 2 matrix with entries [W m ] ab = μ ab m , F m = [⟨F e , φ m ⟩, ⟨F i , φ m ⟩] T , and ⟨·, ·⟩ is the L 2 inner product on Ω. When the series in Eq. (11) converges at ε = 0, it converges to a solution to Eq. (9).
Proof All convergences should be interpreted in the L 2 sense. Note that, since w ab (x, y) = w ab (y, x) and w ab ∈ L 2 , W ab is a self-adjoint Hilbert-Schmidt integral operator and is therefore a compact operator; therefore {φ m } m is a complete basis for L 2 (Ω) and μ ab m ∈ R by the spectral theorem.
We first show that the series converges when ε ≠ 0. Note that μ ab m → 0 as m → ∞ because each W ab is compact, so that εD − W m → εD as m → ∞ when ε ≠ 0. Therefore, [εD − W m ] −1 is uniformly bounded in m and the series in Eq. (11) converges because Σ m ‖F m ‖ 2 < ∞. Applying εD − W to the series term by term gives [εD − W]r = Σ m F m φ m . Since {φ m } m is a complete basis, this implies that εDr − Wr = F and therefore r satisfies Eq. (10). Now assume that the series in Eq. (11) converges at ε = 0. Repeating the argument above with ε = 0 shows that it converges to a solution to Eq. (9) whenever it converges.
Note that the convergence of Eq. (11) when ε > 0 is implied by the assumptions of the theorem, but the convergence when ε = 0 needs to be assumed separately. Hence, Eq. (10) admits solutions under relatively general assumptions, whereas the solvability of Eq. (9) is less general and requires that F m converges to zero faster than W m .
While the assumption of symmetric connectivity kernels assures that the W ab have orthonormal bases of eigenfunctions with real eigenvalues, we additionally assumed that these eigenfunctions were the same for all four combinations of a, b ∈ {e, i}. This would be satisfied, for example, if w ab (x, y) = w ab k(x, y) for some w ab ∈ R, but also applies to other settings. For example, if w ab (x, y) = w ab (x − y) are periodic or "wrapped" convolution kernels as in previous work [12,15,17], then the Fourier basis functions provide an orthonormal basis of eigenfunctions even when the four convolution kernels w ab (x − y) are not multiples of each other. In this case, Eq. (11) is the Fourier series for r(x). In Discussion, we comment on how some of these assumptions could be weakened.
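This shared-eigenbasis property of wrapped convolution kernels can be verified numerically: any circulant discretization of a periodic convolution kernel has the discrete Fourier modes as eigenvectors, regardless of the kernel's shape. The two kernel shapes below are arbitrary illustrative choices:

```python
import numpy as np

# Any circulant (periodic, translationally invariant) discretized kernel has
# the discrete Fourier modes as eigenvectors, whatever its shape, so all
# such W_ab share an eigenbasis. The two kernel shapes are assumed.
M = 128
x = np.arange(M) / M

def circulant(vals):
    """C[i, j] = vals[(i - j) mod M] / M: discretized periodic convolution."""
    idx = (np.arange(M)[:, None] - np.arange(M)[None, :]) % M
    return vals[idx] / M

d = np.minimum(x, 1 - x)                     # periodic distance from 0
W_ee = circulant(np.exp(-(d / 0.1) ** 2))    # narrow excitatory-like kernel
W_ei = circulant(-np.exp(-(d / 0.3) ** 2))   # wide inhibitory kernel

m = 3
phi = np.exp(2j * np.pi * m * x)             # Fourier mode
for Wmat in (W_ee, W_ei):
    v = Wmat @ phi
    mu = v[0] / phi[0]                       # candidate eigenvalue
    assert np.allclose(v, mu * phi)          # phi is an eigenvector of both
print("Fourier mode is an eigenvector of both kernels")
```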
When the W ab do not share orthonormal bases of eigenfunctions, the analysis must be performed directly on the spectrum of W instead of decomposing the analysis into the spectra of each W ab . Specifically, if W, treated as an integral operator on L 2 (Ω) × L 2 (Ω), has an orthonormal basis of eigenfunctions {φ m } m where φ m ∈ L 2 (Ω) × L 2 (Ω) with real eigenvalues {μ m }, then r = Σ m [εD − μ m ] −1 ⟨F, φ m ⟩φ m . However, note that W may not have an orthonormal basis of eigenfunctions with real eigenvalues even when kernels are spatially symmetric (w ab (x, y) = w ab (y, x)) because a Hermitian W would also require that w ei (x, y) = w ie (y, x), which is generally not true for excitatory-inhibitory networks. Solving Fredholm equations with non-Hermitian integral operators is generally more difficult as diagonalization methods cannot be applied directly. Hence, our derivation of Eq. (11) extends standard diagonalization methods because it solves a Fredholm equation with a non-Hermitian kernel, albeit one with a special structure in which W is composed of multiple Hermitian kernels. This structure arises naturally in spatial neural network models with excitatory and inhibitory populations.
A corollary gives a simpler expression for the solution to Eq. (9) when the spatial shape of the connectivity kernels and external input are the same for excitatory and inhibitory neurons.

Corollary 1
Suppose, in addition to the assumptions of Theorem 1, that w ab (x, y) = w ab k(x, y) and F a (x) = F a F(x) for some w ab , F a ∈ R. Then a solution to Eq. (9) exists and is equal to r(x) = −[W −1 F] Σ m (F m /μ m )φ m (x) (12) if the series converges. Here, W = [w ab ] is the 2 × 2 matrix of connection weights, F = [F e , F i ] T , F m = ⟨F, φ m ⟩, and {μ m } m and {φ m } m are the eigenvalues and orthonormal eigenfunctions of the integral operator, K, with kernel k(x, y). Note that the first product in Eq. (12) (before the sum) is a vector (it has an excitatory and inhibitory component), but is not a function of x. On the other hand, the second term (the sum) is scalar, but depends on x. Hence, the solution is broken into its spatial and excitatory-inhibitory components.
If K has a nontrivial nullspace, then some of the eigenvalues μ m will be zero. For Eq. (9) to be solvable in this case, F m = F, φ m must be zero for all such m and the corresponding terms in the series in Eq. (12) should be interpreted as zero.
The same separation of terms cannot be applied to solving Eq. (10), i.e., Eq. (11) cannot be simplified in the same way when ε ≠ 0. However, the evaluation of Eq. (11) when ε ≠ 0 is simplified to some degree by noting that W m = μ m W and F m = F m F under the assumptions of Corollary 1, and therefore r = Σ m [εD − μ m W] −1 F F m φ m . (13) Since all the examples we consider satisfy the assumptions of Corollary 1, we hereafter focus on Eqs. (12) and (13) in place of the more general Eq. (11). The analysis of specific networks with specific external input profiles can proceed as follows: The convergence of the series in Eq. (12) should be checked first. If it does not converge, then the network cannot maintain balance as N → ∞ and Eq. (13) must be used at finite N instead. Even when Eq. (12) does converge, Eq. (13) can be used as an alternative to Eq. (12) for approximating firing rates in spiking network simulations. We next compare Eqs. (12) and (13) to results from large spiking network simulations.
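This recipe can be sketched numerically for the kernel k(x, y) = min(x, y) − xy used in our example, evaluating the eigenfunction series in Eq. (13) and checking it against a direct grid (Nyström) solution of the second-kind equation. The weight matrix, gains, input profile, and ε below are illustrative assumptions, not the parameters used in the simulations:

```python
import numpy as np

# Compare the eigenfunction series against a direct grid solution of
# (eps*D - W) r = F for k(x, y) = min(x, y) - x*y. Wbar, Fbar, gains,
# and eps are assumed, illustrative values.
M = 400
xg = (np.arange(M) + 0.5) / M
h = 1.0 / M
k = np.minimum.outer(xg, xg) - np.outer(xg, xg)

Wbar = np.array([[1.0, -2.0],
                 [1.5, -2.0]])                 # 2x2 matrix of scalar weights
Fbar = np.array([1.0, 0.5])
D = np.diag([1.0, 1.0])                        # diag(1/g_e, 1/g_i) with gains 1
eps = 0.05
Fx = np.sin(np.pi * xg) + 0.3 * np.sin(3 * np.pi * xg)   # input profile F(x)

# Series: r(x) = sum_m [eps*D - mu_m*Wbar]^{-1} Fbar F_m phi_m(x), with
# mu_m = 1/(pi m)^2 and phi_m(x) = sqrt(2) sin(pi m x).
r_series = np.zeros((2, M))
for m in range(1, 200):
    mu = 1.0 / (np.pi * m) ** 2
    phi = np.sqrt(2.0) * np.sin(np.pi * m * xg)
    Fm = h * np.sum(Fx * phi)                  # <F, phi_m> by quadrature
    coef = np.linalg.solve(eps * D - mu * Wbar, Fbar * Fm)
    r_series += np.outer(coef, phi)

# Direct solve of the same equation as a 2M x 2M block system on the grid.
A = np.block([
    [eps * D[0, 0] * np.eye(M) - Wbar[0, 0] * k * h, -Wbar[0, 1] * k * h],
    [-Wbar[1, 0] * k * h, eps * D[1, 1] * np.eye(M) - Wbar[1, 1] * k * h],
])
r_direct = np.linalg.solve(A, np.concatenate([Fbar[0] * Fx, Fbar[1] * Fx]))
r_direct = r_direct.reshape(2, M)

print(np.max(np.abs(r_series - r_direct)))     # small discretization error only
```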

Comparison of mean-field theory to spiking network simulations
We first consider the simulations discussed above and shown in Fig. 1. For this example, we can write w ab (x, y) = w ab k(x, y) where w ab = 12p̄ ab j ab q b and k(x, y) = min(x, y) − xy, and we can write F a (x) = F a F(x) with F(x) = sin(πx). Therefore, we can apply Corollary 1. It is easily checked that the integral operator [Kr](x) = ∫ Ω k(x, y)r(y) dy has eigenvalues μ m = 1/(πm) 2 and orthonormal eigenfunctions φ m (x) = √2 sin(πmx) for m ∈ N. Since F(x) = sin(πx) is itself an eigenfunction, its expansion in the eigenfunction basis has only one nonzero term, F 1 = ⟨F, φ 1 ⟩ = 1/ √ 2 and F m = 0 for m ≠ 1. Therefore, the series in Eq. (12) has only one nonzero term and reduces to r(x) = −π 2 [W −1 F] sin(πx), (14) which represents the N → ∞ limit of firing rates in the balanced state. The finite N correction in Eq. (13) similarly has only one term and is given by r(x) = [εD − W/π 2 ] −1 F sin(πx). (15) Comparing these expressions to firing rates computed from simulations shows that Eq. (15) provides a noticeably better approximation at N = 1000 (Fig. 1(C); dashed is closer than dotted to solid), but both equations agree very well at larger values of N (Fig. 1(D), (E); dashed and dotted are close to solid), as expected. Figure 1(F), (G) shows individual neurons' firing rates versus their time-averaged input currents (dots) and compares them to the rectified linear fit used to estimate g e and g i (solid curves). Note that the rectified linear fit is not highly accurate, especially for excitatory neurons, because the true relationship between I and r is not a rectified linear function. However, this rough approximation is sufficient because g e and g i only affect Eq. (15) at order O(ε).
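For the kernel k(x, y) = min(x, y) − xy, which is the Green's function of −d²/dx² with zero boundary conditions on [0, 1], the operator K has eigenvalues 1/(πm)² and orthonormal eigenfunctions √2 sin(πmx). A quick numerical sanity check (the grid size is arbitrary):

```python
import numpy as np

# Verify the eigenpairs of K with kernel k(x, y) = min(x, y) - x*y by
# discretizing the operator on a fine grid.
M = 1000
x = (np.arange(M) + 0.5) / M
h = 1.0 / M
K = (np.minimum.outer(x, x) - np.outer(x, x)) * h

for m in [1, 2, 3]:
    phi = np.sqrt(2.0) * np.sin(np.pi * m * x)
    mu = 1.0 / (np.pi * m) ** 2
    err = np.max(np.abs(K @ phi - mu * phi))
    print(m, err)   # only small discretization error remains
```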
We next consider a more interesting example in which the series in Eqs. (13) and (12) have infinitely many nonzero terms. In particular, consider the same recurrent network with external input Since we have not changed the network, μ m and φ m (x) are the same as above and the only term that changes from above is as the firing rates in the N → ∞ limit in the balanced state. This is only a valid firing rate solution when r(x) > 0 for all x ∈ [0, 1], which requires that c is sufficiently small. Specifically, c must be small enough that (1 − c) sin(πx) + 2c[cos(4πx) − cos(2πx)] ≥ 0 for all x ∈ [0, 1]. For the finite N correction in Eq. (13), we were unable to obtain a closed form expression for the series, but it can easily be summed numerically using the equations for μ m and F m derived above. Network simulations show asynchronous-irregular spiking activity (Fig. 2(A)) and excitatory-inhibitory balance (Fig. 2(B)). Comparing our theoretical equations to firing rates from simulations shows that Eq. (13) is much more accurate than Eq. (17) even at larger values of N , but the convergence of the two equations to each other (and to results from simulations) is visible when comparing N = 5000 to N = 20,000 (Fig. 2(C)-(E)).
We finally consider the same recurrent network with external input In this case, Therefore, F m /μ m ∼ m −1 as m → ∞ and the series in Eq. (12) diverges. This implies that the network does not maintain excitatory-inhibitory balance as N → ∞ because Eq. (9) does not admit a solution. Equation (10) still admits a solution for each N , given by Eq. (13), but this solution diverges as N → ∞ (ε → 0). Despite the break in balance as N → ∞, network simulations still show asynchronous-irregular spiking activity (Fig. 3(A)) and approximate excitatory-inhibitory balance (Fig. 3(B)) at finite N . Comparing Eq. (13) to firing rates from simulations shows that it is still somewhat accurate for multiple values of N (Fig. 3(C)-(E)).
Interestingly, the break in balance is not highly apparent in Fig. 3(B), as the currents still appear to be approximately balanced. Our mathematical analysis shows that the mean total input current cannot remain O(1) for all x as N → ∞. However, we expect the divergence to grow like O( √ N) when balance is broken [12], so very large values of N could be necessary for imbalance to become visible.

Discussion
We extended the mean-field theory of firing rates in balanced networks to account for spatial connectivity structures in which connection probabilities depend on the spatial location of pre-and postsynaptic neurons without the translation invariance assumed in previous work. Any such network cannot maintain balance as N → ∞ for every external input profile, and we derived conditions on the external input profile and connection probabilities required for balance to be possible. We also derived a finite N approximation to the firing rates that is applicable even when strict balance cannot be achieved as N → ∞. We compared our theoretical results to large simulations of randomly connected integrate-and-fire neuron models. While the equations left some error at finite N , they captured the overall shape of firing rates at large values of N .
For balance to be realized, the firing rate profiles given by Eq. (12) need to be positive at all values of x ∈ Ω. While we did not derive explicit conditions for this positivity, the solution in Eq. (12) can be checked for positivity and should only be interpreted as a valid solution when rates are positive. When Eq. (12) gives negative rates for some x ∈ Ω, mean firing rates would approach zero as N → ∞ at some spatial locations. It is possible that the remaining neurons in the network that have nonzero firing rates could realize balance separately, forming a balanced sub-network. This possibility, which requires a nonlinear analysis, will be explored in future work.
We parameterized feedforward input as a time-constant, deterministic function F(x). In reality, cortical populations receive noisy, time-varying feedforward input from neurons in other cortical layers, cortical areas, and thalamus. These can be modeled more realistically by generating the spike trains of these presynaptic populations and assigning feedforward connectivity analogous to the recurrent connectivity considered here [15,17,22]. In the mean-field theory, this gives F(x) = [W F r F ](x) where r F (y) quantifies the spatial profile of firing rates in the presynaptic population, which might live on a different space y ∈ Ω y , and W F is the mean-field connectivity kernel defined analogously to W. In this case, Eq. (9) admits a solution, i.e., balance can be achieved, whenever the range of W F is contained within the range of W. See [17] for a development of this idea for networks with translationally invariant connectivity profiles. This provides a way to test the spatial structure of multi-layered cortical circuits for the ability to maintain balance.
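A numerical sketch of this range condition, in the special case where W F is built from the same kernel K as the recurrent connectivity: if F = Ks is produced by smoothing a presynaptic rate profile s through K, the balance coefficients F m /μ m reduce to s m and remain square-summable, whereas a generic profile outside the range of K gives a divergent sum. The kernel and profiles are illustrative assumptions:

```python
import numpy as np

# Range condition, special case W_F = K: if F = K s then F_m / mu_m = s_m
# and sum_m (F_m/mu_m)^2 stays finite; a generic profile (constant input)
# gives a divergent sum. Kernel and profiles are assumed, illustrative.
M = 2000
x = (np.arange(M) + 0.5) / M
h = 1.0 / M
K = (np.minimum.outer(x, x) - np.outer(x, x)) * h

s = 1.0 + 0.5 * np.cos(2 * np.pi * x)   # presynaptic rate profile (assumed)
totals = {}
for name, F in [("F = K s", K @ s), ("F = 1", np.ones(M))]:
    total = 0.0
    for m in range(1, 200):
        phi = np.sqrt(2.0) * np.sin(np.pi * m * x)
        Fm = h * np.sum(F * phi)        # <F, phi_m>
        mu = 1.0 / (np.pi * m) ** 2
        total += (Fm / mu) ** 2         # partial sum of sum_m (F_m/mu_m)^2
    totals[name] = total
    print(name, round(total, 2))

# The partial sums stay near ||s||^2 for F = K s but grow without bound
# with the mode cutoff for F = 1.
```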
In addition, balance requires that the fixed point realized by Eq. (12) is stable. Stability of firing rate fixed points in networks of integrate-and-fire neurons can be very complicated and is outside the scope of this study. Stability for integro-differential equations of the form ṙ = −r + g(Wr + X) can be analyzed more easily, especially when g is assumed to be linear at the fixed point. This approach to stability analysis can provide a heuristic approximation to stability in spiking networks, but instabilities can arise in spiking networks that are not captured by the integro-differential equation due to the inherent resonance of