- Research
- Open Access

# Stable Control of Firing Rate Mean and Variance by Dual Homeostatic Mechanisms

- Jonathan Cannon^{1}
- Paul Miller^{1}

**7**:1

https://doi.org/10.1186/s13408-017-0043-7

© The Author(s) 2017

**Received:** 27 May 2016 **Accepted:** 29 December 2016 **Published:** 17 January 2017

## Abstract

Homeostatic processes that provide negative feedback to regulate neuronal firing rates are essential for normal brain function. Indeed, multiple parameters of individual neurons, including the scale of afferent synapse strengths and the densities of specific ion channels, have been observed to change on homeostatic time scales to oppose the effects of chronic changes in synaptic input. This raises the question of whether these processes are controlled by a single slow feedback variable or multiple slow variables. A single homeostatic process providing negative feedback to a neuron’s firing rate naturally maintains a stable homeostatic equilibrium with a characteristic mean firing rate; but the conditions under which multiple slow feedbacks produce a stable homeostatic equilibrium have not yet been explored. Here we study a highly general model of homeostatic firing rate control in which two slow variables provide negative feedback to drive a firing rate toward two different target rates. Using dynamical systems techniques, we show that such a control system can be used to stably maintain a neuron’s characteristic firing rate mean and variance in the face of perturbations, and we derive conditions under which this happens. We also derive expressions that clarify the relationship between the homeostatic firing rate targets and the resulting stable firing rate mean and variance. We provide specific examples of neuronal systems that can be effectively regulated by dual homeostasis. One of these examples is a recurrent excitatory network, which a dual feedback system can robustly tune to serve as an integrator.

## Keywords

- Homeostasis
- Dynamical systems
- Stability
- Integrator
- Synaptic scaling
- Averaging

## Mathematics Subject Classification (2000)

- 62P10
- 92C20
- 37C10
- 37C25
- 60H10
- 34F05
- 37E99

## 1 Introduction

Homeostasis, the collection of slow feedback processes by which a living organism counteracts the effects of external perturbations to maintain a viable state, is a topic of great interest to biologists [1, 2]. The brain in particular requires a precise balance of numerous state variables to remain properly operational, so it is no surprise that multiple homeostatic processes have been identified in neural tissues [3]. Some of these processes appear to act at the level of individual neurons to maintain a desirable rate of spiking. When chronic changes in input statistics dramatically lower a neuron’s firing rate, multiple slow changes take place that each act to increase the firing rate again, including the collective scaling of afferent synapses [4, 5] and the adjustment of intrinsic neuronal excitability through adding and removing ion channels [5–7]. These changes suggest the existence of multiple independent slowly-adapting variables that each integrate firing rate over time and provide negative feedback.

Here we undertake an analytical investigation of the dynamics of homeostasis via two independent slow mechanisms (“dual homeostasis”). Our focus on dual homeostasis is partially motivated by the rough breakdown of firing rate homeostatic mechanisms into two categories, synaptic and intrinsic. In our analytical work, we maintain sufficient generality to describe a broad class of firing rate control mechanisms, but we illustrate our results using examples in which homeostasis is governed by one synaptic mechanism acting multiplicatively on neuronal inputs and one intrinsic mechanism acting additively to increase or decrease neuronal excitability. We limit our scope to dual homeostasis to allow us to derive strong analytical results.

It is not immediately clear that dual homeostasis should even be possible. When two variables independently provide negative feedback to drive the same signal toward different targets, one possible outcome is “wind-up” [2], where each variable perpetually ramps up or down in competition with the other to drive the signal toward its own target.

In a recent publication [8], we perform numerical simulations of dual homeostasis (intrinsic and synaptic) in biophysically detailed neurons. We show empirically that this dual homeostasis is stable across a broad swath of parameter space and that it serves to restore not only a characteristic mean firing rate but also a characteristic firing rate variance after perturbations.

Here, we demonstrate analytically that stable homeostasis occurs in a broad family of dual control systems. Further, we find that dual homeostatic control naturally preserves both the mean and the variance of the firing rate, a task impossible for a homeostatic system with a single slow feedback mechanism. We identify broad conditions under which a dually homeostatic neuron possesses a stable homeostatic fixed point, and we derive estimates of the characteristic firing rate mean and variance maintained by homeostasis in terms of homeostatic parameters. We use rate-based neurons and Poisson-spiking neurons for illustration, but our main result is sufficiently general to apply to any model neuron.

One specific application in which such a control system could play an essential role is in tuning a recurrent excitatory network to serve as an integrator. This task is generally considered one that requires biologically implausible precise calibration of multiple parameters [9, 10] and is not well understood (though various solutions to the fine tuning problem have been proposed in [11–14]). In [8], we show empirically that a heterogeneous network of dually homeostatic neurons can tune itself to serve as an integrator. Here, we demonstrate analytically in a simple model of a recurrent excitatory network that integrating behavior can be stabilized by single-cell dual homeostasis and that this stability is robust to the homeostatic parameters of the neurons in the network.

In Sect. 2, we introduce our generalized model of dual homeostasis with the simple but informative example of synaptic/intrinsic firing rate control, and we discuss the reasons that stable homeostatic control is possible for this system. In Sect. 3, we pose a highly general mathematical model of dual homeostatic control. We derive an estimate of the firing rate mean and variance that characterize the fixed points in a given dual homeostatic control system and conditions under which these fixed points are stable. In Sect. 4, we give further specific examples that are encompassed by our general result. In Sect. 5, we use our results to explore dual homeostasis as a strategy for integrator tuning in recurrent excitatory networks. In Sect. 6, we summarize and discuss our results.

## 2 Preliminary Examples

We consider control systems in which two slow homeostatic variables provide negative feedback to a neuronal firing rate *r*. In this section, we focus on an example in which the two control variables are (1) a factor *g* describing the collective scaling of the strengths of afferent synapses and (2) the neuron's "excitability" *x*, which represents a horizontal shift in the mapping from input current to firing rate. An increase in *x* can be understood as an increase in excitability (or a decrease in firing threshold) and might be implemented in vivo by a change in ion channel density as suggested in [7]. The choice of *x* and *g* as homeostatic control variables is motivated by the observation that synaptic scaling and homeostasis of intrinsic excitability operate concurrently in mammalian cortex [5]. We write this dual control system in the form

$$\tau_{{{x}}} \dot{{{x}}} = f_{{{x}}}(r_{{{x}}}) - f_{{{x}}}(r), \qquad \tau_{g} \dot{g} = g \bigl(f_{g }(r_{g }) - f_{g }(r) \bigr), \quad (1) $$

where *r* is a neuronal firing rate, \(r_{{{x}}}\) and \(r_{g }\) are the "target firing rates" of the two homeostatic mechanisms, \(f_{{{x}}}\) and \(f_{g}\) are increasing functions describing the effect of deviations from the target rates on the two control variables, and \(\tau_{{{x}}}\) and \(\tau_{g}\) are time constants assumed to be long on the time scale of variation of *r*. An extra factor of *g* multiplies the second ODE because *g* acts as a multiplier and must remain nonnegative. As a result, *g* increases/decreases exponentially (or \(\ln(g)\) increases/decreases linearly) if the firing rate *r* is below/above the target rate \(r_{g}\). This extra *g* multiplier is not essential for any of the results we derive here.

The rate *r* may represent the firing rate of any type of model neuron, or a correlate of firing rate such as calcium concentration. Likewise, the target rates \(r_{{x}}\) and \(r_{g}\) may represent firing rates or calcium levels at which the corresponding homeostatic mechanisms equilibrate. We assume that *r* changes quickly relative to \(\tau_{{{x}}}\) and \(\tau_{g}\) and that *r* is "distribution-ergodic." This term is defined precisely in the next section; intuitively, it means that over a sufficiently long time, *r* behaves like a series of independent samples from a stationary distribution. This allows us to approximate the right-hand sides in (1) by averages over the distributions of \(f_{{x}}(r)\) and \(f_{g}(r)\). We will use \(\langle\cdot\rangle\) to represent the mean of a stationary distribution. Since the dynamics of the firing rate depends on the control variables *x* and *g*, the distributions we consider here also depend on these variables. Averaging (1) over *r*, we can write

$$\tau_{{{x}}} \dot{{{x}}} = f_{{{x}}}(r_{{{x}}}) - \bigl\langle f_{{{x}}}(r) \bigr\rangle , \qquad \tau_{g} \dot{g} = g \bigl(f_{g }(r_{g }) - \bigl\langle f_{g }(r) \bigr\rangle \bigr). \quad (2) $$

For illustration, we consider a simple rate-based neuron driven by an input current \(I{(t)}\): the input is scaled by the synaptic strength *g*, and the neuron's response is additively shifted by the excitability *x*. Thus, the firing rate is described by the equation

$$\tau_{r} \dot{r} = -r + g I{(t)} + {{x}}. $$

### 2.1 Constant Input

First, suppose that the input is constant: \(I{(t)} = \phi\) for a fixed value *ϕ*. In this case, *r* assumes its asymptotic value \(r = g\phi+ {{x}}\) on time scale \(\tau_{r}\) and closely tracks this value as *g* and *x* change slowly. Thus, we have \(\langle f_{{{x}}}(r)\rangle= f_{{{x}}}(g\phi +{{x}})\) and \(\langle f_{g }(r)\rangle= f_{g }(g\phi+{{x}})\). To find the *x*-nullcline (the set of points \(({{x}}, g)\) where \(\dot {{{x}}} = 0\)), we set \(\dot{{{x}}} = 0\) in (2). Since \(f_{{{x}} }\) is increasing, it is invertible over its range, so we find that the *x*-nullcline in the \(({{x}}, g)\) phase plane consists of the set \(g\phi+{{x}} = r_{{{x}} }\). Similarly, the *g*-nullcline is the line \(g\phi+{{x}} = r_{g }\), plus the set \(g = 0\). Fixed points of this ODE are precisely the intersections of the nullclines. We are interested primarily in fixed points with \(g>0\), so we ignore the set \(g = 0\). Representative vector fields, nullclines, and trajectories for (2) with \(r = g\phi+ {{x}}\) are illustrated in Fig. 1.

From the nullcline equations, it is clear that if \(r_{{{x}} } \neq r_{g }\), there are no fixed points with \(g>0\). If \(r_{g}>r_{{x}}\), *g* increases and *x* decreases without bound (Fig. 1A); if \(r_{g}< r_{{x}}\), then *g* goes to zero (Fig. 1B). Intuitively, this is because the two mechanisms are playing tug-of-war over the firing rate, each ramping up (or down) in a fruitless effort to bring the firing rate to its target. In control theory, this phenomenon is called “wind-up.”

In the (degenerate) case \(r_{{{x}} } = r_{g }\), the nullclines overlap perfectly, forming a line of fixed points (Fig. 1C). This situation is undesirable because it leaves an extra degree of freedom: homeostasis has no unique fixed point, so the neuron could reach a set point with any synaptic strength, including arbitrarily strong or weak synapses, depending on initial conditions. Further, this state is destroyed by any perturbation to the target rates, so it could not be easily sustained in a biological system.

These results might lead us to believe that a control system consisting of two homeostatic control mechanisms cannot drive a neuron’s firing rate toward a single stable set-point. However, we shall find that this is only because we have posed the problem in the context of a perfectly constant input \(I{(t)}\). When \(I{(t)}\) varies, the resulting picture is very different.

### 2.2 Varying Input

Now suppose that the input \(I{(t)}\) is not constant at *ϕ* but instead fluctuates randomly around *ϕ*. One simple example is \(I{(t)} = \phi+ \sigma\xi{(t)}\), where \(\xi{(t)}\) is white noise with unit variance. In this case, *r* is an Ornstein-Uhlenbeck (OU) process and is described by the stochastic differential equation

$$\tau_{r} \,dr = \bigl(g\phi+ {{x}} - r \bigr) \,dt + g\sigma \,dW_{t}. $$

An OU process approaches a stationary Gaussian distribution from any initial condition after time \(T \gg\tau_{r}\). In this case, this distribution has mean \(g\phi+{{x}}\) and variance \(\frac{g^{2}\sigma^{2}}{2\tau_{r}}\).
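These stationary statistics can be checked with a short Euler-Maruyama simulation; this is a minimal sketch, and all parameter values below are arbitrary illustrative choices.

```python
import random

# Euler-Maruyama simulation of tau_r dr = (g*phi + x - r) dt + g*sigma dW,
# checking the stationary mean g*phi + x and variance g^2 sigma^2 / (2 tau_r).
random.seed(0)
tau_r, g, x, phi, sigma = 0.1, 1.0, 0.3, 0.7, 1.0
dt, n_steps, burn_in = 1e-3, 500_000, 50_000

r = g * phi + x
samples = []
for step in range(n_steps):
    dW = random.gauss(0.0, dt ** 0.5)
    r += ((g * phi + x - r) * dt + g * sigma * dW) / tau_r
    if step >= burn_in:                  # discard transient before sampling
        samples.append(r)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)   # should be close to g*phi + x = 1.0 and g^2 s^2/(2 tau_r) = 5.0
```

With these choices the stationary distribution has mean 1.0 and variance 5.0, and the long simulation recovers both to within sampling error.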

Why does the introduction of variations in \(I{(t)}\) change the situation at all? This is closely connected with the basic insight that the mean value of a function *f* over some distribution of arguments *r*, written \(\langle f(r)\rangle\), may not be the same as the function *f* applied to the mean of the arguments, written \(f(\langle r \rangle )\). The mean value of \(f(r)\) is affected by the spread of the distribution of *r* and by the curvature of *f*. Only linear functions *f* have the property that \(\langle f(r)\rangle= f(\langle r \rangle)\) for all distributions of *r*.

As a consequence, “satisfying” both homeostatic mechanisms may not require the condition \(r_{{{x}} } = \langle r \rangle= r_{g }\). The value of *ẋ* averaged over time may be zero even when the average rate \(\langle r\rangle\) is not exactly \(r_{{{x}} }\), and the average value of *ġ* may be zero when \(\langle r \rangle\) is not exactly \(r_{g }\). The conditions required to satisfy each mechanism depend on the entire distribution of *r*, including the mean \(\langle r \rangle\) and the variance \(\operatorname{var}(r)\). As long as at least one of the homeostatic mechanisms controls \(\operatorname{var}(r)\) and \(\langle r \rangle\), the system has two degrees of freedom and therefore may satisfy the two fixed-point equations nondegenerately, that is, at a single isolated fixed point.
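A quick numerical illustration of this point, using the quadratic function \(f(r) = r^{2}\) and Gaussian samples (all values arbitrary):

```python
import random

# For f(r) = r^2, <f(r)> = f(<r>) + var(r): the mean of f over a spread-out
# distribution differs from f applied to the mean of the distribution.
random.seed(1)
mu, sd = 1.0, 2.0
rs = [random.gauss(mu, sd) for _ in range(200_000)]

f = lambda r: r ** 2
mean_of_f = sum(f(r) for r in rs) / len(rs)   # <f(r)>, approx mu^2 + sd^2 = 5
f_of_mean = f(sum(rs) / len(rs))              # f(<r>), approx mu^2 = 1
print(mean_of_f, f_of_mean)
```

The gap between the two quantities is exactly the variance of *r*, which is what gives the second (quadratic) mechanism its leverage over \(\operatorname{var}(r)\).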

### Example 1

Rate model with linear and quadratic feedback

Suppose that *x* is controlled linearly and *g* quadratically: \(f_{{x}}(r) = r\) and \(f_{g}(r) = r^{2}\). Writing the averaged equations (2) with these choices for *x* and *g*, we have

$$\tau_{{{x}}} \dot{{{x}}} = r_{{{x}}} - \langle r \rangle, \qquad \tau_{g} \dot{g} = g \bigl(r_{g }^{2} - \bigl\langle r^{2} \bigr\rangle \bigr). $$

Averaging over the stationary distribution of *r*, we have \(\langle r \rangle= g\phi+ {{x}}\) and \(\langle r^{2} \rangle= \operatorname{var}(r) + \langle r \rangle^{2} = \frac{g^{2}\sigma ^{2}}{2\tau_{r}} + (g\phi+ {{x}})^{2}\), so

$$\tau_{{{x}}} \dot{{{x}}} = r_{{{x}}} - (g\phi+ {{x}}), \qquad \tau_{g} \dot{g} = g \biggl(r_{g }^{2} - \frac{g^{2}\sigma^{2}}{2\tau_{r}} - (g\phi+ {{x}})^{2} \biggr). $$

The fixed points of this ODE can be studied using basic dynamical systems methods. Setting \(\dot{{{x}}} = \dot{g} = 0\), we find that this equation has fixed points at \(({{x}}^{*}, g^{*}) = (r_{{{x}} }, 0)\) and \(({{x}}^{*}, g^{*}) = (r_{{{x}} } \pm\frac{\phi\sqrt{2\tau_{r}(r_{g }^{2} - r_{{{x}} }^{2})}}{\sigma} , \mp\frac{\sqrt{2\tau_{r} (r_{g }^{2} - r_{{{x}} }^{2})}}{\sigma} )\). We are not interested in the first fixed point because it has \(g^{*}=0\). Of the next two fixed points, we are interested only in the one with nonnegative \(g^{*}\). This fixed point exists with \(g^{*}\neq0\) if and only if the term under the square root is positive, that is, if and only if \(r_{g } > r_{{{x}} }\). It is asymptotically stable (i.e., attracts trajectories from all initial conditions in its neighborhood) if the Jacobian of the ODE at the fixed point has two eigenvalues with negative real part. If one or more eigenvalues have positive real part, then it is asymptotically unstable. At this fixed point, the Jacobian is \(J =\bigl ( {\scriptsize\begin{matrix}{} -\frac{1}{\tau_{{{x}} }} &-\frac{\phi}{\tau_{{{x}}}} \cr -\frac{2r_{{{x}} } g^{*}}{\tau_{g }} & -\frac{(g^{*}\sigma )^{2}}{\tau_{r}\tau_{g }} - \frac{2r_{{{x}} } \phi g^{*}}{\tau_{g }}\end{matrix}}\bigr ) \), and it is easy to check that both eigenvalues have negative real part. We conclude that this (averaged) system possesses a stable homeostatic set-point *if and only if*
\(r_{g }>r_{{{x}} }\). At such a fixed point, the firing rate has mean \(\langle r \rangle= r_{{{x}} }\) and variance \(\langle(r - \langle r \rangle)^{2} \rangle= r_{g }^{2} - r_{{{x}} }^{2}\). Conversely, given any firing rate mean \(\mu ^{*}\) and variance \(\nu^{*}\ge0\), we can choose targets to stabilize the neuron at this firing rate mean and variance by setting \(r_{{{x}} } = \mu^{*}\) and \(r_{g } = \sqrt{\nu^{*} + r_{{{x}} }^{2}}\). Note that these target expressions do *not* depend on *ϕ* or *σ*, the parameters of the input \(I(t)\). Thus, if *ϕ* or *σ* changes, then the homeostatic control system will return the neuron to the same characteristic mean and variance.
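These fixed-point and stability claims can be verified numerically. The sketch below (parameters arbitrary, chosen with \(r_{g} > r_{{x}}\)) plugs \(({{x}}^{*}, g^{*})\) into the averaged ODE and applies the two-dimensional stability criterion (negative trace, positive determinant) to the Jacobian quoted above.

```python
import math

# Averaged system of Example 1: tau_x x' = r_x - (g*phi + x),
#   tau_g g' = g*(r_g^2 - g^2 s^2/(2 tau_r) - (g*phi + x)^2).
phi, sigma, tau_r, tau_x, tau_g = 1.0, 1.0, 0.1, 1.0, 1.0
r_x, r_g = 1.0, 2.0          # r_g > r_x, so a stable fixed point should exist

g_star = math.sqrt(2 * tau_r * (r_g**2 - r_x**2)) / sigma
x_star = r_x - phi * g_star

# Right-hand sides of the averaged ODE, evaluated at the candidate fixed point:
dx = (r_x - (g_star * phi + x_star)) / tau_x
dg = g_star * (r_g**2 - (g_star * sigma)**2 / (2 * tau_r)
               - (g_star * phi + x_star)**2) / tau_g

# Jacobian from the text; a 2x2 system has eigenvalues with negative real
# parts iff trace(J) < 0 and det(J) > 0.
J = [[-1 / tau_x, -phi / tau_x],
     [-2 * r_x * g_star / tau_g,
      -(g_star * sigma)**2 / (tau_r * tau_g) - 2 * r_x * phi * g_star / tau_g]]
trace = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
print(dx, dg, trace, det)    # dx = dg = 0 at the fixed point; trace < 0 < det
```

Both right-hand sides vanish at \(({{x}}^{*}, g^{*})\), and the trace/determinant test confirms asymptotic stability for this parameter choice.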

Intuitively, homeostasis in this example proceeds in three stages:

- 1. If the mean firing rate is below \(r_{{{x}} }\), then *g* and *x* increase, both acting to increase the mean firing rate until it is in the neighborhood of \(r_{{{x}} }\). If the mean firing rate is above the targets, then *g* and *x* both decrease to lower the mean firing rate to near \(r_{{{x}} }\).
- 2. If *g* is now small, then the firing rate variance \(\operatorname {var}(r)\) is small, and the second moment \(\langle r^{2} \rangle= \operatorname {var}(r) + \langle r \rangle^{2}\) is close to \(r_{{x}}^{2}\). Once \(\langle r \rangle\) slightly exceeds \(r_{{x}}\), the averaged control system has
  $$\dot{{{x}}} \approx f_{{x}}(r_{{x}}) - \bigl\langle f_{{x}}(r) \bigr\rangle = r_{{x}} - \langle r \rangle< 0, $$
  but \(\langle r^{2} \rangle\approx r_{{x}}^{2}\) is still less than \(r_{g }^{2}\), so
  $$\dot{g} \approx f_{g}(r_{g}) - \bigl\langle f_{g}(r) \bigr\rangle = r_{g }^{2} - \bigl\langle r^{2} \bigr\rangle > 0. $$
  Alternatively, if *g* is large, then \(\operatorname{var}(r)\) is large, so the second moment \(\langle r^{2} \rangle\) exceeds \(r_{g }^{2}\), whereas \(\langle r \rangle\) is still below \(r_{{x}}\); *ġ* is negative, and *ẋ* is positive.
- 3. *g* slowly seeks out the intermediate point between these extremes, where the variance of *r* makes up the difference between \(r_{{{x}} }\) and \(r_{g }\). As it does so, *x* changes in the opposite direction to keep the mean firing rate near \(r_{{{x}} }\).

In Fig. 2B, it is evident that when \(r_{g }< r_{{{x}} }\), no such equilibrium exists. In Fig. 2C, we show that if we exchange \(f_{{x}}\) with \(f_{g}\) and \(\tau_{{x}}\) with \(\tau_{g}\) such that *g* is linearly controlled and *x* is quadratically controlled, then an equilibrium exists for \(r_{g }< r_{{{x}} }\), but it is not stable. These observations suggest that certain general conditions must be met for there to exist a stable equilibrium in a dually homeostatic system. We explore these conditions in the next section.

Note that if only *x* were dynamic but not *g*, the firing rate variance \(\operatorname{var}(r)\) would be fixed at \(\frac{g^{2}\sigma ^{2}}{2\tau_{r}}\), and therefore the variance at equilibrium would be sensitive to changes in *σ*, the variance of the input current. If only *g* were dynamic but not *x*, the firing rate mean and variance could both change over the course of *g*-homeostasis, but the two would be inseparably linked: using the expressions above for firing rate mean and variance, we can see that no matter how *g* changed we would always have \(\langle r \rangle= \frac{\phi\sqrt{2\tau_{r} \operatorname{var}(r)}}{\sigma } + {{x}}\). Thus, the neuron could only maintain a characteristic firing rate mean and variance if they satisfied this constraint.

## 3 Analysis

Now we shall consider the general case in which two control variables *a* and *b* evolve according to arbitrary control functions \(f_{a}\) and \(f_{b}\) and control the distribution of a neuron’s firing rate *r*. We make the dependence of this distribution on *a* and *b* explicit by writing the distribution of *r* as \(P(r ; a,b)\). We address several questions to this model. First, what fixed points exist for a given control system, and what characterizes these fixed points? Second, under what circumstances are these fixed points stable?

In this section, we answer these questions under the simplifying assumption that \(f_{a}''(r)\) and \(f_{b}''(r)\) are constant on any domain where \(P(r ; a,b) > 0\). In Appendices 1 and 2, we show that our results persist qualitatively for nonconstant \(f_{a}''\) and \(f_{b}''\).

In Theorem 1, we write expressions for the firing rate mean \(\mu^{*}\) and variance \(\nu^{*}\) that characterize any fixed point \((a^{*}, b^{*})\). From this result we find that the difference between the two target firing rates plays a key role in establishing the characteristic variance at a control system fixed point.

In Theorem 2, we present a general condition that ensures that a fixed point \((a^{*}, b^{*})\) of the averaged control system is stable. This condition takes the form of a relationship between convexity of the control functions and the derivatives of the first and second moments of \(P(r;a,b)\) with respect to *a* and *b*.

### 3.1 Definitions

We consider a control system consisting of two slow variables *a* and *b* whose instantaneous rates of change are functions of a firing rate variable *r*:

$$\frac{\tau_{a}}{\epsilon} \dot{a} = f_{a}(r_{a}) - f_{a}(r), \qquad \frac{\tau_{b}}{\epsilon} \dot{b} = f_{b}(r_{b}) - f_{b}(r), \quad (6) $$

where \(r_{a}\) and \(r_{b}\) are target firing rates, and \(f_{a}\) and \(f_{b}\) are increasing, twice-differentiable control functions. We make the slowness of homeostasis explicit with a small factor *ϵ* representing the separation of the time scales of homeostasis and firing rate dynamics rather than assuming that \(\tau_{a}\) and \(\tau_{b}\) are large. This form is sufficiently general to encompass a wide range of different feedback strategies.

### Remark 1

In order to describe the evolution of a homeostatic variable *a* that acts multiplicatively and must remain positive (e.g., the synaptic scaling multiplier *g* used in many of our examples), we can instead set \(\frac{\tau_{a}}{\epsilon} \dot{a} = a (f_{a}(r_{a}) - f_{a}(r) )\). We can then put this system into the general form above by replacing *a* with \(\tilde{a}:=\log(a)\), whose evolution is described by the ODE \(\frac{\tau_{a}}{\epsilon}\dot{\tilde{a}} = f_{a}(r_{a}) - f_{a}(r)\).
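A quick numerical illustration of this remark (all values arbitrary): integrating \(\tau_{a} \dot{a} = a c\) with a constant drive \(c\) keeps *a* positive while \(\log(a)\) changes linearly at rate \(c/\tau_{a}\).

```python
import math

# Remark 1: if tau_a * a' = a * c with constant c = f_a(r_a) - f_a(r), then
# a stays positive and log(a) evolves linearly at rate c / tau_a.
tau_a, c, dt, T = 1.0, -0.5, 1e-4, 2.0

a = 1.0                                # initial condition, so log(a0) = 0
for _ in range(int(T / dt)):
    a += a * c * dt / tau_a            # forward-Euler step of tau_a a' = a c

slope = math.log(a) / T                # empirical slope of log(a) over [0, T]
print(a, slope)                        # a > 0; slope close to c / tau_a = -0.5
```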

We assume that, for fixed *a* and *b*, the firing rate \(r(t;a,b)\) (written as a function of time and control variables) is distribution-ergodic with stationary firing rate distribution \(P(r;a,b)\), that is, \(\lim_{T\rightarrow\infty} \frac{1}{T}\int_{0}^{T} f(r(t;a,b)) \, dt = \int_{\mathbb{R}} f(r) P(r;a,b) \,dr\) with probability 1 for all integrable functions *f*. For brevity of notation, we let \(\langle\cdot\rangle_{(a,b)} := \mathbb{E}(\cdot| a,b)\) denote the expected value of a function of *r* over the stationary distribution \(P(r;a,b)\) (or, equivalently, the time average of this function over time \(T\rightarrow\infty\)), given a control system state \((a,b)\). Let \(\mu{(a,b)}\) and \(\nu{(a,b)}\) denote the mean and variance of \(P(r;a,b)\), respectively.

Averaging the right-hand sides of (6) over the stationary distribution \(P(r;a,b)\) yields the averaged system

$$\frac{\tau_{a}}{\epsilon} \dot{a} = f_{a}(r_{a}) - \bigl\langle f_{a}(r) \bigr\rangle_{(a,b)} =: F_{a}(a,b), \qquad \frac{\tau_{b}}{\epsilon} \dot{b} = f_{b}(r_{b}) - \bigl\langle f_{b}(r) \bigr\rangle_{(a,b)} =: F_{b}(a,b). \quad (7) $$

We use the averaged equations (7) to study the behavior of the unaveraged system (6). Since *r* may be constantly fluctuating, *a* and *b* may continue to fluctuate even once a fixed point of the averaged system has been reached, so we cannot expect stable fixed points in the classical sense. Instead, we define a weaker form of stability.

We shall call a point \((a^{*}, b^{*})\) “stable in the small-*ϵ* limit” if there exists a continuous increasing function *α* with \(\alpha(0) = 0\) and an \(\epsilon^{*}>0\) such that for all \(0<\epsilon <\epsilon^{*}\), the \((a,b)\) trajectory initialized at \((a^{*}, b^{*})\) remains within a radius-\(\alpha(\epsilon)\) ball centered at \((a^{*}, b^{*})\) for all time with probability 1. Intuitively, a point is stable in the small-*ϵ* limit if trajectories become trapped in a ball around that point, and the ball is smaller when homeostasis is slower.

### Lemma 1

*Any exponentially stable fixed point of the averaged system* (7) *is a stable fixed point of the original system* (6) *in the small*-*ϵ*
*limit*.

### Proof

This follows from Theorem 10.5 in [15]. □

### 3.2 Main Results

#### 3.2.1 Fixed Points

Given a homeostatic control state \((a^{*}, b^{*})\), it is straightforward to find the target firing rates that make that state a fixed point in terms of the average values of \(f_{a}\) and \(f_{b}\). By setting \(\dot{a} = \dot{b} = 0\) in (7) we find that \(r_{a} = f_{a}^{-1} (\langle f_{a}(r)\rangle_{(a^{*},b^{*})} )\) and \(r_{b} = f_{b}^{-1} (\langle f_{b}(r)\rangle_{(a^{*},b^{*})} )\). (These expressions are well defined because \(f_{a}\) is increasing and hence invertible, and \(\langle f_{a}(r)\rangle_{(a^{*},b^{*})}\) must fall within the range of \(f_{a}\); likewise for *b*.)

Given a pair of target firing rates \(r_{a}\) and \(r_{b}\) and functions \(f_{a}\) and \(f_{b}\), we can ask what states \((a^{*}, b^{*})\) become fixed points of the averaged system. We shall answer this question in order to show that (1) when \(f_{a}''\) and \(f_{b}''\) are constant, the fixed points are exactly the points at which \(P(r;a,b)\) attains a certain characteristic mean and variance, (2) the relative convexities of the control functions determine whether \(r_{a}\) or \(r_{b}\) must be larger for fixed points to exist, and (3) fixed points with high firing rate variance are achieved by setting \(r_{b}\) far from \(r_{a}\).

### Theorem 1

*Consider a dual control system as described in Sect*. 3.1 *with target firing rates* \(r_{a}\) *and* \(r_{b}\) *and control functions* \(f_{a}\) *and* \(f_{b}\). *Let* \(K_{a} := \frac{f_{a}''(r_{a})}{f_{a}'(r_{a})}\), \(K_{b} := \frac {f_{b}''(r_{b})}{f_{b}'(r_{b})}\), *and* \(k := \frac {K_{a}+K_{b}}{K_{a}-K_{b}-K_{a}K_{b}(r_{b}-r_{a})}\). *We consider a domain of control system states* \((a,b)\) *on which each distribution* \(P(r; a, b)\) *has constant* \(f_{a}''(r)\) *and* \(f_{b}''(r)\) *on its support*. *The fixed points of the averaged control system in this domain are exactly the points* \((a^{*}, b^{*})\) *at which the mean* \(\mu(a^{*},b^{*})\) *is* \(\mu^{*}\) *and the variance* \(\nu(a^{*}, b^{*})\) *is* \(\nu^{*}\), *where we define*

$$\mu^{*} := \frac{r_{a}+r_{b}}{2} + k \frac{r_{b}-r_{a}}{2}, \qquad \nu^{*} := \frac{2 (r_{b}-r_{a} )}{K_{b}-K_{a}} + \frac{ (K_{a}(1+k)^{2} - K_{b}(1-k)^{2} ) (r_{b}-r_{a} )^{2}}{4 (K_{b}-K_{a} )}. $$

We will henceforth call \(\mu^{*}\) and \(\nu^{*}\) the “characteristic” mean and variance of any neuron regulated by this control system.
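Because a control function with constant second derivative is quadratic, \(\langle f(r)\rangle = f(\mu) + \frac{f''}{2}\nu\) exactly for any distribution with mean *μ* and variance *ν*, so Theorem 1 can be spot-checked numerically. This is a minimal sketch: the quadratic control functions and the targets below are arbitrary illustrative choices.

```python
# Quadratic control functions f(r) = r + (beta/2) r^2 have constant second
# derivative beta. Check that the characteristic mean and variance of
# Theorem 1 zero both averaged fixed-point conditions f(r_target) = <f(r)>.
beta_a, beta_b = 0.05, 0.3
r_a, r_b = 1.0, 1.4

f_a = lambda r: r + 0.5 * beta_a * r**2
f_b = lambda r: r + 0.5 * beta_b * r**2
K_a = beta_a / (1 + beta_a * r_a)        # f_a''(r_a) / f_a'(r_a)
K_b = beta_b / (1 + beta_b * r_b)        # f_b''(r_b) / f_b'(r_b)

k = (K_a + K_b) / (K_a - K_b - K_a * K_b * (r_b - r_a))
mu = 0.5 * (r_a + r_b) + 0.5 * k * (r_b - r_a)
nu = (2 * (r_b - r_a) / (K_b - K_a)
      + (K_a * (1 + k)**2 - K_b * (1 - k)**2) * (r_b - r_a)**2
      / (4 * (K_b - K_a)))

# Fixed-point residuals: f(r_target) - <f(r)>, with <f(r)> = f(mu) + (beta/2) nu.
res_a = f_a(r_a) - (f_a(mu) + 0.5 * beta_a * nu)
res_b = f_b(r_b) - (f_b(mu) + 0.5 * beta_b * nu)
print(mu, nu, res_a, res_b)              # residuals vanish to machine precision
```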

### Remark 2

Note that this result is *a*/*b* symmetric. If *a* and *b* are swapped, then the signs of \(r_{b} - r_{a}\), \(K_{b} - K_{a}\), and *k* reverse; however, since these terms all occur in pairs, this reversal leaves the expressions for \(\mu^{*}\) and \(\nu^{*}\) unchanged.

### Remark 3

In Appendix 1, we show that this result persists in some sense for nonconstant \(f_{a}''\) and \(f_{b}''\). Specifically, if variation in \(f_{a}''\) and \(f_{b}''\) over the appropriate domain is small, the mean and variance at any fixed point are close to \(\mu^{*}\) and \(\nu^{*}\), and every point at which the mean is \(\mu^{*}\) and the variance is \(\nu^{*}\) is close to a fixed point.

### Proof

For brevity, we write \(\mu{(a,b)}\) as *μ* and \(\nu{(a,b)}\) as *ν*. Since \(f_{a}\) and \(f_{b}\) have constant second derivatives on the domain of interest, we can write

$$\bigl\langle f_{a}(r) \bigr\rangle_{(a,b)} = f_{a}(\mu) + \frac{f_{a}''}{2}\nu, \qquad \bigl\langle f_{b}(r) \bigr\rangle_{(a,b)} = f_{b}(\mu) + \frac{f_{b}''}{2}\nu. $$

Setting \(\dot{a} = \dot{b} = 0\) in the averaged system, a point \((a^{*}, b^{*})\) is a fixed point if and only if \(f_{a}(r_{a}) = f_{a}(\mu) + \frac{f_{a}''}{2}\nu\) and \(f_{b}(r_{b}) = f_{b}(\mu) + \frac{f_{b}''}{2}\nu\). Expanding \(f_{a}(r_{a})\) and \(f_{b}(r_{b})\) about *μ* (exactly, since the second derivatives are constant) and solving the resulting pair of equations for *μ* and *ν* yields \(\mu= \mu^{*}\) and \(\nu= \nu^{*}\). □

Given the parameters of the control system (including a pair of target firing rates), this theorem shows that achieving a specific firing rate mean and variance is necessary and sufficient for the time-averaged control system to reach a fixed point. If \(P(r;a,b)\) (the distribution of the firing rate as a function of *a* and *b*) changes, as it might as a result of changes in the statistics of neuronal input, then the new fixed points will be the new points at which this firing rate mean and variance are achieved. Conversely, given a desirable firing rate mean and variance, we could tune the parameters of the control system to make these the characteristic mean and variance of the neuron at control system fixed points.

Whether any fixed point \((a^{*}, b^{*})\) actually exists depends on whether the characteristic firing rate mean and variance demanded by Theorem 1 can be achieved by the neuron, that is, fall within the range of \(\mu (a^{*},b^{*})\) and \(\nu(a^{*}, b^{*})\). If the mapping from \((a, b)\) to \((\mu, \nu)\) is not degenerate, then there exists a nondegenerate (two-parameter) set of reachable values of *μ* and *ν* for which control system fixed points exist. In the degenerate case that neither *μ* nor *ν* depends on *b*, the set of reachable values of *μ* and *ν* is a degenerate one-parameter family in a two-dimensional space. This corresponds to the case of a single-mechanism control system. In this case, a control system possesses a fixed point with a given firing rate mean and variance only if they are chosen in a particular relationship to each other. A perturbation to neuronal parameters would displace this one-parameter family in the \((\mu, \nu)\)-space, likely making the preperturbation firing rate mean and variance unrecoverable.

We now prove a corollary giving a simpler form of Theorem 1, which holds if \(r_{b} - r_{a}\) is sufficiently small.

### Corollary 1

*Given* \(K_{a}\) *and* \(K_{b}\), *let* \(K = \max( \lvert K_{a} \rvert, \lvert K_{b} \rvert )\). *If* \(r_{a}\) *and* \(r_{b}\) *are chosen such that* \(K \lvert r_{b}-r_{a} \rvert\) *and* \(\frac{K^{2}}{ \lvert K_{b} - K_{a} \rvert} \lvert r_{b} - r_{a} \rvert\) *are sufficiently small*, *then the characteristic mean and variance given in Theorem* 1 *are arbitrarily well approximated by*

$$\mu^{*} \approx\frac{r_{a}+r_{b}}{2} + \frac{K_{a}+K_{b}}{K_{a}-K_{b}} \cdot\frac{r_{b}-r_{a}}{2}, \qquad \nu^{*} \approx\frac{2 (r_{b}-r_{a} )}{K_{b}-K_{a}}. $$

### Proof

With *k* defined in Theorem 1, we can write

$$k = \frac{K_{a}+K_{b}}{K_{a}-K_{b}} \cdot\frac{1}{1 - \frac{K_{a}K_{b}}{K_{a}-K_{b}}(r_{b}-r_{a})}. $$

If \(\frac{K^{2}}{ \lvert K_{b}-K_{a} \rvert} \lvert r_{b}-r_{a} \rvert\) is sufficiently small, then the second factor is arbitrarily close to 1, so *k* is arbitrarily close to \(\frac {K_{a}+K_{b}}{K_{a}-K_{b}}\). This gives us an approximation of \(\mu^{*}\). We can also use it to write

$$\nu^{*} = \frac{2 (r_{b}-r_{a} )}{K_{b}-K_{a}} \biggl(1 + \frac{ (K_{a}(1+k)^{2} - K_{b}(1-k)^{2} ) (r_{b}-r_{a} )}{8} \biggr), $$

and under the stated conditions the correction factor in parentheses is arbitrarily close to 1. □

The range of \(r_{b} - r_{a}\) for which this result holds is determined by \(K_{a}\) and \(K_{b}\), measures of the convexities of the control functions. Informally, we say that this corollary holds if \(r_{b} - r_{a}\) is “small on the scale of the convexity of the control functions.”

These approximations make two features of the control system evident:

- 1. Since a negative firing rate variance can never be achieved by the control system, there can only be a fixed point if \(r_{b} - r_{a}\) and \(K_{b} - K_{a}\) take the same sign.
- 2. Increasing \(\lvert r_{b} - r_{a} \rvert\) causes a proportionate increase in the control system's characteristic firing rate variance.
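The quality of these approximations can be checked against the exact fixed point, obtained here without any closed form by solving the two fixed-point conditions \(f(r_{\text{target}}) = f(\mu) + \frac{f''}{2}\nu\) directly. The quadratic control functions and closely spaced targets below are illustrative choices.

```python
# Each fixed-point condition gives nu as a function of mu; their difference
# has a root at the exact fixed-point mean, found here by bisection.
beta_a, beta_b = 0.1, 0.4
r_a, r_b = 1.0, 1.05                     # targets close together (small r_b - r_a)

f_a = lambda r: r + 0.5 * beta_a * r**2  # f'' = beta_a (constant)
f_b = lambda r: r + 0.5 * beta_b * r**2
nu_a = lambda mu: 2 * (f_a(r_a) - f_a(mu)) / beta_a
nu_b = lambda mu: 2 * (f_b(r_b) - f_b(mu)) / beta_b
h = lambda mu: nu_a(mu) - nu_b(mu)

lo, hi = 0.5, 1.5                        # h(lo) > 0 > h(hi): brackets the root
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid
mu_exact = 0.5 * (lo + hi)
nu_exact = nu_a(mu_exact)

# Corollary 1 approximations:
K_a = beta_a / (1 + beta_a * r_a)
K_b = beta_b / (1 + beta_b * r_b)
mu_approx = 0.5 * (r_a + r_b) + 0.5 * (K_a + K_b) / (K_a - K_b) * (r_b - r_a)
nu_approx = 2 * (r_b - r_a) / (K_b - K_a)
print(mu_exact, mu_approx, nu_exact, nu_approx)
```

For these parameters the approximate mean agrees with the exact mean to well under one percent, and the approximate variance to within a few percent.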

#### 3.2.2 Fixed Point Stability

Next, we address the question of whether a fixed point of the averaged control system is stable. We again make the simplifying assumption that \(f''_{a}\) and \(f''_{b}\) are constant and then drop this assumption in Appendix 2.

### Theorem 2

*Let* \((a^{*}, b^{*})\) *denote a fixed point of the averaged control system described above*. *We assume the following*:

- 1. *The functions* *μ* *and* *ν* *are differentiable at* \((a^{*}, b^{*})\).
- 2. \(\frac{\partial F_{a}}{\partial a}\) *and* \(\frac{\partial F_{b}}{\partial b}\) *are negative at* \((a^{*}, b^{*})\), *that is*, *on average*, *a* *and* *b* *provide negative feedback to* *r* *near* \((a^{*}, b^{*})\).
- 3. *For* \((a,b)\) *in a neighborhood of* \((a^{*}, b^{*})\), \(f''_{a}\) *and* \(f''_{b}\) *are constant on any domain of* *r* *where* \(P(r; a, b)>0\).

*Let* \(\mu^{*} = \mu(a^{*}, b^{*})\) *and* \(\nu^{*} = \nu(a^{*}, b^{*})\) *denote the firing rate mean and variance at this fixed point*. *Below*, *all derivatives of* *μ* *and* *ν* *with respect to* *a* *and* *b* *are evaluated at* \((a^{*}, b^{*})\).

*The fixed point* \((a^{*}, b^{*})\) *of the averaged system is stable if*

$$\biggl(\frac{f_{b}''(\mu^{*})}{f_{b}'(\mu^{*})} - \frac{f_{a}''(\mu^{*})}{f_{a}'(\mu^{*})} \biggr) \biggl(\frac{\partial\mu}{\partial a} \frac{\partial\nu}{\partial b} - \frac{\partial\mu}{\partial b} \frac{\partial\nu}{\partial a} \biggr) > 0. \quad (10) $$

### Remark 4

Note that this result is *a*/*b* symmetric: if *a* and *b* are swapped, then the signs of both terms reverse and these sign changes cancel, leaving the stability condition unchanged.

### Proof

For brevity, we write \(\mu{(a,b)}\) as *μ* and \(\nu{(a,b)}\) as *ν*. The fixed point is stable if the Jacobian *J* of the averaged system at \((a^{*}, b^{*})\) has negative trace and positive determinant; assumption 2 guarantees that the trace is negative. In order to write useful expressions for the terms in \(\operatorname {det}(J)\), we Taylor-expand \(f_{a}(r)\) about *μ* out to second order, writing \(f_{a}(r) = f_{a}(\mu) + f'_{a}(\mu)(r - \mu) + \int_{\mu}^{r} (r-s) f_{a}''(s) \,ds\). We similarly expand \(f_{b}(r)\) and average these expressions at \((a,b)\) to rewrite the averaged control equations:

$$\frac{\tau_{a}}{\epsilon} \dot{a} = f_{a}(r_{a}) - f_{a}(\mu) - \frac{f_{a}''}{2}\nu, \qquad \frac{\tau_{b}}{\epsilon} \dot{b} = f_{b}(r_{b}) - f_{b}(\mu) - \frac{f_{b}''}{2}\nu. $$

Differentiating the right-hand sides with respect to *a* and *b*, taking the determinant of *J*, and canceling like terms, we have

$$\operatorname{det}(J) \propto f_{a}'(\mu) f_{b}'(\mu) \biggl(\frac{f_{b}''}{f_{b}'(\mu)} - \frac{f_{a}''}{f_{a}'(\mu)} \biggr) \biggl(\frac{\partial\mu}{\partial a} \frac{\partial\nu}{\partial b} - \frac{\partial\mu}{\partial b} \frac{\partial\nu}{\partial a} \biggr), $$

with a positive constant of proportionality. Since \(f_{a}'\) and \(f_{b}'\) are positive, \(\operatorname{det}(J) > 0\) exactly when condition (10) holds. □

### Remark 5

In Appendix 2, we drop the assumption that \(f''_{a}\) and \(f''_{b}\) are constant over the support of the distribution of *r* and derive a sufficient condition for stability of the form

$$\biggl(\frac{f_{b}''(\mu^{*})}{f_{b}'(\mu^{*})} - \frac{f_{a}''(\mu^{*})}{f_{a}'(\mu^{*})} \biggr) \biggl(\frac{\partial\mu}{\partial a} \frac{\partial\nu}{\partial b} - \frac{\partial\mu}{\partial b} \frac{\partial\nu}{\partial a} \biggr) > \Delta, $$

where the lower bound *Δ* is close to zero if \(f''_{a}\) and \(f''_{b}\) are nearly constant over most of the distribution of *r*.

### Remark 6

A similar result to Theorem 2 could be proven for a system with a single slow homeostatic feedback. The limitation of such a system would be in reachability. As the single homeostatic variable *a* changed, the firing rate mean *μ* and variance *ν* could reach only a one-parameter family of values in the \((\mu, \nu)\)-space. Thus, most mean/variance pairs would be unreachable. A perturbation to neuronal parameters would displace this one-parameter family in the \((\mu, \nu )\)-space, likely making the original mean/variance pair unreachable. Thus, a single homeostatic feedback could only succeed in recovering its original firing rate mean and variance after perturbation in special cases.

By Lemma 1, fixed points of the averaged system that satisfy the criteria for stability under Theorem 2 are stable in the small-*ϵ* limit for the full, un-averaged control system.

## 4 Further Single-Cell Examples

We will focus on examples in which the two homeostatic variables are excitability *x* and synaptic strength *g*, respectively, as discussed before.

The generality of the main results above (which require that \(f''_{a}\) and \(f''_{b}\) be constant) allows us to investigate a range of different models of firing rate dynamics even if we do not have an explicit expression for the rate distribution *P*. We only need to know the dependence of the firing rate mean *μ* and variance *ν* on the control variables to use Theorem 1 to identify control system fixed points and to use Theorem 2 to determine whether those fixed points are stable.
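As a numerical sketch of this workflow, the helper below evaluates the Theorem 2 product condition from the partial derivatives of *μ* and *ν* and the ratios \(f''/f'\). The mean/variance forms \(\mu = g\phi + {{x}}\) and \(\nu = \frac{g^{2}C}{2\tau_{r}}\) follow the rate-model example discussed below (with no intrinsic noise); the function name `stable` and all parameter values are illustrative assumptions, not from the text.

```python
# Sketch: evaluating the Theorem 2 stability condition numerically.
# Variable 'a' plays the role of excitability x, 'b' the role of synaptic
# strength g; the product form follows the synaptic/intrinsic examples.

def stable(d_mu, d_nu, ratio):
    """d_mu, d_nu: partials of mean/variance w.r.t. 'a' and 'b';
    ratio: f''/f' evaluated at mu* for each control variable."""
    determinant_like = d_nu['b'] * d_mu['a'] - d_nu['a'] * d_mu['b']
    convexity_gap = ratio['b'] - ratio['a']
    return convexity_gap * determinant_like > 0

# Rate-model example with a = x, b = g (illustrative values):
phi, C, tau_r, g_star = 2.0, 1.0, 1.0, 0.5
d_mu = {'a': 1.0, 'b': phi}                 # dmu/dx = 1, dmu/dg = phi
d_nu = {'a': 0.0, 'b': g_star * C / tau_r}  # x does not affect the variance
ratio = {'a': 0.0, 'b': 0.5}                # f''/f' gap is positive

print(stable(d_mu, d_nu, ratio))  # True
```

Swapping the labels `'a'` and `'b'` flips the sign of both factors, so the verdict is unchanged, which mirrors the *a*/*b* symmetry noted in Remark 4.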

In the more general case addressed in Appendix 2, where the second derivatives are not assumed to be constant, the left side of (10) must be positive *and sufficiently large* to guarantee the existence of a stable fixed point, where the lower bound *Δ* for “sufficiently large” is close to zero if \(f''_{a}\) and \(f''_{b}\) are nearly constant over most of the distribution \(P(r;a,b)\). We will further discuss the simpler case, but all of our analysis can be applied to the more general case by replacing “positive” with “sufficiently large.”

### Returning to Example 1

We return to the rate-model neuron of Example 1, driven by the stationary synaptic input \(I{(t)}\) at time *t*. We assume that the firing rate *r* is ergodic. Let \(\mu^{*}\) and \(\nu^{*}\) denote the characteristic firing rate mean and variance determined from the control system parameters using Theorem 1.

We can test the limits of the time-scale separation by increasing *ϵ* and by giving \(I{(t)}\) correlations on long time scales, respectively. In both these cases, trajectories fluctuate widely about the fixed point of the averaged system but remain within a bounded neighborhood, consistent with the idea of stability in the small-*ϵ* limit.

The slopes \(f'_{g }\) and \(f'_{{{x}} }\) can be understood as measures of the strength of the homeostatic response to deviations from the target firing rate, and the second derivatives \(f''_{g }\) and \(f''_{{{x}} }\) can be understood as measures of the asymmetry of this response for upward and downward rate deflections. If *x* and *g* are rescaled to set \(f'_{g }\approx f'_{{{x}} }\), then Theorem 2 predicts that dual homeostasis stabilizes a fixed point with a given characteristic mean and variance if \(f''_{g }(\mu^{*})>f''_{{{x}} }(\mu^{*})\), that is, if the (signed) difference between the effects of positive rate deflections and negative rate deflections is greater for the synaptic mechanism than for the intrinsic mechanism.
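This reading of \(f''\) as an asymmetry measure can be made concrete with a hypothetical quadratic control function (the target \(r_{t}\) and curvature \(c\) below are illustrative, not from the text): symmetric rate deflections \(\pm d\) about the target produce a net drift proportional to the curvature.

```python
# Illustration: for f(r) = (r_t - r) + (c/2)(r - r_t)^2, the linear parts of
# symmetric deflections +/- d cancel, leaving a net drift of c * d^2, so the
# second derivative c measures the up/down asymmetry of the response.

def f(r, r_t=5.0, c=0.2):
    return (r_t - r) + 0.5 * c * (r - r_t) ** 2

d = 1.0
asymmetry = f(5.0 + d) + f(5.0 - d) - 2.0 * f(5.0)
print(asymmetry)  # c * d**2 = 0.2 (up to floating-point rounding)
```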

### Example 2

Intrinsic noise

We now add an intrinsic white noise term to the firing rate dynamics of Example 1, where *η* sets the magnitude of the noise. The same calculations as before show that the conditions for stability under Theorem 2 are met at any fixed point if \(\frac{f_{g }''(\mu^{*})}{f_{g }'(\mu^{*})} -\frac {f_{{{x}} }''(\mu^{*})}{f_{{{x}} }'(\mu^{*})}>0\). But now the firing rate variance includes the noise variance: \(\nu{({{x}}, g)} = \frac{g^{2} C}{2\tau_{r}}+\frac{\eta^{2}}{2\tau_{r}}\). Under Theorem 1, a fixed point only exists if the control system parameters are chosen to establish a characteristic variance \(\nu^{*} > \frac{\eta^{2}}{2\tau_{r}}\). This neuron cannot be stabilized with variance less than \(\frac{\eta ^{2}}{2\tau _{r}}\) because a variance that low cannot be achieved by the inherently noisy neuron.

Homeostasis fails when the targets specify an unreachable pair (no fixed point exists in the achievable ranges of *μ* and *ν*), and in particular when \(\nu^{*} < \frac {\eta ^{2}}{2\tau_{r}}\) (the necessary variance is not in the range of *ν*).

### Example 3

Poisson-spiking neuron with calcium-like firing rate sensor

In some biological neurons, firing rate controls homeostasis via intracellular calcium concentration [4]. Intracellular calcium increases at each spike and decays slowly between spikes, and it activates signaling pathways that cause homeostatic changes. Our dual homeostasis framework is general enough to describe such a system. We let *ρ* represent the concentration of some correlate of firing rate, such as intracellular calcium, and use it in place of firing rate *r*. We model neuronal spiking as a Poisson process with rate \(\lambda{(t)} = gI{(t)} + {{x}}\), where \(I{(t)}\) is a stationary synaptic input. We let *ρ* increase instantaneously by *δ* at each spike and decay exponentially with time constant \(\tau _{d}\) between spikes. We assume that *ρ* is ergodic.

We show in Appendix 4 that, after sufficient time, \(y=\rho\) assumes a stationary distribution with mean \(\mu{({{x}}, g)} = \delta\tau _{d}(g\phi + {{x}})\) and variance \(\nu{({{x}}, g)} = \delta^{2}\tau_{d} (Cg^{2} + \frac{g\phi+{{x}}}{2} )\), where *ϕ* is the stationary mean of \(I{(t)}\), and *C* is a positive constant determined by the stationary autocovariance of \(I{(t)}\). Thus, we calculate \(\frac{\partial\nu }{\partial{{x}}} = \frac{\delta^{2}\tau_{d}}{2}\), \(\frac{\partial\mu }{\partial g} = \delta\tau_{d}\phi\), \(\frac{\partial\nu}{\partial g} = 2\delta^{2}\tau_{d}Cg^{*} - \frac{\delta^{2}\tau_{d}\phi}{2}\), and \(\frac {\partial\mu}{\partial{{x}}} = \delta\tau_{d}\), and we find that \(\frac {\partial\nu}{\partial g}\frac{\partial\mu}{\partial{{x}}}-\frac {\partial\nu}{\partial{{x}}}\frac{\partial\mu}{\partial g} = 2\delta ^{3}\tau_{d}^{2}Cg^{*} > 0\). As in Examples 1 and 2, we conclude that the conditions for stability under Theorem 2 are met if \(\frac {f_{g }''(\mu ^{*})}{f_{g }'(\mu^{*})} - \frac{f_{{{x}} }''(\mu^{*})}{f_{{{x}} }'(\mu^{*})}>0\).
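The sensor statistics can be verified by simulation in the simplified case of a constant spike rate (replacing \(I{(t)}\) by its mean *ϕ*, which drops the \(Cg^{2}\) covariance term): by Campbell's theorem, the stationary mean is \(\delta\tau_{d}\lambda\) and the variance is \(\delta^{2}\tau_{d}\lambda/2\). The discretization and parameter values below are illustrative.

```python
import numpy as np

# Shot-noise sensor rho: a jump of size delta at each Poisson spike
# (rate lambda per unit time) and exponential decay with time constant
# tau_d between spikes, simulated on a fine time grid.
rng = np.random.default_rng(1)
lam, delta, tau_d, dt, n = 20.0, 0.1, 1.0, 1e-3, 1_000_000

decay = np.exp(-dt / tau_d)
rho = np.empty(n)
rho[0] = delta * tau_d * lam              # start near the expected mean
spikes = rng.random(n - 1) < lam * dt     # Bernoulli approximation per bin
for i in range(n - 1):
    rho[i + 1] = rho[i] * decay + delta * spikes[i]

burn = n // 10
print(np.mean(rho[burn:]))  # near delta*tau_d*lam = 2.0
print(np.var(rho[burn:]))   # near delta^2*tau_d*lam/2 = 0.1
```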

Note that the conditions for stability in this model are the same as the conditions in the firing rate models. In [8], we show the same result empirically for biophysically detailed model neurons. What all these models have in common is that changes in *g* significantly affect the firing rate variance in the same direction, whereas *x* controls mainly the firing rate mean and has little or no effect on the variance. These results suggest that \(\frac{f_{g }''(\mu^{*})}{f_{g }'(\mu^{*})} - \frac{f_{{{x}} }''(\mu^{*})}{f_{{{x}} }'(\mu ^{*})}>0\) is a general, model-independent condition for stability of synaptic/intrinsic dual homeostasis. In Appendix 2, where control function second derivatives are not assumed to be constant, this condition is replaced by the condition of sufficiently large \(\frac {f_{g }''(\mu^{*})}{f_{g }'(\mu^{*})} - \frac{f_{{{x}} }''(\mu^{*})}{f_{{{x}} }'(\mu^{*})}\).

As in Example 2, not all mean/variance pairs can be achieved by the control system: no matter how small *g* is, we still have \(\nu{({{x}}, g)} \ge\delta^{2}\tau_{d}\frac{g\phi+ {{x}}}{2} = \frac{\delta\mu{({{x}}, g)}}{2}\) due to the inherently noisy nature of Poisson spiking, which acts as a restriction on the range of *ν*. We also must have \(r>0\), so the range of *μ* is constrained to \(\mu{({{x}}, g)}>0\). If \(r_{{x}}\) and \(r_{g}\) are chosen such that the characteristic firing rate mean \(\mu^{*}\) and variance \(\nu^{*}\) defined in Theorem 1 obey these inequalities, then there exists a control system state \(({{x}}^{*}, g^{*})\) at which Theorem 1 is satisfied and which is therefore a fixed point.

## 5 Recurrent Networks and Integration

A recurrent excitatory network has been shown to operate as an integrator when neuronal excitability and connection strength are appropriately tuned [9, 10]. Such a network can maintain a range of different firing rates indefinitely by providing excitatory feedback that perfectly counteracts the natural decay of the population firing rate. When input causes the network to transition from one level of activity to another, the firing rate of the network represents the cumulative effect of this input over history. Thus, the input is “integrated.”

Below, we show that the parameter values that make such a network an integrator can be stably maintained through dual homeostasis as described before. Importantly, we also show that an integrator network made stable by dual homeostasis is robust to variations in control system parameters and (as in the previous examples) unaffected by changes in input mean and variance. In this section, we build intuition for this phenomenon by investigating a simple example network consisting of one self-excitatory firing rate unit, which may be taken to represent the activity of a homogeneous recurrent network. In Appendix 5, we perform similar analysis for *N* rate-model neurons with heterogeneous parameters. In this case, we do not prove stability, but we do demonstrate that if any neuron’s characteristic variance is sufficiently high, then the network is arbitrarily close to an integrator at any fixed point of the control system.

Here, *η* is the level of intrinsic noise, \(I{(t)}\) is a second-order stationary synaptic input with mean *ϕ* and autocovariance \(R(w)\), and \(\xi{(t)}\) is a white noise process with unit variance. (For simplicity, we have rescaled time to set the time constant \(\tau_{r}\) to 1.) Let \(m{(t)}\) denote the expected value of *r* at time *t*. Taking the expected values of both sides of the rate equation yields an equation for \(\dot{m}\). Let *μ* denote the expected value of *r* once it has reached a stationary distribution; setting \(\dot{m} = 0\), we can solve for *μ*.

Let \(s{(t)}\) denote the deviation of *r* from *m* at time *t*: \(s{(t)} := r{(t)} - m{(t)}\). From the equations above, *s* obeys a mean-reverting stochastic equation with reversion rate \(1-g\). At \(g = 1\), the *s*-dependence drops out of the right side, and *s* acts as a perfect integrator of its noise and its input fluctuations, that is, as a noisy integrator. For *g* close to 1, the mean-reversion tendency of *s* is weak, so on short time scales, *s* acts like a noisy integrator.
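The short-time-scale integration can be illustrated directly. This sketch assumes deviation dynamics of the form \(\dot{s} = -(1-g)s + \text{input}\) (time rescaled so \(\tau_{r} = 1\)) and compares a near-integrator to a leaky unit; the pulse shape, gains, and durations are illustrative.

```python
import numpy as np

# A brief input pulse should be held by a near-integrator (g close to 1)
# but forgotten by a leaky unit, since the leak rate is (1 - g).
def run(g, t_end=20.0, dt=1e-3):
    s, trace = 0.0, []
    for i in range(int(t_end / dt)):
        pulse = 1.0 if i * dt < 1.0 else 0.0   # unit input for 1 time unit
        s += (-(1.0 - g) * s + pulse) * dt     # forward-Euler update
        trace.append(s)
    return np.array(trace)

near = run(0.999)   # holds a value close to 1.0 long after the pulse
leaky = run(0.7)    # decays back toward 0 after the pulse
print(near[-1], leaky[-1])
```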

Next, we compute the stationary variance of *r*. From (18) we obtain an expression for the growth of \(\langle s^{2}\rangle\), where *C* is a positive constant depending on \(\tau_{r}\) and \(R(w)\) as in the previous section. Let \(\nu:= \lim_{t\rightarrow\infty} \operatorname{var}(r{(t)}) = \lim_{t\rightarrow\infty} \langle s({t})^{2}\rangle\) denote the expected variance of *r* when it has reached a stationary distribution. At this stationary distribution, we have \(\langle s{(t +dt)}^{2}\rangle= \langle s{(t)}^{2}\rangle\), which yields a relationship between *g* and *ν*; this relationship is plotted in Fig. 6. The right side of this equation is \(\frac{\eta^{2}}{2}\) at \(g=0\) and increases with *g* until it asymptotes to infinity at \(g = 1\). So, given \(\nu^{*} > \frac{\eta^{2}}{2}\), there exists exactly one \(g^{*}\) at which a firing rate variance of \(\nu^{*}\) is achieved. The larger the characteristic variance, the closer \(g^{*}\) will be to unity. As discussed before, the firing rate is a good integrator on short time scales if *g* is close to unity. So, given a sufficiently large characteristic variance \(\nu^{*}\), the system's only potentially stable state in the range \(0< g^{*}<1\) will allow it to act as a good integrator on short time scales. The larger \(\nu^{*}\), the more widely it can be varied while still remaining large enough to keep \(g^{*}\) close to unity. So, if target firing rates are chosen to make the characteristic variance \(\nu^{*}\) sufficiently large, then an integrator achieved in this way is robust to variation in characteristic variance \(\nu^{*}\) (and unaffected by variation in \(\mu^{*}\)).

In short, if \(\frac{f_{g }''(\mu^{*})}{f_{g }'(\mu^{*})} - \frac{f_{{{x}} }''(\mu^{*})}{f_{{{x}} }'(\mu^{*})}>0\) and target rates are chosen to create a sufficiently large characteristic variance \(\nu^{*}\), then dual homeostasis of intrinsic excitability and synaptic strength stabilizes a recurrent excitatory network in a state such that the network mean firing rate acts as a near-perfect integrator of inputs shared by the population. The corollary to Theorem 1 tells us that, to first approximation, the characteristic variance is proportional to the difference between the target rates, so a large characteristic variance is achieved by setting the homeostatic target rates far apart from each other. The integration behavior created in this way is robust to variation in the characteristic mean and variance, and therefore robust to the choice of target firing rates.

This effect can be intuitively understood by noting that as a network gets closer to being a perfect integrator (i.e., as *g* approaches 1), fluctuations in firing rate are reinforced by the resulting fluctuations in excitation. As a result, the tendency to revert to a mean rate grows weaker and the firing rate variance increases toward infinity. (In a perfect integrator, a perturbed firing rate never reverts to a mean, so the variance of the firing rate is effectively infinite.) Thus, the network can attain a large variance by tuning *g* to be sufficiently close to 1. If this large variance is the characteristic variance of the control system, then there is a fixed point of the dual control system at this value of *g*.
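Ignoring the shared-input fluctuations, the deviation *s* is an Ornstein-Uhlenbeck process with reversion rate \(1-g\) and stationary variance \(\frac{\eta^{2}}{2(1-g)}\), which makes the divergence explicit. The inversion for \(g^{*}\) below is a sketch under this noise-only assumption; function names and values are illustrative.

```python
# Noise-only approximation: stationary variance of the deviation s for
# feedback gain g, and the unique g* in (0, 1) realizing a target
# variance nu* > eta^2 / 2. As nu* grows, g* approaches 1.
def ou_var(g, eta=1.0):
    return eta**2 / (2.0 * (1.0 - g))

def g_star(nu_star, eta=1.0):
    return 1.0 - eta**2 / (2.0 * nu_star)

for nu in (1.0, 10.0, 100.0):
    print(nu, g_star(nu))  # 0.5, 0.95, 0.995
```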

In some sense, this behavior is an artifact of the model used—perfect integration is only possible if the feedback perfectly counters the decay of the firing rate over a range of different rates, which is possible in this model because rate increases linearly with feedback and feedback increases linearly with rate. However, such a balance is also achievable with nonlinear rate/feedback relationships if they are locally linear over the relevant range of firing rates. In particular, if the firing rate is a sigmoidal function of input and the eigenvalue of firing rate dynamics near a fixed point is near zero, the upper and lower rate limits act to control runaway firing rates while the system acts as an integrator in the neighborhood of the fixed point. In [8], we show that a recurrent network of biophysically detailed neurons with sigmoidal activation curves can be robustly tuned by dual homeostasis to act as an integrator.

In Appendix 5, we show that integration behavior also occurs at set points in networks of heterogeneous dually homeostatic neurons if one or more of them have a sufficiently large characteristic variance. If only one neuron’s characteristic variance is large, the afferent synapse strength to that neuron grows until that neuron gives itself enough feedback to act as a single-neuron integrator as described before. But if many characteristic variances are large, then all synapse strengths remain biophysically reasonable, and many neurons participate in integration, as might be expected in a true biological integrator network.

## 6 Discussion

This mathematical work is motivated by the observation that the mean firing rates of neurons are restored after chronic changes in input statistics and that this firing rate regulation is mediated by multiple slow biophysical changes [3, 5]. We explore the possibility that these changes represent the action of multiple independent slow negative feedback (“homeostatic”) mechanisms, each with its own constant “target firing rate” at which it reaches equilibrium. Specifically, we focus on a model in which the firing of an unspecified model neuron is regulated by two slow homeostatic feedbacks, which may correspond to afferent synapse strength and intrinsic excitability or any two other neuronal parameters.

In a previous work [8], we showed in numerical simulations that a pair of homeostatic mechanisms regulating a single biophysically detailed neuron can stably maintain a characteristic firing rate mean and variance for that neuron. Here, we have analytically derived mathematical conditions sufficient for any model neuron exhibiting such dual homeostasis to exhibit this behavior. Importantly, the homeostatic system reaches a fixed point when the firing rate mean and variance reach characteristic values determined by homeostasis parameters, so the mean and variance at equilibrium are independent of the details of the neuron model, including stimulus statistics. Thus, this effect can restore a characteristic firing rate mean after major changes in the neuron’s stimulus statistics, as has been observed in vivo, while at the same time restoring a characteristic firing rate variance.

In Theorem 1, we have provided expressions for the characteristic firing rate mean and variance established by a specific set of homeostatic parameters. They show that when the separation between the target rates \(r_{a}\) and \(r_{b}\) is appropriately small, the relative convexities of the functions \(f_{a}\) and \(f_{b}\) (by which the firing rate exerts its influence on the homeostatic variables) determine which target rate must be larger for a fixed point to exist. When a fixed point does exist, the characteristic firing rate variance at the fixed point is proportional to the difference between \(r_{b}\) and \(r_{a}\).

In Theorem 2, we find that any fixed point of our dual homeostatic control system is stable if a specific expression is positive. This expression reflects the mutual influences of firing rate on the homeostatic control system and of control system on the firing rate mean and variance.

Both these theorems are proven under the simplifying assumption that \(f''_{a}\) and \(f''_{b}\) are constant. However, in Appendices 1 and 2, we drop this assumption and find that qualitatively similar results hold as long as these second derivatives do not vary too widely. In particular, stability is guaranteed if the expression in Theorem 2 exceeds a certain positive bound that is close to zero if \(f''_{a}\) and \(f''_{b}\) are nearly constant across most of the range of variation of the firing rate.

We have explored the implications of our results for a system with slow homeostatic regulation of intrinsic neuronal “excitability” *x* (an additive horizontal shift in the firing rate curve) and afferent synapse strength *g*. From the corollary to Theorem 1 we find that (to first approximation) stable firing rate regulation requires \(r_{g }>r_{{{x}}}\). Using Theorem 2, we show that for rate-based neuron models and Poisson-spiking models, stable firing rate regulation is achieved when \(f_{g }\) is sufficiently concave-up relative to \(f_{{{x}}}\).

We predict that these conditions on relative concavity and relative target firing rates should be met by any neuron with independent intrinsic and synaptic mechanisms as its primary sources of homeostatic regulation. Experimental verification of these conditions would suggest that our analysis accurately describes the interaction of a pair of homeostatic mechanisms; experimental contradiction of these conditions would suggest that the control process regulating the neuron could not be accurately described by two independent homeostatic mechanisms providing simple negative feedback to firing rate.

Our results have special implications for neuronal integrators that maintain a steady firing rate through recurrent excitation. We have found that the precise additive and multiplicative tuning necessary to maintain the delicate balance of excitatory feedback can be performed by a dual control system within the framework we study here if the target firing rates are set far enough apart to achieve a large characteristic firing rate variance. The integrator maintained by dual homeostasis is robust to variation of its target firing rates. This occurs because the circuit is best able to achieve a large firing rate variance when it is tuned to eliminate any bias toward a particular firing rate, exactly the condition necessary for integration. This robust integrator tuning scheme should be considered in the ongoing experimental effort to understand processes of integration in the brain.

## Declarations

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## References

- Cannon WB. Physiological regulation of normal states: some tentative postulates concerning biological homeostasis. In: Langley LL, editor. Homeostasis: origins of the concept. Stroudsburg: Dowden, Hutchinson and Ross; 1973. Reprinted from 1926.
- O’Leary T, Wyllie DJA. Neuronal homeostasis: time for a change? J Physiol. 2011;589(20):4811–26.
- Desai NS. Homeostatic plasticity in the CNS: synaptic and intrinsic forms. J Physiol (Paris). 2004;97:391–402.
- Turrigiano GG. The self-tuning neuron: synaptic scaling of excitatory synapses. Cell. 2008;135(3):422–35.
- Maffei A, Turrigiano GG. Multiple modes of network homeostasis in visual cortical layer 2/3. J Neurosci. 2008;28(17):4377–84.
- Turrigiano G, Abbot LF, Marder E. Activity-dependent changes in the intrinsic properties of cultured neurons. Science. 1994;264(5161):974–7.
- Desai NS, Rutherford LC, Turrigiano GG. Plasticity in the intrinsic excitability of cortical pyramidal neurons. Nat Neurosci. 1999;2(6):515–20.
- Cannon J, Miller P. Synaptic and intrinsic homeostasis cooperate to optimize single neuron response properties and tune integrator circuits. J Neurophys. 2016;116(5):2004–22.
- Cannon SC, Robinson DA, Shamma S. A proposed neural network for the integrator of the oculomotor system. Biol Cybern. 1983;49(2):127–36.
- Seung HS, Lee DD, Reis BY, Tank DW. Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron. 2000;26:259–71.
- Major G, Baker R, Aksay E, Seung HS, Tank DW. Plasticity and tuning of the time course of analog persistent firing in a neural integrator. Proc Natl Acad Sci USA. 2004;101(20):7745–50.
- Lim S, Goldman MS. Balanced cortical microcircuitry for maintaining information in working memory. Nat Neurosci. 2013;16(9):1306–14.
- Goldman MS. Robust persistent neural activity in a model integrator with multiple hysteretic dendrites per neuron. Cereb Cortex. 2003;13(11):1185–95.
- Koulakov AA, Raghavachari S, Kepecs A, Lisman JE. Model for a robust neural integrator. Nat Neurosci. 2002;5(8):775–82.
- Khalil HK. Nonlinear systems. 3rd ed. New York: Prentice Hall; 2002.
- Çınlar E. Probability and stochastics. New York: Springer; 2011.
- Bassan B, Bona E. Moments of stochastic processes governed by Poisson random measures. Comment Math Univ Carol. 1990;31(2):337–43.