- Research
- Open Access

# Tailoring inputs to achieve maximal neuronal firing

*The Journal of Mathematical Neuroscience*
**volume 1**, Article number: 3 (2011)

## Abstract

We consider the constrained optimization of excitatory synaptic input patterns to maximize spike generation in leaky integrate-and-fire (LIF) and theta model neurons. In the case of discrete input kicks with a fixed total magnitude, optimal input timings and strengths are identified for each model using phase plane arguments. In both cases, optimal features relate to finding an input level at which the drop in input between successive spikes is minimized. A bounded minimizing level always exists in the theta model and may or may not exist in the LIF model, depending on parameter tuning. We also provide analytical formulas to estimate the number of spikes resulting from a given input train. In a second case of continuous inputs of fixed total magnitude, we analyze the tuning of an input shape parameter to maximize the number of spikes occurring in a fixed time interval. Results are obtained using numerical solution of a variational boundary value problem that we derive, as well as analysis, for the theta model and using a combination of simulation and analysis for the LIF model. In particular, consistent with the discrete case, the number of spikes in the theta model rises and then falls again as the input becomes more tightly peaked. Under a similar variation in the LIF case, we numerically show that the number of spikes increases monotonically up to some bound and we analytically constrain the times at which spikes can occur and estimate the bound on the number of spikes fired.

## 1 Introduction

A major component of theoretical neuroscience is the study of how various neuronal models respond to synaptic inputs. Indeed, chemical synaptic transmission offers a specific mechanism for the encoding of information that an organism senses from the external environment, filtered by the internal state of the organism. The functions performed by particular neurons and neuronal networks are in part determined by the nature of the inputs that they receive and are in part a result of the responses they generate to these inputs, due to their intrinsic properties. Thus, understanding neuronal input-output transformations represents a centrally important scientific goal.

Although the framework for incorporating synaptic inputs into computational models is well established, and the computational implications of such inputs have received significant attention, optimization problems involving synaptic inputs are not well represented in the literature. In this paper, we consider such a problem, namely what is the optimal way to tailor synaptic inputs, subject to certain constraints, to maximize the number of spikes that a neuron will fire?

In fact, we consider two variations on this problem, one based on maximizing the total number of spikes fired and one focused on maximal firing within a prescribed time interval. There are several reasons that maximizing numbers of spikes may be a biologically relevant neuronal goal. Since neurons operate under conditions in which efficient resource use could be evolutionarily advantageous, it could be useful if, subject to some constraint on the amount of input that is available, the synaptic input time course could be tailored to achieve the largest possible number of spikes. Certainly, there are brain areas, including areas of visual cortex and somatosensory cortex, where it appears that intensity of firing encodes stimulus information, with neurons showing maximal firing under optimally preferred stimulus conditions [1–3]. Similarly, a sufficiently high firing rate within a given time window may be needed to overcome inhibition or to outcompete activity of other neurons to influence a downstream readout neuron (for example, [4–6] and many more recent works). Which input time courses yield maximal spiking will depend on the intrinsic properties of a neuron or neuronal model, and characterizing optimal input time courses for models may provide information about which neuronal coding functions they are well suited to represent.

In an earlier paper, a calculus of variations approach was used to determine the input current of minimal amplitude that causes a neuron to spike at a specified target time [7]. That work considered a phase model for a spiking neuron, with the evolution of phase x\in [0,2\pi ) given by

where the impact of the input current I(t) is modulated by the phase sensitivity function Z(x). Similarly, earlier work analyzed optimal weak inputs to start or stop repetitive spiking in a general biological oscillator, with dynamics expressed in polar coordinates as

with *ϵ* small [8]. Particular biological systems with dynamics qualitatively equivalent to those of (1) were also considered as examples to illustrate a calculus of variations approach to this problem, involving numerical solution of a system of ordinary differential equations obtained through the introduction of Lagrange multipliers. To our knowledge, ours is the first work to focus on the optimization of inputs, subject to particular constraints, to maximize the number or rate of neuronal output spikes fired.

To initiate research in this direction, we consider input optimization in two well-known mathematical models for single neurons, the leaky integrate-and-fire (LIF) model and the theta model [9–11], the mathematical formulations of which are given in Section 2. Both of these are scalar models, which allows us to use certain analytical approaches, rather than relying entirely on numerics, and we tune both models to be silent in the absence of inputs. One approach that we follow is to treat synaptic inputs as events with discrete onset times, which yields a phase plane representation of the co-evolution of an intrinsic neuronal variable and a synaptic input variable, as introduced previously for the LIF and theta models [12]. This approach would apply equally well in parameter regimes for which the models are intrinsically oscillatory, rather than silent, but we focus on the richer silent case. The LIF model was also considered by Börgers and Kopell, who proved that (1) if a collection of phase-shifted, identical time-periodic inputs causes a spike in an LIF neuron, then the same inputs will also cause a spike if they are synchronized, and (2) if a constant input of size *A* causes a spike, then any time-periodic input with time average *A* will also cause a spike [13]. Both of these results show that the power of inputs to induce spikes in the LIF model increases as the input time course is made less uniform. Our results represent an extension of this idea, providing information about specific time courses that are optimal, not just for the generation of a single spike but for maximizing spike outputs. While the LIF model is a reasonable representation for a passive neuron, the theta model can be rigorously derived as a normal form for Type I spiking neurons [9–11], which feature a transition from silence to oscillations through a SNIC bifurcation [14].
Thus, consideration of the theta model allows us to explore how our results extend to a case with additional biological relevance. Interestingly, although both models can fire spikes at arbitrarily low frequencies, differences in their responses to inputs have been noted in previous work [12], which further motivates us to continue to compare them here.

In addition to discrete inputs, we also consider a continuous input formulated so that a particular parameter shapes the input time course and ask how that parameter should be tuned to achieve the maximal number of spikes within a given time window. The absence of a threshold and reset in the theta model is convenient for the application of variational techniques in this case. For the LIF model, we do not apply such techniques, but we nonetheless derive some results about the influence of the input shape parameter on the spike train that results from the input. We shall see that certain properties of each model observed in the discrete case carry over to the continuous case, yet there are some differences worth noting as well.

The remainder of the paper is organized as follows. In Section 2, we analyze the LIF model with discrete synaptic kicks, the cumulative sizes of which are constrained. We introduce phase plane structures that are useful for analysis of the resulting optimization problem and discuss some example strategies for controlling the timing of inputs before moving on to prove some results about these structures and optimal strategies. In particular, we show that two different scenarios are possible, depending on model parameters, and these yield different optimal input strategies. We also show how analytical estimates for the number of spikes fired can be derived for any given input time course. In Section 3, a somewhat similar analysis strategy is applied to the theta model with discrete synaptic kicks. We find that the phase plane structures for the theta model resemble one of the possible cases for the LIF model, with a corresponding similarity in optimal strategies. In Section 4, we turn to the analysis of continuous input for the theta model, seeking a maximal number of spikes in a fixed time window. We define the continuous input so that its shape can be modulated by a parameter yet its integral over all positive time is independent of the value of that parameter. Variational techniques yield a boundary value problem that we solve numerically to find locally and globally optimal parameter values. In Section 5, we consider the continuous input optimization problem for the LIF model, for which the discontinuous reset prevents application of the same variational techniques. We provide direct analysis showing that all spikes must be fired within a particular time interval and characterize the behavior of this interval as the input parameter is varied. Moreover, we prove that the number of spikes saturates as this parameter increases. Some similar analysis for the theta model, showing that spiking is lost as this parameter increases, appears in the Appendix. 
We conclude with a discussion in Section 6, where we summarize our results and discuss the novelty and relevance of our findings in the neuroscience setting.

## 2 Leaky integrate-and-fire (LIF) model with synaptic kicks

### Model

We consider the dynamics of an LIF model neuron with excitatory synaptic inputs as governed by the equations
{v}^{\prime}=I-v+g(E-v), \qquad (2)

{g}^{\prime}=-\beta g, \qquad (3)

together with the reset condition
v({t}^{+})={v}_{r}\phantom{\rule{1em}{0ex}}\mathit{whenever}\phantom{\rule{1em}{0ex}}v({t}^{-})={v}_{th}. \qquad (4)

Equation (2) can be derived from a conductance-based equation C{v}^{\prime}={I}_{\mathit{input}}-{\sum}_{j}{g}_{j}(V-{E}_{j})-{g}_{\mathit{syn}}(V-{E}_{\mathit{syn}}) with fixed intrinsic current conductances {g}_{j}, but we think of it as a nondimensionalized abstract model in which voltage intrinsically converges to a baseline *I* and *E* is the reversal potential of a synaptic input with strength g>0. We assume {v}_{r}<I<{v}_{th}, such that no spikes are fired in the absence of input, and E>{v}_{th}, and we consider the invariant half-plane \{g\ge 0\} within the (v,g) phase plane, where (I,0) is the unique stable critical point. Further, we represent the excitatory input by the equations
g({t}_{n}^{+})=g({t}_{n}^{-})+{k}_{n},\phantom{\rule{1em}{0ex}}0\le {t}_{1}<{t}_{2}<\cdots <{t}_{N}, \qquad (5)

{\sum}_{n=1}^{N}{k}_{n}=G, \qquad (6)

for {k}_{n}\in (0,G], n=1,2,\dots, with N\ge 1 finite and G>0 fixed in **R**. Equations (3) and (5) show that each input kick can be chosen to arrive at any time and instantaneously updates the value of *g* when it arrives and that the synaptic conductance *g* always decays exponentially between kicks. Equation (6) states that the sum of all inputs, however they may be divided, is always equal to a fixed input allowance *G*. In the subsequent subsections, we assume that *I*, *E*, {v}_{th}, {v}_{r}, and *β* are fixed and consider how to partition and time the input *G* to yield the greatest number of threshold crossings, or spikes. Without loss of generality, we take {v}_{r}=0. First, we discuss a phase plane representation of this problem and consider some example strategies.
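As a quick numerical illustration of the kick-and-decay setup, the following Euler sketch uses the LIF form {v}^{\prime}=I-v+g(E-v), {g}^{\prime}=-\beta g (consistent with the stated *v*-nullcline \{g=(v-I)/(E-v)\}), with reset at threshold and instantaneous conductance kicks. The helper name `simulate_lif` and all parameter values are ours, for illustration only, not taken from the paper.

```python
import math

def simulate_lif(kicks, I=0.9, E=2.0, beta=1.0, v_th=1.0, v_r=0.0,
                 T=30.0, dt=1e-3):
    """Euler simulation of the kicked LIF model, started from reset.

    Assumed dynamics (consistent with the stated v-nullcline):
        v' = I - v + g*(E - v),   g' = -beta*g,
    with reset v -> v_r at threshold and instantaneous kicks g -> g + k.
    `kicks` is a list of (time, size) pairs whose sizes sum to G.
    Returns the total number of threshold crossings (spikes)."""
    v, g, t, n = v_r, 0.0, 0.0, 0
    pending = sorted(kicks)
    i = 0
    while t < T:
        # apply any kicks scheduled at or before the current time
        while i < len(pending) and pending[i][0] <= t:
            g += pending[i][1]
            i += 1
        v += dt * (I - v + g * (E - v))
        g -= dt * beta * g
        if v >= v_th:          # threshold crossing: spike and reset
            n += 1
            v = v_r
        t += dt
    return n

G = 3.0
big = simulate_lif([(0.0, G)])                      # one "big kick"
split = simulate_lif([(0.0, G / 2), (5.0, G / 2)])  # same total, two kicks
```

Comparing `big` and `split` for different parameter sets gives a direct, if crude, way to test the strategies discussed in the next subsection.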

### Phase plane structures and basic strategies

We illustrate some key structures in the phase plane for system (2), (3) in Figure 1. The *v*-nullcline, based on equation (2), is the curve \{g=(v-I)/(E-v)\}. Denote the trajectory through the point {\Gamma}_{0}^{+}:=({v}_{th},{g}_{0}^{+}=({v}_{th}-I)/(E-{v}_{th})) on this curve by Γ_{0}, which is tangent to the line \{v={v}_{th}\}. Γ_{0} partitions the set \{(v,g):v<{v}_{th},g\ge 0\} into a set of initial conditions that yield spikes, namely those above Γ_{0}, and a set of initial conditions that do not yield spikes, namely those below Γ_{0}. Let {\Gamma}_{0}^{-} denote the intersection of Γ_{0} with \{v={v}_{r}\} and let {g}_{0}^{-} denote its *g* coordinate. The point {\Gamma}_{1}^{+}:=({v}_{th},{g}_{0}^{-}) is on the threshold line, and there is a trajectory Γ_{1} that flows through this point. The band between Γ_{0} and Γ_{1}, with {v}_{r}\le v\le {v}_{th}, consists of the set of initial conditions from which trajectories yield exactly one spike. Denote this band by {B}_{1}. Similarly, for each natural number *n*, we can define a curve {\Gamma}_{n}, such that each trajectory with an initial condition in the band {B}_{n} between {\Gamma}_{n-1} and {\Gamma}_{n} yields *n* spikes. Note that this band structure exists if parameters are altered such that I>{v}_{th}, although the minimal band is shifted to negative *g* values, and hence the methods that we discuss easily generalize to this case.

The band structure that characterizes sets of initial conditions that yield particular numbers of spikes calls to mind several natural strategies for doling out input kicks to maximize spike output:

#### Critical kicks

Once a trajectory is below Γ_{0}, it approaches (I,0) asymptotically. Let {\gamma}_{c} denote the *g*-coordinate of the intersection \{v=I\}\cap {\Gamma}_{0}. One possible strategy is to give an initial input of size {\gamma}_{c} as well as subsequent inputs of size {\gamma}_{c} each time the trajectory reaches a sufficiently small neighborhood of (I,0); see Figure 1. This *critical kicks* strategy yields *n* spikes where n{\gamma}_{c}\le G<(n+1){\gamma}_{c}.

#### Big kick

A possible disadvantage of the critical kicks strategy is that the increment {\gamma}_{c} to achieve a spike may exceed the width {\gamma}_{n} between bands {\Gamma}_{n} and {\Gamma}_{n-1} for n\ge 1. To ensure that this increment is only encountered once, taking inspiration from the power of synchronized inputs [12, 13], another reasonable strategy is to give a single *big kick* of size *G*, all at once, to the resting cell (Figure 1).

#### Reset and kick

An input of size {\gamma}_{c} is sufficient to push the voltage across the threshold for single spike initiation. In the big kick strategy, the additional available input G-{\gamma}_{c} is provided together with the {\gamma}_{c}. It is possible that this additional input could deliver more spikes if it were delivered separately from the initial {\gamma}_{c}. We can define a strategy, for example, in which an initial kick of size {\gamma}_{c} is given to elicit a spike. As soon as this spike is fired, the cell is reset and the remaining input allowance G-{\gamma}_{c} is given. Clearly, this *reset and kick* strategy would make sense if the bands {B}_{n} were narrowest at v=0.

#### Threshold kick

At the other extreme, the bands {B}_{n} might be narrowest at v={v}_{th}. In this case, a possible optimal strategy would be the *threshold kick* strategy, defined by giving the initial kick of size {\gamma}_{c} and following this with a kick of size G-{\gamma}_{c} just before threshold crossing occurs (Figure 1).

Intuitively, it is reasonable to think that if *β* is large, such that inputs rapidly decay, then it makes sense to dole out inputs in minimal pieces, such that something like the critical kicks strategy may be optimal. Alternatively, if *β* is small, such that inputs decay slowly, then it makes sense to make inputs available as early as possible, such that one of the other strategies is likely to be optimal. To analyze more carefully which strategy is optimal, it will be helpful to define a *band width*
{\delta}_{n}(v) as the distance from {\Gamma}_{n-1} to {\Gamma}_{n} in the *g*-direction for each fixed v\in [0,{v}_{th}]. With this definition in hand, we note that {\delta}_{n+1}({v}_{th})={\delta}_{n}({v}_{r}) for n\ge 1, and hence the reset and kick and threshold kick strategies are effectively the same strategy, yielding the same number of spikes (Figure 1). We also let {\delta}_{\infty}(v)={lim}_{n\to \infty}{\delta}_{n}(v), v\in [0,{v}_{th}], if this limit exists. Very roughly speaking, the critical kicks strategy will yield approximately G/{\gamma}_{c} spikes while the other strategies will induce about G/{\delta}_{\infty}(v) spikes for some *v*, at least if {\delta}_{n}(v) converges to {\delta}_{\infty}(v) quickly. Thus, comparison of {\gamma}_{c} and {\delta}_{\infty} can be used to give an initial suggestion of what strategy to follow.

The value of {\gamma}_{c} can be observed numerically by backwards integration from ({v}_{th},{g}_{0}^{+}) until v=I. Alternatively, it may be optimal to replace {\gamma}_{c} by the distance {\tilde{\gamma}}_{c}:={g}_{0}^{-}-{g}_{0}^{+}, as can be computed by backwards integration from ({v}_{th},{g}_{0}^{+}) up to ({v}_{r},{g}_{0}^{-}), and give kicks of size {\tilde{\gamma}}_{c} after each spike, after the initial kick of size {\gamma}_{c}; we will still refer to this as a critical kicks strategy.
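The backward-integration recipe for {\gamma}_{c} and {\tilde{\gamma}}_{c} described above can be sketched as follows. The dynamics are the assumed LIF form {v}^{\prime}=I-v+g(E-v), {g}^{\prime}=-\beta g (consistent with the stated *v*-nullcline), and the step size and parameter values are illustrative choices of ours.

```python
import math

def backward_gamma(I=0.9, E=2.0, beta=1.0, v_th=1.0, v_r=0.0, dt=1e-5):
    """Backward integration along Gamma_0 starting from (v_th, g0+).

    Returns (gamma_c, gamma_tilde_c): gamma_c is the g-coordinate of
    Gamma_0 at v = I (the critical kick size from rest), and
    gamma_tilde_c = g0- minus g0+ is the drop across one cycle."""
    g0_plus = (v_th - I) / (E - v_th)   # tangency point on the v-nullcline
    v, g = v_th, g0_plus
    gamma_c = None
    while v > v_r:
        # flow in reverse time: v decreases, g grows
        v -= dt * (I - v + g * (E - v))
        g += dt * beta * g
        if gamma_c is None and v <= I:
            gamma_c = g                 # Gamma_0 crossing of {v = I}
    g0_minus = g                        # Gamma_0 crossing of {v = v_r}
    return gamma_c, g0_minus - g0_plus

gamma_c, gamma_tilde_c = backward_gamma()
```

Both quantities can then be compared against {\delta}_{\infty} to decide between the critical kicks strategy and the alternatives.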

An approximate value of {\delta}_{\infty}(v) can be derived as follows. From (2), (3), the slope *s* of the vector field at any point in the phase plane is given by
s=\frac{dg}{dv}=\frac{-\beta g}{I-v+g(E-v)}. \qquad (7)

For v\in [{v}_{r},{v}_{th}],
{lim}_{g\to \infty}s(v,g)=\frac{-\beta}{E-v}.

The magnitude of the change in *g* over one spike cycle is
{\delta}_{\infty}={\int}_{{v}_{r}}^{{v}_{th}}\frac{\beta}{E-v}\phantom{\rule{0.2em}{0ex}}dv=\beta \phantom{\rule{0.2em}{0ex}}ln\frac{E-{v}_{r}}{E-{v}_{th}}. \qquad (8)

By construction, {\delta}_{\infty}({v}_{r})={\delta}_{\infty} as given by (8). But since the value of *g* at reset for the trajectory forming the upper bound of one band is the value of *g* at threshold for the trajectory forming its lower bound, and we have taken the limit as n\to \infty, we can also estimate {\delta}_{\infty}({v}_{th}) using (8) and indeed, using similar translation arguments, we estimate {\delta}_{\infty}(v)={\delta}_{\infty} for all v\in [{v}_{r},{v}_{th}].

Comparison of {\gamma}_{c} (or {\tilde{\gamma}}_{c}) and {\delta}_{\infty} suggests whether or not the critical kicks strategy will elicit more spikes than the other strategies we have described. If not, then we need additional arguments to assess the relative effectiveness of these alternative strategies. In fact, regarding alternative strategies, we have the following result:

**Proposition 1** *The big kick strategy always yields at least as many spikes as the reset and kick* (*and equivalently*, *the threshold kick*) *strategy*.

*Proof* The reset and kick strategy yields m+1 spikes, where *m* is the largest integer such that
{\sum}_{n=1}^{m}{\delta}_{n}({v}_{r})\le G-{\gamma}_{c}.

Using equation (7), compute
\frac{\partial s}{\partial g}=\frac{-\beta (I-v)}{{(I-v+g(E-v))}^{2}}. \qquad (9)

We can see from equation (9) that if v<I, then the slope *s* becomes more negative as *g* is increased, and if v>I, then the slope *s* becomes less negative as *g* is increased. Thus, the bands are narrowest at v=I; that is, {\delta}_{n}(I)<min\{{\delta}_{n}({v}_{r}),{\delta}_{n}({v}_{th})={\delta}_{n+1}({v}_{r})\}. Hence, the big kick strategy, which elicits {m}_{b}+1 spikes for the largest integer {m}_{b} such that
{\sum}_{n=1}^{{m}_{b}}{\delta}_{n}(I)\le G-{\gamma}_{c},

always generates at least as many spikes as reset and kick. □
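The sign argument in this proof can be checked numerically. With s=dg/dv=-\beta g/(I-v+g(E-v)), the quotient rule gives \partial s/\partial g=-\beta (I-v)/{(I-v+g(E-v))}^{2}, which is negative for v<I and positive for v>I. The sketch below compares this expression against a finite-difference derivative; parameter values are illustrative assumptions of ours.

```python
import math

def s(v, g, I=0.9, E=2.0, beta=1.0):
    """Trajectory slope dg/dv = g'/v' for v' = I - v + g*(E - v), g' = -beta*g."""
    return -beta * g / (I - v + g * (E - v))

def ds_dg(v, g, I=0.9, E=2.0, beta=1.0):
    """Partial derivative of s in g via the quotient rule:
    ds/dg = -beta*(I - v)/(I - v + g*(E - v))**2;
    negative for v < I, zero at v = I, positive for v > I."""
    return -beta * (I - v) / (I - v + g * (E - v)) ** 2

def ds_dg_fd(v, g, h=1e-7):
    """Central finite-difference check of ds_dg."""
    return (s(v, g + h) - s(v, g - h)) / (2 * h)
```

Since the slope steepens with *g* for v<I and flattens for v>I, the bands are pinched together at v=I, which is the content of Proposition 1.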

### Band width estimation

In the previous subsection, we introduced a small number of intuitively reasonable strategies for eliciting the maximum number of spikes from model (2), (3) using a constrained input. We also used a phase plane approach to define a natural band structure, along with a corresponding idea of a band width {\delta}_{n}(v), which we used to show that two of these, the reset and kick and threshold kick strategies, will never be optimal. This structure can also be used to obtain an intuitive idea of which conditions favor a big kick strategy and which conditions favor the critical kicks strategy of giving many small kicks of the same particular size. Next, we use some approximations to derive additional quantitative information about {\delta}_{n} that can be used to determine more globally the optimal input strategy. Henceforth, in addition to assuming that {v}_{r}=0, we for convenience set {v}_{th}=1, with 0<I<1<E.

We can estimate the magnitude \delta (g) of the change in *g* that occurs over one spike cycle using the slope s(v,g) given in equation (7),
\delta (g)={\int}_{0}^{1}\frac{\beta g}{I-v+g(E-v)}\phantom{\rule{0.2em}{0ex}}dv\approx \frac{\beta g}{1+g}\phantom{\rule{0.2em}{0ex}}ln\frac{gE+I}{g(E-1)+I-1}, \qquad (10)

where we have approximated *g* by a constant to estimate the integral. Note that for each *n*, the band widths {\delta}_{n}(0)={\delta}_{n+1}(1) from the previous subsection are approximately equal to \delta (g) for certain corresponding choices of *g*; for example, {\delta}_{1}(0)\approx \delta (({g}_{0}^{-}+{g}_{1}^{-})/2), where Γ_{1} intersects \{v=0\} at (0,{g}_{1}^{-}). More generally, it is not necessary to choose a *g* associated with the boundary of a band, as defined in the previous subsection, in order to compute \delta (g).

We can investigate the spiking of a cell by analyzing (10). This approach yields the following result.

**Proposition 2** *If E+I-2EI\ge 0, then \delta (g) is a monotone decreasing function of g. If E+I-2EI<0, then \delta (g) has a unique local minimum at a finite, positive value g={g}_{0}.*

*Proof* Calculating the derivative of \delta (g) with respect to *g*, we have

\frac{d\delta (g)}{dg}=\frac{\beta}{{(1+g)}^{2}}f(g),

where

f(g)=ln\frac{gE+I}{g(E-1)+I-1}-\frac{g(1+g)(E-I)}{(gE+I)(g(E-1)+I-1)}.

Furthermore,

\frac{df(g)}{dg}=\frac{(E-I)(1+g)[g(E+I-2EI)+2I(1-I)]}{{(gE+I)}^{2}{(g(E-1)+I-1)}^{2}}. \qquad (11)
Equation (11) shows that E+I-2EI is indeed a key quantity.

Suppose now that E+I-2EI\ge 0. Then (11) gives df(g)/dg\ge 0, such that f(g) increases as *g* increases. Define

{f}_{1}:={lim}_{g\to \infty}f(g)=ln\frac{E}{E-1}-\frac{E-I}{E(E-1)}.

Since

\frac{d{f}_{1}}{dE}=\frac{E+I-2EI}{{E}^{2}{(E-1)}^{2}}, \qquad (12)

{f}_{1} increases as *E* increases in this case. As

{lim}_{E\to +\infty}{f}_{1}=0,

we have {f}_{1}\le 0. Therefore, f(g)\le {f}_{1}\le 0 for all *g* and hence d\delta (g)/dg<0. In other words, under the original approximation used to obtain (10), \delta (g) decreases, and thus there is less change in *g* across each cycle from reset to threshold, as *g* increases.

Next, suppose that E+I-2EI<0. Under this condition, df/dg changes sign, with df/dg>0 for g\in (\frac{1-I}{E-1},\frac{2I(1-I)}{2EI-E-I}) and df(g)/dg<0 for g\in (\frac{2I(1-I)}{2EI-E-I},+\infty ). From expression (12), we have d{f}_{1}/dE\le 0 and {lim\hspace{0.17em}inf}_{E\to +\infty}{f}_{1}=0, such that {f}_{1}\ge 0 for all *E* and f(g)\ge 0 for g\in (\frac{2I(1-I)}{2EI-E-I},+\infty ). Thus, there exists a unique point {g}_{0} such that f(g)\le 0 for all g\in (\frac{1-I}{E-1},{g}_{0}) and f(g)\ge 0 for all g\in ({g}_{0},+\infty ). Correspondingly, \delta (g) decreases when g\in (\frac{1-I}{E-1},{g}_{0}) and increases when g\in ({g}_{0},+\infty ), where {g}_{0} is the unique zero of f(g), and the proof is complete. □

In the case of E+I-2EI\ge 0, the monotonicity of \delta (g) suggests that for a trajectory evolving from an initial condition of (v,g)=(0,g(0)) to a final condition (v,g)=(1,g(t)), the drop g(0)-g(t) should be smaller for larger g(0). From Figure 2, we can see that, while the approximation used to derive equation (10) introduces an error relative to the actual change in *g* computed from direct simulation of trajectories with E+I-2EI\ge 0, the error appears to be small and the monotonicity of \delta (g) appears to be correct. Similarly, a numerically computed example of \delta (g) for parameters that yield E+I-2EI<0 is shown in Figure 3.
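The two regimes of Proposition 2 can be probed numerically. The closed form used below, \delta (g)=\frac{\beta g}{1+g}ln\frac{gE+I}{g(E-1)+I-1}, is our reconstruction of the constant-*g* approximation (10); the parameter choices give E+I-2EI\ge 0 and E+I-2EI<0, respectively, and are illustrative only.

```python
import math

def delta(g, I, E, beta=1.0):
    """Constant-g approximation of the drop in g over one cycle (v_r=0, v_th=1):
        delta(g) = beta*g/(1+g) * ln((g*E + I)/(g*(E-1) + I - 1)).
    Requires g > (1 - I)/(E - 1) so that a spike is actually fired."""
    return beta * g / (1.0 + g) * math.log((g * E + I) / (g * (E - 1) + I - 1))

def argmin_on_grid(I, E, lo=None, hi=20.0, n=20000):
    """Grid minimizer of delta(g) on (g0+, hi]."""
    lo = lo if lo is not None else (1 - I) / (E - 1) + 1e-3
    gs = [lo + (hi - lo) * k / n for k in range(n + 1)]
    vals = [delta(g, I, E) for g in gs]
    return gs[vals.index(min(vals))]

# E + I - 2EI = 1.25 >= 0: delta should be monotone decreasing, so the
# grid minimum sits at the right end of the grid.
g_star_mono = argmin_on_grid(I=0.25, E=2.0)
# E + I - 2EI = -0.7 < 0: delta should have an interior minimum g_0.
g_star_int = argmin_on_grid(I=0.9, E=2.0)
```

In the second case the interior minimizer plays the role of {g}_{0} in the optimal strategies discussed next.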

### Optimal strategies and spike counts

Based on the previous two subsections, we conclude that if E+I-2EI\ge 0, then the loss of input with each spike, measured by \delta (g), decreases as *g* increases and, furthermore, an input of a fixed size will cross the most spiking bands if it is given at v=I. Hence, of all possible strategies for eliciting spikes with an input of total size *G*, the one that yields the most spikes is what we earlier called the big kick strategy, unless {\gamma}_{c} (or {\tilde{\gamma}}_{c}) is sufficiently small that avoiding the bands altogether by following the critical kicks strategy is optimal. The optimal strategy when E+I-2EI<0 is to provide a kick that puts *g* at approximately {g}_{0}, the *g* value where the minimum of \delta (g) occurs, and then provide as many kicks as possible of size \delta ({g}_{0}), again assuming that {\gamma}_{c},{\tilde{\gamma}}_{c} are above a certain size. We will next perform some additional calculations that can provide estimates of numbers of spikes resulting from any strategy, which can be used with a minimum of calculation to compare the results of particular input sequences.

We first suppose that E+I-2EI<0 and consider a generalization of the optimal strategy described above. That is, we assume that an initial input {G}_{i}\ge {g}_{0} is given and then, once *g* evolves to some neighborhood of {g}_{0}, kicks of size \delta ({g}_{0}) are repeatedly applied until the remaining input falls below \delta ({g}_{0}). We now estimate the numbers of spikes Ω fired for each strategy of this type. Let Ω_{1} denote the number of spikes fired during the initial time period when *g* drops toward {g}_{0}, let Ω_{2} denote the number of spikes fired during the final time period after the available input is depleted, and let Ω_{3} denote the number of spikes fired during the intervening period when repeated kicks of size \delta ({g}_{0}) are given. Clearly,
{\Omega}_{3}=\left\lfloor \frac{G-{G}_{i}}{\delta ({g}_{0})}\right\rfloor \qquad (13)

and \Omega ={\Omega}_{1}+{\Omega}_{2}+{\Omega}_{3}, so it remains to estimate Ω_{1} and Ω_{2}.

Because of the shape of the function \delta (g), the largest \delta (g) during the initial time period will be associated with the first spike fired, while the largest during the final period will be associated with the last spike fired. We can estimate the drop \delta ({G}_{i}) in *g* up to the firing of the first spike from equation (10). To make this estimate relevant to other spikes early in the spike train, we take v(0)=0 rather than v(0)=I. Approximating the level of *g* in equation (10) by {g}_{i}:={G}_{i}-\delta ({G}_{i})/2, we obtain
{\delta}_{i}:=\delta ({g}_{i})=\frac{\beta {g}_{i}}{1+{g}_{i}}\phantom{\rule{0.2em}{0ex}}ln\frac{{g}_{i}E+I}{{g}_{i}(E-1)+I-1}. \qquad (14)

Next, we estimate \delta (g) for the final spike fired, which we call {\delta}_{f}. To do this, we assume that when the final spike is fired, the trajectory reaches the lower bound on the *g* values that can yield a spike, namely the point of intersection of the *v*-nullcline and \{v=1\}, at which g={g}_{0}^{+}=(1-I)/(E-1). We also use the intermediate value of *g* across the trajectory {\Gamma}_{0}(v), namely (1-I)/(E-1)+{\delta}_{f}/2, as the value of *g* for equation (10), which yields
{\delta}_{f}=\delta \left(\frac{1-I}{E-1}+\frac{{\delta}_{f}}{2}\right). \qquad (15)

Now, to obtain an estimated spike count as *g* decays from {G}_{i} to {g}_{0}, we approximate \delta (g) over each spike by the average of its two extreme values, (\delta ({g}_{0})+{\delta}_{i})/2. This approximation yields
{\tilde{\Omega}}_{1}=\frac{2({G}_{i}-{g}_{0})}{\delta ({g}_{0})+{\delta}_{i}}. \qquad (16)

Similarly, once the input is used up, spikes continue to be fired as *g* decays from {g}_{0} to approximately (1-I)/(E-1), and the number of additional spikes that result is estimated by
{\tilde{\Omega}}_{2}=\frac{2({g}_{0}-(1-I)/(E-1))}{\delta ({g}_{0})+{\delta}_{f}}. \qquad (17)

In the above equations, we have taken into account that the trajectory may not be reset precisely at {g}_{0} but rather somewhere within an interval approximated by ({g}_{0}-\delta ({g}_{0})/2,{g}_{0}+\delta ({g}_{0})/2). Because this may lead to an overestimation by one or two spikes, we set {\Omega}_{1}+{\Omega}_{2}={\tilde{\Omega}}_{1}+{\tilde{\Omega}}_{2}-1. The total number of spikes fired is finally estimated by
\Omega ={\tilde{\Omega}}_{1}+{\tilde{\Omega}}_{2}-1+\left\lfloor \frac{G-{G}_{i}}{\delta ({g}_{0})}\right\rfloor . \qquad (18)

The calculation can be easily generalized for input patterns that push *g* above and below {g}_{0} multiple times, although they will be non-optimal by our earlier arguments. Similarly, for an initial kick {G}_{i}<{g}_{0} and g<{g}_{0} for all time, the smallest \delta (g) available is \delta ({G}_{i}) and we can estimate the number of spikes resulting from partitioning the input into kicks of size \delta ({G}_{i}) by the equation
\Omega =\left\lfloor \frac{G-{G}_{i}}{\delta ({G}_{i})}\right\rfloor +\frac{2({G}_{i}-(1-I)/(E-1))}{\delta ({G}_{i})+{\delta}_{f}}. \qquad (19)

If E+I-2EI\ge 0, then the same calculations still apply. The big kick strategy is optimal here, yielding a number of spikes estimated by
\Omega =\frac{2(G-(1-I)/(E-1))}{\delta (G)+{\delta}_{f}}. \qquad (20)

Generalizing, a strategy of giving an initial input {G}_{i}, followed by repeated kicks of size {\delta}_{i} until the input is depleted, yields a number of spikes estimated by
\Omega =\left\lfloor \frac{G-{G}_{i}}{{\delta}_{i}}\right\rfloor +\frac{2({G}_{i}-(1-I)/(E-1))}{{\delta}_{i}+{\delta}_{f}}. \qquad (21)

Figure 4 shows comparisons of our spike estimates from equations (18) and (21) with numerically computed counts of spikes, illustrating that our estimates can be reasonable.

In fact, in the case of E+I-2EI\ge 0, we underestimate the number of spikes fired for large *G* and {G}_{i}. This underestimation results because we average \delta (G) or {\delta}_{i} with {\delta}_{f} in the denominator of equation (20) or (21), whereas most spikes yield decreases in *g* that are much smaller than {\delta}_{f}. Improved estimates in such cases can be obtained by weighting this denominator more toward \delta (G) or {\delta}_{i}, which will decrease the denominator and thus will always yield predictions of additional spikes for larger kicks, relative to the formulas in equations (20), (21). An example resulting from extreme weighting, replacing the average of {\delta}_{i} and {\delta}_{f} with {\delta}_{i} alone, is also shown in Figure 4, as is a similar example for the case of E+I-2EI<0.

In summary, equations of the form (18)-(21), each requiring calculation of only a small number of quantities, can be used on a case by case basis to estimate the numbers of spikes that will result from a given strategy and therefore to compare strategies. These formulas provide for an informed comparison between the two types of big kick strategies determined to be optimal for the two distinct cases of E+I-2EI\ge 0 and E+I-2EI<0, respectively, and the critical kicks strategy. Furthermore, now that we have defined \delta (g), we can give a more precise variation on the calculation of equation (9) made in the subsection on phase plane structures and basic strategies. This calculation shows that truly optimal strategies (other than the critical kick strategy based on {\tilde{\gamma}}_{c}) provide kicks when v=I, so each strategy should include a time shift such that kicks are given when this condition is met, rather than at v=0 or v=1. Specifically, if {g}_{1}>{g}_{2}, then for {v}_{0}\in [0,1],

Calculating the derivative of the above equation with respect to {v}_{0} yields

a quantity that is positive for {v}_{0}<I and negative for {v}_{0}>I. Hence, the additional input needed to cross bands is minimal for kicks given at {v}_{0}=I, in agreement with Proposition 1. Finally, it is not difficult to see from examination of the above spike counts and equation (10) that, with other parameters fixed, increases in *β* yield fewer spikes, as expected from the corresponding faster decay of *g*, while increases in *E* and *I* yield more spikes, as expected from the increased rate of change of *v*.

## 3 Theta model with synaptic kicks

### Model

We next consider a theta model neuron receiving positive synaptic excitatory kicks, governed by the equations
{\theta}^{\prime}=1-cos\theta +(b+g)(1+cos\theta ),\phantom{\rule{2em}{0ex}}{g}^{\prime}=-\beta g, \qquad (22)

where b<0 is a parameter. We consider \theta \in [-\pi ,\pi ]mod2\pi, and the neuron is said to fire when *θ* increases through *π* and is effectively reset to −*π*. With g=0, corresponding to the absence of excitatory inputs, and b<0, which is the case we consider, the theta model (22) has two critical points, namely a stable fixed point at {\theta}_{S}=-arccos\frac{1+b}{1-b}<0 and an unstable fixed point at {\theta}_{U}=arccos\frac{1+b}{1-b}>0. Moreover, we assume that the arrival of, and constraints on, synaptic excitation are identical to those for the LIF model, given by (5), (6), with G>0 fixed. Note that, as in the LIF case, everything that we do in this section would still apply if we chose *b* sufficiently large that the model fired spikes in the absence of input, but we stick with the b<0 case to include the additional effects associated with the requirement of a minimal input for spiking to occur.

### Existence of an optimal *g*

We proceed by approximating the amount by which *g* decreases as a trajectory evolves from (-\pi ,{g}_{i}) to (\pi ,{g}_{f}), analogously to our calculations in Section 2. Fixing *g* at some value between {g}_{i} and {g}_{f}, we have
\delta (g)=\beta g{\int}_{-\pi}^{\pi}\frac{d\theta}{1-cos\theta +(b+g)(1+cos\theta )}=\frac{\beta \pi g}{\sqrt{g+b}}. \qquad (23)
\delta(g) = \frac{\beta \pi g}{\sqrt{g+b}}. \qquad (23)
A straightforward calculation yields the following result.

**Proposition 3** *The function*
\delta (g)
*defined in* (23) *has a unique local minimum at*
g=-2b.

*Proof* We calculate
\frac{d\delta(g)}{dg} = \frac{\beta \pi (g+2b)}{2(g+b)^{3/2}}.
So d\delta (g)/dg<0 when g<-2b, and d\delta (g)/dg>0 when g>-2b. Clearly, \delta (g) has the minimum value 2\beta \pi \sqrt{-b} at g=-2b. □

In Figure 5, we validate the approximation of holding *g* constant in equation (23) by showing that such a minimum exists in direct simulations of the theta model.
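As a quick numerical check (not the paper's code), the constant-*g* approximation gives traversal time \pi /\sqrt{g+b} from -\pi to *π*, so \delta (g)\approx \beta \pi g/\sqrt{g+b}, a form consistent with Proposition 3; a grid search then recovers the minimum location and value. Parameter values are illustrative.

```python
import numpy as np

# Numerical check of Proposition 3.  Under the constant-g approximation, the
# traversal time from -pi to pi is pi/sqrt(g + b), giving the per-spike drop
# delta(g) = beta*pi*g/sqrt(g + b) (a sketch consistent with the proposition;
# parameter values are illustrative).
b, beta = -1.0, 0.5

def delta(g):
    return beta * np.pi * g / np.sqrt(g + b)

gs = np.linspace(-b + 0.01, 10.0, 200001)   # domain requires g + b > 0
vals = delta(gs)
g_star = gs[int(np.argmin(vals))]

print(g_star)        # near -2b = 2
print(vals.min())    # near 2*beta*pi*sqrt(-b) = pi
```

The grid minimum lands at g\approx -2b with value 2\beta \pi \sqrt{-b}, as the proposition predicts.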

### Optimal strategies and spike counts

We will now estimate the number of spikes that will result from a given input allocation strategy. To make effective estimates, it is helpful to estimate the minimal value, call it \hat{g}, such that a trajectory starting from (-\pi ,g) will result in a spike if and only if g>\hat{g}. To do this analytically, we seek \hat{g} such that the trajectory from (-\pi ,\hat{g}) reaches (0,-b) and thus crosses the *θ*-nullcline and converges to {\theta}_{S}; see Figure 6. Although there are other trajectories with initial *g* values above this \hat{g} that also converge to {\theta}_{S}, instead of spiking, by crossing the *θ*-nullcline at points with \theta >0 and g<-b, this approach nonetheless gives a reasonable approximation to the true value of \hat{g}.

We again approximate *g* as a constant over the trajectory of interest, namely \overline{g}=(\hat{g}-b)/2, which is the average of the initial value of *g* and the value of *g* at \theta =0 (Figure 6). This approximation yields

and thus
\hat{g}\, e^{-\beta \pi /(2\sqrt{\overline{g}+b})} = -b,
which provides an implicit estimate for \hat{g}.
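The threshold \hat{g} can also be estimated by bisection on direct simulation. The sketch below assumes the standard theta-model form \theta' = (1-\cos\theta) + (1+\cos\theta)(b+g) with g' = -\beta g, consistent with the fixed points quoted above; parameter values are illustrative.

```python
import numpy as np

# Bisection estimate of g_hat by direct simulation, assuming the standard
# theta-model form theta' = (1 - cos(theta)) + (1 + cos(theta))*(b + g) with
# g' = -beta*g (consistent with the fixed points theta_S, theta_U quoted in
# the text); parameter values are illustrative.
b, beta = -1.0, 0.5

def spikes(g0, dt=2e-3, t_max=50.0):
    """True if the trajectory from (-pi, g0) reaches theta = pi."""
    theta, g = -np.pi, g0
    decay = np.exp(-beta * dt)
    for _ in range(int(t_max / dt)):
        theta += dt * ((1 - np.cos(theta)) + (1 + np.cos(theta)) * (b + g))
        g *= decay
        if theta >= np.pi:
            return True
    return False

lo, hi = -b, 20.0    # g0 = -b cannot spike; g0 = 20 does
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if spikes(mid) else (mid, hi)
g_hat = hi
print(g_hat)
```

The bisection brackets the spiking/non-spiking boundary directly, providing a check on the constant-g analytical estimate.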

Now, if \hat{g}\ge -2b, then all inputs that yield spikes must push the trajectory to *g* values for which d\delta (g)/dg>0 holds. Thus, pushing *g* to progressively larger values yields fewer spikes, and a critical kicks strategy, with initial kick size \hat{g} and subsequent kick sizes \delta (\hat{g}), is optimal, yielding approximately
1 + \frac{G-\hat{g}}{\delta(\hat{g})} \qquad (24)
spikes.

If \hat{g}<-2b, then there is a tradeoff: investing more input initially, up to about size -2b, will push the trajectory to a region where \delta (g) is minimal. On the other hand, if less input is initially invested, then there is more input remaining to give subsequent kicks. We ignore strategies in which an initial input {G}_{i}>\hat{g} is given, some nonzero number of spikes is fired, and then an additional large input spanning multiple spiking bands is given, since these can be seen always to be non-optimal. Suppose first that the initial input has a size {G}_{i}<-2b. Given the shape of \delta (g), the optimal strategy is to expend the remaining input on
\frac{G-{G}_{i}}{\delta ({G}_{i})}
kicks of size \delta ({G}_{i}), analogously to the strategy behind equation (24). Which {G}_{i}<-2b is optimal depends on the sizes of \hat{g}, *b*, and \delta (g).

Alternatively, the other possible optimal strategy, if G>-2b, is to take {G}_{i}\ge -2b and to try to maintain *g* values close to g=-2b. For an initial input {G}_{i}>-2b, we estimate the number of spikes for such a strategy using a similar approach to that used in Section 2.4, breaking up the estimate into an initial period of decay of *g*, a period of kicks to keep *g* near -2b, and a final period of spiking until *g* drops below \hat{g}. Once the initial input is given, \delta (g) for the first spike is approximated by a solution to the equation
{\delta}_{1}=\delta ({G}_{i}-{\delta}_{1}/2).
We will also use an estimate of the \delta (g) for the last spike fired (Figure 6), obtained from the equation
{\delta}_{2}=\delta (\hat{g}+{\delta}_{2}/2).
Now, for \hat{g}<-2b, recall that d\delta (g)/dg<0 when g<-2b and d\delta (g)/dg>0 when g>-2b. Thus, during the period when g>-2b, the largest \delta (g) is given by {\delta}_{1}, and while g<-2b, the largest \delta (g) is {\delta}_{2}. The smallest \delta (g) is of course \delta (-2b). Following our earlier strategy of estimating \delta (g) by the average of its largest and smallest values, and approximating *g* over one spike interval by the initial value minus half of the drop in *g* that occurs during that interval, we obtain our estimated spike counts. Specifically, during the initial period of input decay from {G}_{i} to approximately -2b, our estimate is
{\Omega}_{1}=\frac{2({G}_{i}+2b)}{{\delta}_{1}+\delta (-2b)}.
During the final period of input decay from approximately -2b to approximately \hat{g}, our estimate is
{\Omega}_{2}=\frac{2(-2b-\hat{g})}{{\delta}_{2}+\delta (-2b)}.
Finally, the number of repeated spikes from the critical kicks of size \delta (-2b) given until the remaining available input G-{G}_{i} is depleted is about
\frac{G-{G}_{i}}{\delta (-2b)}.
Given that Ω_{1} and Ω_{2} could each overestimate the number of spikes by one, we estimate the total number of spikes with input *G* as
{\Omega}_{1}+{\Omega}_{2}+\frac{G-{G}_{i}}{\delta (-2b)}-2.
A similar estimate can be obtained from other patterns of inputs.

In summary, we have two candidate optimal strategies when \hat{g}<-2b. Depending on the relative sizes of \hat{g} and -2b and the shape of \delta (g), it may be optimal to choose initial input {G}_{i}<-2b and give repeated kicks of size \delta ({G}_{i}) or it may be optimal to choose {G}_{i}\approx -2b+\delta (-2b)/2 and provide repeated kicks of size \delta (-2b), in both cases repeating the kicks until the input is depleted. Figure 7 shows examples illustrating that the optimal {G}_{i} is indeed very close to -2b.

We can extend this analysis one step further by determining the best value of *θ* at which to deliver the input kicks.

**Proposition 4** *Given an initial condition*
(-\pi ,g)
*and an input of a fixed size*, *the maximal number of spikes subsequent to input delivery is attained when these inputs are given with*
\theta =arccos\frac{1+b}{1-b}.

*Proof* Suppose {g}_{1}>{g}_{2}. We have

Calculating the derivative of the above equation with respect to {\theta}_{0}, we have

As d(\delta ({g}_{1},{\theta}_{0})-\delta ({g}_{2},{\theta}_{0}))/d{\theta}_{0}>0 for {\theta}_{0}\in [-\pi ,-arccos\frac{1+b}{1-b})\cup (arccos\frac{1+b}{1-b},\pi ] and d(\delta ({g}_{1},{\theta}_{0})-\delta ({g}_{2},{\theta}_{0}))/d{\theta}_{0}<0 for {\theta}_{0}\in (-arccos\frac{1+b}{1-b},arccos\frac{1+b}{1-b}), the difference \delta ({g}_{1},{\theta}_{0})-\delta ({g}_{2},{\theta}_{0}) will be maximal at {\theta}_{0}=-arccos\frac{1+b}{1-b} and minimal at {\theta}_{0}=arccos\frac{1+b}{1-b}. Hence, a maximal number of spikes is attained from an input given with \theta =arccos\frac{1+b}{1-b}. □

## 4 Theta model with continuous input

In the analysis we have done so far, the input to the neuron arrives as a series of discrete kicks. An excitatory postsynaptic potential evoked by an individual input may have a more gradual rise, however. We now switch gears and consider how such an individual input, arriving with a continuous time course, can be optimally tailored to evoke a maximal response. This analysis is also relevant to a situation in which a neuron receives input from a very large presynaptic population that fires in near synchrony, but with some spread in recruitment times.

The theta model with a continuous input can be described by the equations
\frac{d\theta}{dt} = (1-\cos\theta) + (1+\cos\theta)(b+\gamma(t)), \qquad (27)
with
\gamma(t) = A\beta^{2} t e^{-\beta t}, \qquad (28)
where b<0 and A,\beta >0 are parameters. The form of \gamma (t) in equation (28) is often used computationally and has been specifically selected to ensure that its integral over the positive time axis is fixed at *A* for all values of *β*. Unlike the case where the *θ* model received synaptic kicks, we now consider \theta \in \mathbf{R}, with a spike fired whenever *θ* increases through *nπ* for an integer *n*.
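This normalization is easy to verify numerically; the short sketch below (illustrative value of *A*) integrates \gamma (t)=A{\beta}^{2}t{e}^{-\beta t} over the positive time axis for several *β*.

```python
import numpy as np

# Check that gamma(t) = A*beta^2*t*exp(-beta*t) integrates to A for every
# beta (the normalization described in the text); A is illustrative.
A = 4.0

def gamma(t, beta):
    return A * beta**2 * t * np.exp(-beta * t)

totals = {}
for beta in (0.5, 1.0, 3.0):
    t = np.linspace(0.0, 60.0 / beta, 200001)   # tail beyond 60/beta is negligible
    y = gamma(t, beta)
    dt = t[1] - t[0]
    totals[beta] = dt * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoid rule
    print(beta, totals[beta])   # approximately A = 4.0 in each case
```

Varying *β* reshapes the input (faster rise and decay, larger peak A\beta /e) while its total magnitude stays fixed at *A*.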

The question that we now address is, given a fixed set of intrinsic parameters *b* and *A* and fixed P>0, how should *β* be selected to yield the maximum number of spikes within the time interval [0,P]? Note that when \theta =n\pi, {\theta}^{\prime}>0, so it suffices to find *β* to maximize \theta (P).

### Boundary value problem

We will find optimal values of *β* by numerically solving a boundary value problem. For numerical purposes, it is convenient to map t\in [0,P] to s\in [0,1] using s=t/P. Correspondingly, let \dot{} denote d/ds, such that equation (27) becomes
\dot{\theta} = P[(1-\cos\theta) + (1+\cos\theta)(b+\gamma(sP))]. \qquad (29)
Next, differentiate \dot{\theta} with respect to *β* to obtain
\dot{\theta}_{\beta} = P[\sin\theta\,(1-b-\gamma(sP))\,\theta_{\beta} + (1+\cos\theta)\,\gamma_{\beta}(s)], \qquad (30)
where {\gamma}_{\beta}(s)=\frac{\partial}{\partial \beta}\gamma (s) is given by
\gamma_{\beta}(s) = A\beta s P (2-\beta s P) e^{-\beta s P}. \qquad (31)
To the *θ* and {\theta}_{\beta} equations, we append the additional equations
\dot{\beta} = 0, \qquad (32) \qquad\qquad \dot{t} = P. \qquad (33)
Given the system of four equations (29), (30), (32), (33), we need a set of four boundary conditions. First, we set \theta (0)={\theta}_{S}=-arccos((1+b)/(1-b)), so that the model neuron is at a stable rest state when input starts to arrive. Since this specification is independent of *β*, we have {\theta}_{\beta}(0)=0. To find an extreme (with respect to *β*) value of \theta (P), we set {\theta}_{\beta}(P)=0 as well. Finally, we take t(0)=0. We solve this boundary value problem (BVP) numerically, using XPPAUT [15], to obtain the optimum *β* for any fixed *P*, *A*, and *b*.

### Results

If *A* and *P* are fixed, then varying *β* yields differently shaped input functions \gamma (t), as illustrated in Figure 8. For fixed *A*, if *β* is sufficiently large, then the input is sufficiently concentrated in time that *P* becomes irrelevant for the number of spikes fired. Alternatively, for smaller *β*, input arrives more gradually and *P* becomes relevant. These relationships are evident in Figure 9A, which shows a curve of solutions to the optimization BVP described in the previous subsection, plotted in *β* *versus* *P* space. Direct numerical simulations show that, in fact, the same \theta (P) arises for other large *β* as for the *β* value along the upper branch of the curve in Figure 9A, as long as *P* is sufficiently large that this curve is flat (Figure 10). Along the rest of the curve, the *β* values shown truly represent local extrema. In particular, the lower branch of the curve represents local maxima for \theta (P). For *P* values such that the upper branch is flat, the lower branch appears to consist of global maxima (Figure 10), while the upper branch represents local minima for sufficiently small *P* that this branch has curvature (for example, *P* near 2), and for these *P*, there may be additional maxima for \theta (P) at large *β*, not shown in Figure 9A.
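A direct-integration scan over *β* offers an inexpensive cross-check on the BVP solutions. The sketch below integrates the assumed continuous-input theta model forward in time and records \theta (P) on a *β* grid; the values of *b*, *A*, and *P* are illustrative, not those used in the figures.

```python
import numpy as np

# Direct-integration cross-check of the beta dependence of theta(P):
# integrate theta' = (1 - cos(theta)) + (1 + cos(theta))*(b + gamma(t)) with
# gamma(t) = A*beta^2*t*exp(-beta*t) from theta(0) = theta_S and scan beta.
# The model form and the values of b, A, P are assumptions for illustration.
b, A, P = -0.2, 4.0, 10.0
theta_S = -np.arccos((1 + b) / (1 - b))

def theta_at_P(beta, dt=1e-3):
    theta, t = theta_S, 0.0
    for _ in range(int(P / dt)):
        gam = A * beta**2 * t * np.exp(-beta * t)
        theta += dt * ((1 - np.cos(theta)) + (1 + np.cos(theta)) * (b + gam))
        t += dt
    return theta

betas = np.linspace(0.2, 5.0, 25)
finals = [theta_at_P(be) for be in betas]
beta_best = betas[int(np.argmax(finals))]
print(beta_best, max(finals))
```

A finer grid or a root-finder around `beta_best` would refine the extremum; the BVP approach in the text locates it directly.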

For other values of *A*, the situation is similar to Figure 9A; however, some subtleties do emerge. As shown in Figure 9B, additional local extrema of \theta (P) may arise at small *β*, yielding an interval of *P* values with three such extrema at small *β*, bracketed by two fold bifurcations. For example, for A=8 and P=10, there are a local maximum near \beta =0.31, a local minimum near \beta =0.57, and another local maximum near \beta =0.72, with corresponding \theta (t) shown in Figure 11. Note that, as with A=7 as well as other values of *A*, \theta (P) saturates for sufficiently large *β*. The folding structure in Figure 9B arises only for a small interval of *A* values that is difficult to pin down precisely. It is also illustrative to view what happens to the families of BVP solutions as *A* is varied with *P* fixed. Of course, the multiple extrema show up here as well, as evidenced in Figure 12, along with some additional nontrivial dependence of the optimal *β* on *A*. By comparing these bifurcation curves for different *P*, we see that the interval of multiple extrema moves to larger *A* as *P* increases (and *vice versa*). Furthermore, as *A* increases for fixed *P*, \theta (P) at optimal *β* can undergo abrupt increases, associated with the firing of an additional spike (Figure 13); the interesting structure of the BVP solution curves appears to be related to these events.

## 5 LIF model with continuous input

With the continuous input \gamma (t) given in equation (28), we used variational methods to find *β* values that yielded extremal values of \theta (P) for the *θ* model (27), given a final time P>0. Such methods are not available for the LIF model due to the discontinuity imposed by its reset condition. Nonetheless, we can perform some direct analysis of the dependence of firing in the LIF model on the parameter *β*. In particular, although we will not specify an optimal *β*, we will analytically establish some results about how spike times depend on *β*, including the fact that the number of spikes saturates as *β* grows, which is consistent with [13] and contrasts with the non-monotone dependence of \theta (P) on *β* seen in the previous section. Similar methods can be applied to attain analytical results for the theta model, and these are briefly discussed in the Appendix; in particular, these show that spiking is lost when *β* becomes sufficiently large.

To perform this analysis for the LIF model, we make a fairly strong approximation. If a spike is fired at time {T}_{a} and the next spike occurs at time {T}_{b}, then on the time interval ({T}_{a},{T}_{b}], we approximate \gamma (t) by the time average
\overline{\gamma} = \frac{1}{T_{b}-T_{a}} \int_{T_{a}}^{T_{b}} \gamma(t)\,dt, \qquad (34)
with
\frac{dv}{dt} = I - v + \overline{\gamma}(E-v). \qquad (35)
Note that by definition, v({T}_{a}^{+})=0 and v({T}_{b})=1, such that
T_{b}-T_{a} = \frac{1}{1+\overline{\gamma}} \ln\left(\frac{I+\overline{\gamma}E}{I+\overline{\gamma}E-1-\overline{\gamma}}\right). \qquad (36)
If we assume that v(0)=0 and fix a positive integer *n*, then we can try to solve equations (34) and (36) with {T}_{a}=0 for a pair of positive numbers that we label ({T}_{1},{\overline{\gamma}}_{1}), such that the solution of equation (35) satisfies v({T}_{1})=1. Similarly, we can set {T}_{a}={T}_{1} and solve for ({T}_{2}>{T}_{1},{\overline{\gamma}}_{2}), and inductively, given ({T}_{j},{\overline{\gamma}}_{j}), we can solve for ({T}_{j+1},{\overline{\gamma}}_{j+1}) until we find {T}_{n+1}>P for some *n*. At that point, we would propose that {T}_{1},\dots ,{T}_{n} are our approximate spike times in the interval [0,P]. In practice, to constrain the solution set of this system of equations, we assume that {T}_{n}=P, since numerical explorations show that, for fixed *A* and *P*, if *β* is tuned to maximize the number of spikes generated on t\in [0,P] by the LIF model with input given by equation (28), the final spike time is indeed generally close to *P*. Thus, for fixed *A*, *β*, *P*, instead of solving iteratively, we attain candidate approximate spike times by simultaneously solving *n* copies of equation (34) and *n* copies of equation (36) for unknowns \{{T}_{1},\dots ,{T}_{n-1},{\overline{\gamma}}_{1},\dots ,{\overline{\gamma}}_{n}\} with a final spike time {T}_{n}=P.
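An iterative variant of this scheme (solving for one spike time at a time, rather than the simultaneous solve with {T}_{n}=P described above) can be sketched as follows, assuming the LIF form v'=I-v+\overline{\gamma}(E-v) with reset from 1 to 0, which is consistent with the spiking condition in Proposition 5; parameter values are illustrative.

```python
import numpy as np

# Iterative variant of the averaged-input scheme: between spikes, gamma(t) is
# replaced by its time average gamma_bar (equation (34)), and the voltage is
# assumed to follow v' = I - v + gamma_bar*(E - v) from v = 0 up to v = 1
# (a form consistent with the spiking condition in Proposition 5).  This is a
# sketch, not the paper's simultaneous solve; parameters are illustrative.
A, E, I = 2.0, 2.0, 0.5
beta, P = 5.0, 10.0

def area(ta, tb):
    """Integral of A*beta^2*t*exp(-beta*t) over [ta, tb] (closed form)."""
    F = lambda t: -A * (1 + beta * t) * np.exp(-beta * t)
    return F(tb) - F(ta)

def v_end(ta, tb):
    """Voltage at tb, starting from v = 0 at ta, under the averaged input."""
    gbar = area(ta, tb) / (tb - ta)
    vinf = (I + gbar * E) / (1 + gbar)
    return vinf * (1 - np.exp(-(1 + gbar) * (tb - ta)))

def next_spike(ta, dt=1e-4, horizon=10.0):
    """Smallest tb > ta with v_end(ta, tb) = 1, or None if none exists."""
    tb = ta + dt
    while tb < ta + horizon:
        if v_end(ta, tb) >= 1.0:
            lo, hi = tb - dt, tb
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if v_end(ta, mid) < 1.0 else (lo, mid)
            return 0.5 * (lo + hi)
        tb += dt
    return None

spike_times, t = [], 0.0
while True:
    t_next = next_spike(t)
    if t_next is None or t_next > P:
        break
    spike_times.append(t_next)
    t = t_next
print(spike_times)
```

Each accepted interval consumes a fixed slice of input area, so for these parameters the approximate spike train terminates after a small number of spikes, consistent with the saturation result of Proposition 6.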

We next study constraints on these approximate spike times \{{T}_{1},\dots ,{T}_{n}\}. Specifically, we have the following result.

**Proposition 5** *For fixed* *A*, *P*, *there exists*
{\beta}_{1}>0
*such that if*
\beta <{\beta}_{1}, *then there are no spikes* (*as defined above*). *If*
\beta >{\beta}_{1}, *then there exist*
0<\underline{t}(\beta )<\overline{t}(\beta )
*such that all spikes lie in*
[\underline{t}(\beta ),\overline{t}(\beta )]. *The function*
\underline{t}(\beta )
*is monotone decreasing in* *β* *and is bounded above by*
1/\beta. *There also exists*
{\beta}_{2}>{\beta}_{1}
*such that the function*
\overline{t}(\beta )
*is monotone increasing for*
\beta \in ({\beta}_{1},{\beta}_{2}), *achieves its maximum at*
\beta ={\beta}_{2}, *and is monotone decreasing with*
\beta \overline{t}(\beta )\ge 2
*for*
\beta >{\beta}_{2}. *The value of*
{\beta}_{2}
*is given by the minimal positive solution of*
\beta \overline{t}(\beta )=2.

*Proof* First, note that spikes can only occur if A{\beta}^{2}t{e}^{-\beta t}\ge \frac{1-I}{E-1}. Clearly, the equation
A\beta^{2} t e^{-\beta t} = \frac{1-I}{E-1} \qquad (37)
has ({\beta}_{1},{t}_{1})=(\frac{e(1-I)}{A(E-1)},\frac{A(E-1)}{e(1-I)}), on the line \beta t=1, as one solution. A brief calculation verifies that equation (37) has no solutions for \beta <{\beta}_{1}, while for \beta >{\beta}_{1}, there are two solutions, (\beta ,\underline{t}(\beta )) in \{\beta t<1\} and (\beta ,\overline{t}(\beta )) in \{\beta t>1\}. For fixed *β*, all spikes must lie in the time interval [\underline{t}(\beta ),\overline{t}(\beta )]; note that, in particular, {T}_{1} is bounded below by \underline{t}(\beta ), while our assumption that {T}_{n}\approx P would limit us to parameter choices such that \overline{t}(\beta )\ge P, although this inequality need not be satisfied for this proposition to hold.

Now, consider a solution t(\beta ) of equation (37). Differentiation of (37) yields
\frac{dt}{d\beta} = \frac{t(\beta t-2)}{\beta(1-\beta t)}, \qquad (38)
while equation (38) itself yields
\frac{d(\beta t)}{d\beta} = \frac{t}{\beta t-1}. \qquad (39)
Equations (38), (39) imply that \underline{t}(\beta ), \beta \underline{t}(\beta ) are monotone decreasing, since
\beta \underline{t}(\beta) < 1 \qquad (40)
for all *β* for which it is defined. Initially, equation (38) also implies that \overline{t}(\beta ) is increasing, since the curve (\beta ,\overline{t}(\beta )) lies in 1<\beta t<2 for *β* near {\beta}_{1}. However, \beta \overline{t}(\beta ) also increases, by equation (39), until \overline{t} achieves a maximum at {\beta}_{2} such that {\beta}_{2}\overline{t}({\beta}_{2})=2. For \beta >{\beta}_{2}, \overline{t}(\beta ) decreases, by equation (38), but \beta \overline{t}(\beta )\ge 2 for all \beta >{\beta}_{2}, since the curve \beta t=2 has a negative slope in the (\beta ,t) plane and d\overline{t}/d\beta =0 at \beta t=2. □

An example of the solution curves to equation (37), illustrating the results of Proposition 5, appears in Figure 14. The curves of possible spike times {T}_{i}(\beta ) lie between \underline{t}(\beta ) and \overline{t}(\beta ) described above. Of course, if *P* is small, the spiking that can occur in time [0,P] will be limited. With large *P*, so that the full collection of predicted spike times is realized, direct simulations suggest that the number of spikes increases with *β* (for example, Figure 15). While we will not prove that result, we can establish some properties of spike times in the model in the limit of large *β*:

**Proposition 6** *As*
\beta \to \infty,
\underline{t}(\beta) \to 0, \qquad \overline{t}(\beta) \to 0, \qquad n \to \frac{A}{\ln(E/(E-1))},
*where* *n* *is the total number of spikes fired*.

*Proof* The first limit is an immediate consequence of the bound (40). The second limit follows from equation (37). Specifically, some algebra yields

with the left hand side clearly converging to 0 as \beta \to \infty. Moreover, the inequality

yields

so \beta \overline{t}\to \infty as \beta \to \infty.

It remains to establish the fact that the number of spikes converges as *β* becomes sufficiently large, as stated in the Proposition and seen in Figure 15. First, note that the difference between successive spike times goes to 0 as \beta \to \infty, since \overline{t}\to 0 in this limit. Thus, the left hand side of equation (36) tends to 0 as \beta \to \infty. Maintenance of the equality in equation (36) therefore requires that
\lim_{\beta \to \infty} \overline{\gamma}_{j}(T_{j}-T_{j-1}) = \ln\left(\frac{E}{E-1}\right), \qquad (41)
which is consistent with the fact that the maximum of \gamma (t) blows up with *β*.

Combining equations (34) and (36) yields the equality
\int_{T_{j-1}}^{T_{j}} \gamma(t)\,dt = \overline{\gamma}_{j}(T_{j}-T_{j-1}),
and equation (41) implies that the limit of the right hand side equals ln(E/(E-1)). Hence, we define a constant {C}_{1} satisfying
A\left(1-(1+C_{1})e^{-C_{1}}\right) = \ln\left(\frac{E}{E-1}\right). \qquad (42)
A solution of equations (34), (36), with {T}_{0}=0, is {T}_{1}={C}_{1}/\beta, {\overline{\gamma}}_{1}=(A-A{C}_{1}{e}^{-{C}_{1}}-A{e}^{-{C}_{1}})\beta /{C}_{1}. Next, define a constant {C}_{2} satisfying
A\left((1+C_{1})e^{-C_{1}}-(1+C_{1}+C_{2})e^{-(C_{1}+C_{2})}\right) = \ln\left(\frac{E}{E-1}\right). \qquad (43)
Addition of equations (42), (43), after rearrangement, yields
A\left(1-(1+C_{1}+C_{2})e^{-(C_{1}+C_{2})}\right) = 2\ln\left(\frac{E}{E-1}\right), \qquad (44)
and {T}_{1}={C}_{1}/\beta, {T}_{2}=({C}_{1}+{C}_{2})/\beta,
\overline{\gamma}_{2} = A\left((1+C_{1})e^{-C_{1}}-(1+C_{1}+C_{2})e^{-(C_{1}+C_{2})}\right)\beta /C_{2}
is another solution of equations (34), (36).

Repeating this process, we can find a series of solutions to these equations (see Figure 16), such that for the *j* th spike, {T}_{j-1}=({\sum}_{i=1}^{j-1}{C}_{i})/\beta, {T}_{j}=({\sum}_{i=1}^{j}{C}_{i})/\beta,
\overline{\gamma}_{j} = A\left(\left(1+\sum_{i=1}^{j-1}C_{i}\right)e^{-\sum_{i=1}^{j-1}C_{i}}-\left(1+\sum_{i=1}^{j}C_{i}\right)e^{-\sum_{i=1}^{j}C_{i}}\right)\beta /C_{j},
and
A\left(1-\left(1+\sum_{i=1}^{j}C_{i}\right)e^{-\sum_{i=1}^{j}C_{i}}\right) = j\,\ln\left(\frac{E}{E-1}\right).
We denote the total number of spikes by n(\beta ) and estimate the final spike time {T}_{n(\beta )} for any fixed *β* by \overline{t}(\beta ), which gives {\sum}_{i=1}^{n(\beta )}{C}_{i}=\beta \overline{t}(\beta )\to \infty as \beta \to \infty. Thus, from equation (44), it follows that {lim}_{\beta \to \infty}A-n(\beta )ln(E/(E-1))=0, which yields
n(\beta) \to \frac{A}{\ln(E/(E-1))}
as desired (Figure 15). □
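The limiting spike count can be sketched numerically. Assuming, as in the limit (41)-(44), that the *j*th interspike interval consumes input area ln(E/(E-1)), the partial sums {S}_{j}={C}_{1}+\cdots +{C}_{j} solve A(1-(1+{S}_{j}){e}^{-{S}_{j}})=j\,ln(E/(E-1)), which is solvable only while the right-hand side stays below *A*; parameter values are illustrative.

```python
import numpy as np

# Sketch of the saturated spike count.  Assuming, as in the limit (41)-(44),
# that the j-th interspike interval consumes input area ln(E/(E-1)), the
# partial sums S_j = C_1 + ... + C_j solve
#   A*(1 - (1 + S_j)*exp(-S_j)) = j*ln(E/(E-1)),
# which is solvable only while the right-hand side stays below A.
A, E = 2.0, 2.0
L = np.log(E / (E - 1))

def solve_S(target, lo=0.0, hi=200.0, iters=100):
    """Bisection for A*(1 - (1 + S)*exp(-S)) = target (increasing in S)."""
    f = lambda S: A * (1 - (1 + S) * np.exp(-S)) - target
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

S, n = [], 0
while (n + 1) * L < A:
    n += 1
    S.append(solve_S(n * L))

print(n, int(A / L))   # saturated count: n = floor(A/ln(E/(E-1))) = 2 here
print(S)               # increasing partial sums, approximating beta*T_j
```

The loop terminates exactly when another interval's worth of input area is unavailable, reproducing the saturation bound n\approx A/ln(E/(E-1)) from Proposition 6.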

## 6 Discussion

### Summary and modeling issues

We have considered how certain constrained, positive inputs should be timed to yield maximal numbers of spikes in the LIF and theta models for neurons. In both models, we have considered parameter regimes in which inputs must be above a threshold to elicit a spike. Thus, when each model is subjected to a train of discrete inputs of a fixed total magnitude, it is possible that maximal firing is attained by a critical kick strategy of giving just enough input to push the model trajectory above this threshold and then giving the minimal kick after each spike that achieves threshold clearance again. Aside from this possibility, we have analytically identified which other strategies could possibly be optimal for each model. Defining \delta (g) as the magnitude of the change in input from the level *g* at the firing of one spike to the level g-\delta (g) at the firing of the next spike, we found for the LIF model that our analytical approximation to \delta (g) could be monotone decreasing in *g* or not, depending on the sign of E+I-2EI. In each case, we present an optimal strategy. If E+I-2EI\ge 0, the optimum is a big kick strategy in which all available input is provided immediately. If E+I-2EI<0, if the minimum of \delta (g) occurs at g={g}_{0}, and if the amount of input available exceeds {g}_{0}, then the optimum consists of an initial kick of size \approx {g}_{0}, followed by kicks of size \approx \delta ({g}_{0}) after each spike, until the input is depleted. This dichotomy of possible outcomes may be unexpected in light of standard intuition about the LIF model, which emphasizes the power of big, synchronized input kicks to induce firing. Furthermore, the definition and analysis of \delta (g) is itself novel, and its non-monotonicity in *g* means that there is a different effective dissipation of inputs at different input strengths, defined relative to the time it takes to progress from repolarization after a spike to firing of the next spike. 
This idea, that the power of a synaptic input depends on more than the individual input’s amplitude, decay rate, and arrival phase, is often neglected in neuroscience studies and is an important observation of our work.

Unlike the LIF model, our approximation to \delta (g) always has a unique local minimum for the theta model, and we can compute its location directly. Assuming that this minimum corresponds to an initial condition that really does result in a spike, there is a clear optimal strategy of providing critical kicks to keep *g* near the minimum of \delta (g), as in the LIF case with E+I-2EI<0. The existence of this minimum likely is related to the presence of an internal peak in the phase response curve (PRC) for the theta model, which past authors have suggested would represent an optimal phase for input timing [16], although PRC theory applies to oscillators receiving weak inputs, while we consider strong inputs in the excitable regime. Heuristically, this minimum may arise because of the shape of the *θ*-nullcline in the (\theta ,g) plane. If *g* is on the small end of the spiking range, then the progress of *θ* towards spike threshold is slowed by the proximity of the trajectory to the *θ*-nullcline, allowing *g* to drop significantly from one spike to the next. If this idea is correct, then we expect non-monotone \delta (g) to occur for models where voltage can be significantly slowed between spikes, without preventing the firing of the next spike, when *g* is on the lower end of the spiking range. In summary, for both the LIF and theta models with discrete inputs, the maximal number of spikes is attained for one of two strategies, either a critical kick strategy based on a minimal input threshold or a second big kick or critical kick strategy that we have identified from analysis of \delta (g). The latter applies equally well in the case when the models are intrinsically oscillatory in the absence of inputs.

We also show how to estimate analytically the number of spikes resulting from any input time course in both models, obtaining results that compare well with direct numerical simulations. Further, we establish an optimum value for the dependent variable of each model, *v* and *θ* respectively, where each input should be delivered in a critical kick strategy to achieve maximal subsequent spike output. In both cases, this value is a stable critical point for the intrinsic dynamics of the model.

When the input to each model is continuous, the optimization problem we consider is how to adjust the shape of the input, using a parameter *β* that does not affect the input’s overall magnitude, to achieve maximal spiking in a fixed time interval [0,P]. Derivation and solution of a BVP yields values of *β* eliciting extremal numbers of spikes for the theta model, and numerical simulations show whether these extrema are maxima or minima. We find that the number of spikes fired increases and then decreases again as *β* increases, corresponding to faster rise and decay of the input function and a larger maximal input. The existence of an interior local optimum for *β* is consistent with the non-monotonicity of \delta (g) for the theta model in the discrete input case, with both observations pointing out that delivery of input in a stronger, faster way may reduce the resulting number of spikes for the theta model. Eventually, the number of spikes saturates, remaining invariant under additional increases in *β*.

The LIF model with a continuous input shares this saturation property, as we prove in Proposition 6. For the LIF model, however, unlike the theta model, the number of spikes increases monotonically with *β*, and hence with the degree to which the input is concentrated in time, based on numerical simulations. This finding is consistent with previous analysis [12, 13] showing that synchronous input to the LIF model yields more spiking than inputs that are spread out. Indeed, in light of this set of results, it is interesting that we do not always find this behavior for the LIF model with discrete input kicks, in the case where E+I-2EI<0. This disparity in results for the LIF model points out that details of how synaptic inputs are modeled can influence model dynamics in significant ways. The specific differences between optimal input patterns for the LIF and theta models that we highlight represent novel findings about the relationship between these models, while other differences have been demonstrated in earlier work [12]. Both of these models have Type I behavior [10, 17], including the ability to fire spikes at arbitrarily low frequencies, yet clearly there are differences in their dynamic properties, including their responses to inputs. Such subtleties point out that classifications of models and neurons into gross categories, such as integrators and resonators, often need additional refinements to capture the diversity of neuron dynamics.

Analytically, we found an interval of times during which all spikes must occur for the LIF model with continuous inputs, and we characterized the interesting dependence of the endpoints of this interval on *β*. The results of applying similar methods to the theta model are also discussed in the Appendix. As *β* increases, all spike times converge to 0, yet the input becomes strong enough to elicit increasingly more spikes (up to some level). We were not able to exploit our approach to prove that the number of spikes increases monotonically with *β*, however. More specifically, we identified a minimal value {\beta}_{1} such that \beta >{\beta}_{1} is necessary for spikes to occur. One idea for proving monotonicity would be to seek a sequence of values {\beta}_{1}<{\beta}_{2}<{\beta}_{3}<\cdots such that \beta >{\beta}_{n} is required for the firing of *n* spikes (note that this {\beta}_{2} differs from the {\beta}_{2} used in Section 5). However, we were able to derive an equation for {\beta}_{1} because we have an analytical formula for a minimal level of input needed for at least one spike to occur, and we do not have such an expression for subsequent spikes. Nonetheless, it is possible that the band structure established in Section 2 could be exploited to abstractly establish results in this direction that might lead to a proof of monotonicity.

The overall utility of the techniques presented in this paper depends in part on their generalizability. Note that the models we consider have relatively few parameters, and our results do not depend on particular parameter choices as long as they render the neuron excitable. For our numerical examples, we have chosen various parameter combinations to try to illustrate different parameter regimes. In some discrete input cases, we did choose *β* values that appear to be too small to represent fast AMPA-mediated synaptic excitation, because certain differences in outcomes across strategies are most clearly evident with small *β*. It is important to keep in mind, however, that our models are dimensionless, that there are also slower NMDA-mediated excitatory synaptic currents, and that what we have represented as a slow synaptic decay could also arise from a long membrane time constant in a postsynaptic neuron or from the gradual arrival of many inputs from a presynaptic neuron population with a slowly down-ramping firing rate. Aside from parameter variations, our methods for estimating numbers of spikes are also quite general across different discrete input patterns, for the models we consider. For these models, our methods could likely be adapted to other optimization problems, such as tuning inputs to achieve regularity of interspike intervals, spiking within some range of rates, or spiking at particular times [7] or finding minimal inputs to generate particular spike patterns [8]. In each of these problems, we would obtain predictions about which features of the timing and size of inputs within a discrete input train yield which spiking patterns.

We have assumed that the neurons receiving inputs are silent in the absence of inputs. As noted above, our techniques for the discrete input case would generalize to models with nonzero intrinsic firing rates, and in particular the band structure partitioning phase space into initial conditions corresponding to different numbers of spikes fired before inputs are depleted would carry over analogously. Indeed, the existence of a minimal level of input needed for spiking becomes irrelevant during time periods in our analysis when the input is well above that level, such as during the application of critical kicks based on a local minimum of \delta (g). The presence of intrinsic oscillations complicates the case of continuous inputs, because the impact of an input will depend on the phase at which it starts, as in recent phase resetting analysis for bursting models [18, 19]. Furthermore, our results about constraints on input times during which spiking can occur in the LIF model with continuous inputs clearly depend on the absence of intrinsic firing.

Unlike the models that we have considered, many other neuronal models have two- or higher-dimensional intrinsic dynamics. Reductions into fast and slow subsystems (for example, [20, 21]) or other reductions [22] offer the possibility of extracting subsystems from these models on which a similar analysis to ours, including the partitioning of an appropriate phase space into bands associated with certain spike numbers in the discrete case, could be performed. It is also possible that results could be obtained after reduction to a firing time map (for example, [23]). Moreover, analytical techniques can be used to reduce general oscillator models of arbitrary dimension, subject to weak inputs of a prescribed time course, down to forced scalar phase equations [24]. Optimization techniques could be applied to tailor the forcing terms in such equations, within certain constraints, but of course this option is only available if the intrinsic neuronal dynamics is oscillatory and the inputs are weak. Direct simulations can also be done to begin to examine how \delta (g) varies with *g* for particular higher-dimensional models. Some preliminary simulations yielded a monotone \delta (g) for a few conductance-based models (for example, [25–27]) in certain parameter regimes, as well as a non-monotone \delta (g) for a particular scaling of a reduced Hodgkin-Huxley model ([27], Type B^{−}), but a thorough exploration of such models remains for future work.

### Neuronal relevance and related issues

Given that we can identify an optimal input structure, it is important to ask whether the similarity of actual input streams to the proposed optimum can be checked experimentally and whether such an optimal input could actually be realized by a network of neurons. As for the former question, membrane potential dynamics can be recorded, including identification of changes in potential associated with synaptic inputs, and associated synaptic input features can be estimated [28], so it appears that the relevant experimental techniques are indeed available.

As for the latter, at least certain features of the optimal inputs we have identified do appear to be achievable. In the discrete input case, consider first a pool of presynaptic neurons providing inputs with the same decay rates, with some distribution of input amplitudes. This pool could be large or small, depending on the brain area involved. The overall input to a postsynaptic cell could be tailored by adjusting the relative firing times of these presynaptic cells. For example, a critical-kicks strategy not based on a minimal input threshold could be achieved if a large initial input were given, followed by a sequence of appropriately timed smaller inputs. This input pattern could be realized either through a synchronized input from a group of neurons followed by later inputs from a subset of these neurons or from a distinct group of neurons, perhaps made active by the initial large input as well. Short-term synaptic depression could play a role in this tapering of inputs within the input train. The presence of short-term plasticity would alter the set of input patterns that could be provided; however, for a fixed form of postsynaptic dynamics, only certain input patterns would be near-optimal, and if short-term plasticity promoted these, then the loss of the capability to produce non-optimal patterns would be irrelevant. Similarly, neuromodulators can also modulate synaptic dynamics, in a way that differs across cell types [29–31]. Although we have neglected short-term plasticity and neuromodulation here, considering their effects on optimal input strategies could be an interesting direction for future work. Moreover, if a given presynaptic neuron repeatedly fires in a certain timing relationship with a postsynaptic neuron under an optimal input pattern, then spike timing dependent plasticity (STDP) could also become relevant.
Generally, the properties of the inputs received by a given postsynaptic neuron, or the membrane potential variations in that neuron that are induced by inputs, can vary across different situations in real neuronal networks; indeed, these inputs need not come from the same presynaptic source in different states [32]. This observation is consistent with the idea that input patterns could be tailored to achieve different functions.

Given these arguments in favor of the idea that inputs to neurons can indeed be varied in biologically reasonable ways, our work leads to several biological conclusions and predictions. First, the pattern of inputs needed to maximize firing will depend on a neuron’s intrinsic dynamics. This point has been discussed in previous theoretical work in the weak or noisy input limit (for example, [27, 33, 34]) as well as in some past experimental work [35, 36], and we expect this principle to hold quite generally. Second, there may exist an optimal timing following a spike at which the input strength needed to elicit an additional spike is minimized. Again, although we consider excitable neurons receiving strong inputs, this idea is related to past work on PRCs for oscillators receiving weak inputs; it differs from the concept of resonance, however, in that the optimal timing would not be determined solely by the postsynaptic neuron's intrinsic frequency. Related to this idea of optimal timing, we might predict that neurons’ afterhyperpolarization time courses would differ across brain areas that receive input streams with different characteristics (cf. [37]); that is, the intrinsic dynamics of afterhyperpolarization and network input characteristics could have co-evolved to achieve some form of optimality. Third, we would speculate that perhaps background input levels could be tailored to keep neurons in active brain regions in a regime where \delta (g) is small and hence the barrier to spiking is lowest; attention could even be related to the selection of such a state. Past results have shown that input fluctuations can establish a regime of high spike-time reliability [38, 39] or high sensitivity of firing frequency to input current strength [37, 40, 41], and tuning background input levels to facilitate or suppress postsynaptic activity is at least biologically plausible.
Finally, we would also predict that excitatory postsynaptic potential time courses themselves would differ in different neuron types, activity states, developmental stages, and brain areas, as has been seen in experimental work [30, 42], in a way that is interrelated with the dynamic properties of the postsynaptic neurons involved.

Admittedly, the precise optimal input patterns that we have found in the discrete case do depend on certain aspects of the postsynaptic neuronal dynamics that might render them hard to achieve. For example, for the LIF model, we found that optimal inputs arise when v=I, but it is not clear how this information would be available to the input source. Clearly, we are considering a situation where the input source is not directly impacted by the firing of the postsynaptic neuron receiving the inputs; recurrently connected networks are outside the scope of this work. Moreover, stochastic effects could alter the details of any target spike train and synaptic release pattern. It is reasonable to expect that stochastically perturbed versions of the optimal input patterns identified for deterministic models would provide the optimal response distribution in the presence of weak noise, but careful investigation of this issue remains for future work.

## Appendix

Similar methods to those applied to the LIF model with continuous input can be used for the theta model with continuous input. Here we concisely summarize the approach and the main results. Again, we replace \gamma (t), given in equation (28), with its time average, and thus we consider the equation

\frac{d\theta }{dt}=1-\cos \theta +(1+\cos \theta )(b+\overline{\gamma })

with

\overline{\gamma }=\frac{1}{{T}_{b}-{T}_{a}}{\int }_{{T}_{a}}^{{T}_{b}}\gamma (t)\,dt

on the time interval ({T}_{a},{T}_{b}) from one spike to the next. As \theta ({T}_{a})=-\pi and \theta ({T}_{b})=\pi,

{T}_{b}-{T}_{a}=\frac{\pi }{\sqrt{b+\overline{\gamma }}}.

Combining these equations yields

{\int }_{{T}_{a}}^{{T}_{b}}\gamma (t)\,dt=\frac{{\pi }^{2}}{{T}_{b}-{T}_{a}}-b({T}_{b}-{T}_{a}),

and spike times {T}_{i} and associated values {\overline{\gamma }}_{i} are found by solving these equations repeatedly.
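The repeated solution step can also be checked against direct numerical integration. The sketch below is illustrative only: it assumes the standard theta model form \theta '=1-\cos \theta +(1+\cos \theta )(b+\gamma (t)) and an alpha-function input \gamma (t)=A{\beta }^{2}t{e}^{-\beta t} (the form consistent with the integral expression appearing later in this appendix), with hypothetical parameter values.

```python
import math

# Hypothetical parameters: excitable theta neuron (b < 0) driven by the
# alpha-function input gamma(t) = A*beta^2*t*exp(-beta*t), with total area A.
b, A, beta = -0.1, 4.0, 1.5

def gamma(t):
    return A * beta**2 * t * math.exp(-beta * t)

def spike_times(t_end=10.0, dt=1e-4):
    """Forward-Euler integration of
    theta' = 1 - cos(theta) + (1 + cos(theta)) * (b + gamma(t)),
    starting from theta = -pi; a spike is a crossing of theta = pi."""
    theta, t, spikes = -math.pi, 0.0, []
    while t < t_end:
        theta += dt * (1 - math.cos(theta)
                       + (1 + math.cos(theta)) * (b + gamma(t)))
        t += dt
        if theta >= math.pi:      # spike fired: wrap the phase back
            spikes.append(t)
            theta -= 2 * math.pi
    return spikes

print(spike_times())
```

Each entry of the returned list is a spike time {T}_{i}; these can be compared against the times obtained by repeatedly solving the averaged equations above.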

Lower and upper bounds \underline{t}(\beta ), \overline{t}(\beta ) on firing times are estimated as the two solutions of

\gamma (t)=-b,

since \gamma (t)>-b>0 is required for spiking to start. Technically, however, a single final spike can still be completed if *γ* rises above −*b* and then falls back below it, so the upper bound here is not precise.

As with the LIF model, constraints on spike times and changes in spike times with *β* can be explored with these ideas. As one example, consider the time {T}_{1}(\beta ) at which the first spike is fired. It can be shown that \overline{t}(\beta )\to 0 as \beta \to \infty, and hence {T}_{1}(\beta )\to 0 as \beta \to \infty as well, if it exists. This result yields

\frac{{\pi }^{2}}{{T}_{1}-{T}_{0}}-b({T}_{1}-{T}_{0})\to \infty \quad \text{as } \beta \to \infty,

but for {T}_{a}={T}_{0}, {T}_{b}={T}_{1}, the left-hand side of equation (45) becomes A-A\beta {T}_{1}{e}^{-\beta {T}_{1}}-A{e}^{-\beta {T}_{1}}<2A, a contradiction. Hence, for *β* sufficiently large, no first spike can occur, and we conclude that spiking in the theta model with input (28) is lost as \beta \to \infty.

## References

1. Hubel D, Wiesel T: **Receptive fields and functional architecture of the monkey striate cortex.** *J. Physiol.* 1968, **195:** 215–243.
2. Henry G, Dreher B, Bishop P: **Orientation specificity of cells in cat striate cortex.** *J. Neurophysiol.* 1974, **37:** 1394–1409.
3. Simons D: **Response properties of vibrissa units in rat S1 somatosensory neocortex.** *J. Neurophysiol.* 1978, **41:** 798–820.
4. Abeles M: *Corticonics: Neural Circuits of the Cerebral Cortex*. Cambridge Univ. Press, Cambridge, UK; 1991.
5. Abeles M, Bergman H, Margalit E, Vaadia E: **Spatiotemporal firing patterns in the frontal cortex of behaving monkeys.** *J. Neurophysiol.* 1993, **70:** 1629–1638.
6. Pinto D, Brumberg J, Simons D, Ermentrout G: **A quantitative population model of whisker barrels: re-examining the Wilson-Cowan equations.** *J. Comput. Neurosci.* 1996, **3:** 247–264.
7. Moehlis J, Shea-Brown E, Rabitz H: **Optimal inputs for phase models of spiking neurons.** *J. Comput. Nonlinear Dyn.* 2006, **1:** 358–367.
8. Forger D, Paydarfar D: **Starting, stopping, and resetting biological oscillators: in search of optimum perturbations.** *J. Theor. Biol.* 2004, **230:** 521–532.
9. Kopell N, Ermentrout G: **Subcellular oscillations and bursting.** *Math. Biosci.* 1986, **78:** 265–291.
10. Ermentrout B: **Type I membranes, phase resetting curves, and synchrony.** *Neural Comput.* 1996, **8:** 979–1001.
11. Hoppensteadt F, Izhikevich E: *Weakly Connected Neural Networks*. Springer-Verlag, New York; 1997.
12. Rubin J, Bose A: **The geometry of neuronal recruitment.** *Physica D* 2006, **221:** 37–57.
13. Börgers C, Kopell N: **Effects of noisy drive on rhythms in networks of excitatory and inhibitory neurons.** *Neural Comput.* 2005, **17:** 557–608.
14. Rinzel J, Ermentrout G: **Analysis of neural excitability and oscillations.** In *Methods in Neuronal Modeling: From Ions to Networks*. Edited by: Koch C, Segev I. The MIT Press, Cambridge, MA; 1998:251–291.
15. Ermentrout B: *Simulating, Analyzing, and Animating Dynamical Systems*. SIAM, Philadelphia; 2002.
16. Gutkin B, Ermentrout G, Reyes A: **Phase-response curves give the responses of neurons to transient inputs.** *J. Neurophysiol.* 2005, **94:** 1623–1635.
17. Izhikevich E: *Dynamical Systems in Neuroscience*. MIT Press, Cambridge, MA; 2007.
18. Canavier C: **Analysis of circuits containing bursting neurons using phase resetting curves.** In *Bursting: The Genesis of Rhythm in the Nervous System*. Edited by: Coombes S, Bressloff P. World Scientific, Singapore; 2006:175–200.
19. Sherwood W, Guckenheimer J: **Dissecting the phase response properties of a model bursting neuron.** ArXiv:0910.1970.
20. Terman D, Kopell N, Bose A: **Dynamics of two mutually coupled inhibitory neurons.** *Physica D* 1998, **117:** 241–275.
21. Rubin J, Terman D: **Geometric singular perturbation analysis of neuronal dynamics.** In *Handbook of Dynamical Systems: Towards Applications*, vol. 2. Edited by: Fiedler B. Elsevier; 2002.
22. Clewley R, Rotstein H, Kopell N: **A computational tool for the reduction of nonlinear ODE systems possessing multiple scales.** *Multiscale Model. Simul.* 2005, **4:** 732–759.
23. Keener JP, Hoppensteadt FC, Rinzel J: **Integrate-and-fire models of nerve membrane response to oscillatory input.** *SIAM J. Appl. Math.* 1981, **41**(3): 503–517.
24. Ermentrout G: **n:m phase-locking of weakly coupled oscillators.** *J. Math. Biol.* 1981, **12:** 327–342.
25. Morris C, Lecar H: **Voltage oscillations in the barnacle giant muscle fiber.** *Biophys. J.* 1981, **35:** 193–213.
26. Wang X, Buzsaki G: **Gamma oscillation by synaptic inhibition in a hippocampal interneuronal network model.** *J. Neurosci.* 1996, **16:** 6402–6413.
27. Lundstrom B, Famulare M, Sorensen L, Spain W, Fairhall A: **Sensitivity of firing rate to input fluctuations depends on time scale separation between fast and slow variables in single neurons.** *J. Comput. Neurosci.* 2009, **27:** 277–290.
28. Piwkowska Z, Pospischil M, Brette R, Sliwa J, Rudolph-Lilith M, Bal T, Destexhe A: **Characterizing synaptic conductance fluctuations in cortical neurons and their influence on spike generation.** *J. Neurosci. Methods* 2008, **169:** 302–322.
29. Hasselmo M: **Neuromodulation and cortical function: modeling the physiological basis of behavior.** *Behav. Brain Res.* 1995, **67:** 1–27.
30. Gil Z, Connors B, Amitai Y: **Differential regulation of neocortical synapses by neuromodulators and activity.** *Neuron* 1997, **19:** 679–686.
31. Cobb S, Davies C: **Cholinergic modulation of hippocampal cells and circuits.** *J. Physiol.* 2005, **562:** 81–88.
32. Gentet L, Avermann M, Matyas F, Staiger J, Petersen C: **Membrane potential dynamics of GABAergic neurons in the barrel cortex of behaving mice.** *Neuron* 2010, **65:** 422–435.
33. Brown E, Moehlis J, Holmes P: **On the phase reduction and response dynamics of neural oscillator populations.** *Neural Comput.* 2004, **16:** 673–715.
34. Ermentrout G, Galán R, Urban N: **Relating neural dynamics to neural coding.** *Phys. Rev. Lett.* 2007, **99.**
35. Haas J, White J: **Frequency selectivity of layer II stellate cells in the medial entorhinal cortex.** *J. Neurophysiol.* 2002, **88:** 2422–2429.
36. Haas J, Dorval AD, White J: **Contributions of Ih to feature selectivity in layer II stellate cells of the entorhinal cortex.** *J. Comput. Neurosci.* 2007, **22:** 161–171.
37. Higgs M, Slee S, Spain W: **Diversity of gain modulation by noise in neocortical neurons: regulation by the slow afterhyperpolarization conductance.** *J. Neurosci.* 2006, **26:** 8787–8799.
38. Mainen Z, Sejnowski T: **Reliability of spike timing in neocortical neurons.** *Science* 1995, **268:** 1503–1506.
39. Galán R, Ermentrout G, Urban N: **Optimal time scale for spike-time reliability: theory, simulations, and experiments.** *J. Neurophysiol.* 2008, **99:** 277–283.
40. Chance F, Abbott L, Reyes A: **Gain modulation from background synaptic input.** *Neuron* 2002, **35:** 773–782.
41. Arsiero M, Luscher H, Lundstrom B, Giugliano M: **The impact of input fluctuations on the frequency-current relationships of layer 5 pyramidal neurons in the rat medial prefrontal cortex.** *J. Neurosci.* 2007, **27:** 3274–3284.
42. Bellone C, Nicoll R: **Rapid bidirectional switching of synaptic NMDA receptors.** *Neuron* 2007, **55:** 779–785.

## Acknowledgements

Some of this work was completed when JW was visiting the University of Pittsburgh, with funding from the State Scholarship Fund of China. This work was partially supported by the NSF of China Grant 10872014 (JW) and the U.S. NSF Awards DMS 0716936 and 1021701 (JR). We warmly thank Bard Ermentrout for useful discussions about the BVP for the theta model and Qishao Lu for his help with JW’s visit to the U.S.

## Author information

### Affiliations

### Corresponding author

## Additional information

### Competing interests

The authors declare that they have no competing interests.

## Authors’ original submitted files for images

Below are the links to the authors’ original submitted files for images.

## Rights and permissions

**Open Access**
This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (
https://creativecommons.org/licenses/by/2.0
), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## About this article

### Cite this article

Wang, J., Costello, W. & Rubin, J.E. Tailoring inputs to achieve maximal neuronal firing.
*J. Math. Neurosc.* **1, **3 (2011). https://doi.org/10.1186/2190-8567-1-3

Received:

Accepted:

Published:

DOI: https://doi.org/10.1186/2190-8567-1-3

### Keywords

- Synaptic Input
- Spike Time
- Input Time
- Optimal Input
- Phase Response Curve