Explicit maps to predict activation order in multiphase rhythms of a coupled cell network
- Jonathan E Rubin^{1} and
- David Terman^{2}
DOI: 10.1186/2190-8567-2-4
© Rubin, Terman; licensee Springer 2012
Received: 6 December 2011
Accepted: 4 February 2012
Published: 12 March 2012
Abstract
We present a novel extension of fast-slow analysis of clustered solutions to coupled networks of three cells, allowing for heterogeneity in the cells’ intrinsic dynamics. In the model on which we focus, each cell is described by a pair of first-order differential equations, which are based on recent reduced neuronal network models for respiratory rhythmogenesis. Within each pair of equations, one dependent variable evolves on a fast time scale and one on a slow scale. The cells are coupled with inhibitory synapses that turn on and off on the fast time scale. In this context, we analyze solutions in which cells take turns activating, allowing any activation order, including multiple activations of two of the cells between successive activations of the third. Our analysis proceeds via the derivation of a set of explicit maps between the pairs of slow variables corresponding to the non-active cells on each cycle. We show how these maps can be used to determine the order in which cells will activate for a given initial condition and how evaluation of these maps on a few key curves in their domains can be used to constrain the possible activation orders that will be observed in network solutions. Moreover, under a small set of additional simplifying assumptions, we collapse the collection of maps into a single 2D map that can be computed explicitly. From this unified map, we analytically obtain boundary curves between all regions of initial conditions producing different activation patterns.
Keywords
fast-slow analysis; clustered solutions; map; multiphase rhythm; respiration
1 Introduction
The methods of fast-slow decomposition have been harnessed for the analysis of rhythmic activity patterns in many mathematical models of single excitable or oscillatory elements featuring two or more time scales. In the analysis of relaxation oscillations, for example, singular solutions can be formed by concatenating slow trajectories associated with silent and active phases and fast jumps between these phases, and these can guide the study of true solutions. These methods can be productively extended to interacting pairs of elements, particularly when the coupling between them takes certain forms. The synaptic coupling that arises in many neuronal contexts is well suited for the use of this theory. In the case of synapses that turn on and off on the fast time scale, for example, analysis can be performed through the use of separate phase spaces for each neuron, with synaptic inputs modifying the nullsurfaces and other relevant structures in each phase space. This method has been used to treat pairs of neurons with slow synaptic dynamics as well, although higher-dimensional phase spaces arise. Similarly, synchronized and clustered solutions can be analyzed in model networks consisting of multiple identical neurons if these neurons are visualized as multiple particles in one phase space or in two phase spaces, one for active neurons and one for silent, the membership of which will change over time. Reviews of how fast-slow decompositions have been used to analyze neuronal networks can be found in, for example, [1, 2].
This form of analysis becomes significantly more challenging when networks of three or more nonidentical neurons are considered. The number of variables in each slow subsystem can become prohibitive, and if variables associated with different neurons are considered in separate phase spaces, then some method is still needed for the efficient analysis of their interactions. In this study, we introduce such a method, based on mappings on slow variables, for networks in which each element is modeled with one fast variable and one slow variable, plus a coupling variable. A strength of this method is that, by numerically computing the locations of a few key curves in phase space, we can obtain information about model trajectories generated by arbitrary initial conditions and determine how complex changes in stable firing patterns occur as parameters are varied. Moreover, the formulas defining approximations to these curves, valid under a small number of simplifying assumptions, can be expressed in an elegant analytical form. These methods are particularly tractable within networks consisting of three reciprocally coupled units, so we focus on such networks here; also, we use intrinsic dynamics arising in neuronal models, although the theory would work identically for any qualitatively similar dynamics with two time scales.
Although three-component models arise in many applications, in neuroscience and beyond, our original motivation for this work comes from the study of networks in the mammalian brain stem that generate respiratory rhythms [3]. A brief description of modeling work related to these rhythms is given in the following section. This description is followed by the equations for a particular reduced model for the respiratory network that we consider. In Section 3, we present examples of complex firing patterns that arise as solutions to the model to motivate the analysis that follows. We next demonstrate how fast-slow analysis can be used to derive reduced equations for the evolution of solutions during both the silent and active phases. In particular, we derive formulas for the times when each cell jumps up and down, and determine how these times depend on parameters and initial conditions. To derive these explicit formulas, we will make some simplifying assumptions on the equations; a similar analysis could be performed numerically if such explicit formulas could not be obtained. In Section 4, we make some further simplifying assumptions that allow us to reduce the full dynamics to a piecewise continuous two-dimensional map. Analysis of this map helps to explain how complex transitions in stable firing patterns take place as parameters are varied. We conclude the article with a discussion in Section 5.
2 Model system
2.1 Modeling respiratory rhythms
Recent work, based on experimental observations, has modeled the respiratory rhythm generating network in the brain stem as a collection of four or five neuronal populations. Three of these groups are inhibitory and are arranged in a ring, with each population inhibiting the other two. A fourth group, a relatively well-studied collection of neurons in the pre-Bötzinger Complex (pre-BötC), excites one of the inhibitory populations, also associated with the pre-BötC, and is inhibited by the other two. Finally, some studies have included a fifth, excitatory population, linked to certain other populations and likely becoming active only under certain strong perturbations to environmental or metabolic conditions [4–8]. In addition to the synaptic inputs from other populations in the network, each neuronal group receives excitatory synaptic drives from one or more additional sources, possibly related to feedback control of respiration (e.g., [9]). Under baseline conditions, the four core populations encompassed in this model generate a rhythmic output, in which the inhibitory groups take turns firing and the activity of the excitatory pre-BötC neurons slightly leads but largely overlaps that of the inhibitory pre-BötC cells.
In some of this work, a model respiratory network in which each population consists of a heterogeneous collection of fifty Hodgkin-Huxley neurons was constructed and tuned to reproduce a range of experimental observations in simulations [4, 5, 7]. Achieving this data fitting presumably required a major effort to select values for the many unknown parameters in the model. A reduced version of this model network, in which each population was modeled by a single coupled pair of ordinary differential equations, was also developed and, after parameter tuning, some analysis was performed to describe its activity in terms of fast and slow dynamics and transitions by escape and release [6, 8]. Although the reduced population model involves far fewer free parameters than the Hodgkin-Huxley type model, it still includes coupling strengths between all the synaptically connected populations, drive strengths, and adaptation time scales, among others, amounting collectively to a many-dimensional parameter space. Thus, selecting parameter values for which model behavior matches experimental findings and determining which parameter values produce what forms of dynamics represent burdensome numerical tasks. These challenges are significantly complicated by the possibility of multistability, as different initial conditions could lead to different solutions for each parameter set.
The method that we present in this study has been developed to aid in the analytical study of solutions of networks like the reduced respiratory population model. To make the presentation concrete, we present our results in terms of this model. Since two of the four active populations relevant to the normal respiratory rhythm, those in the pre-BötC, activate in near-synchrony, we will treat these as a single population and consider a three-population network. The activity of one of the key respiratory brain stem populations depends on a persistent sodium current [10–13], while the other active populations feature an adaptation current instead [5, 6]. In the three-population model that we use, we include this heterogeneity to illustrate that the theory handles heterogeneity easily, to distinguish one of the populations from the other two for ease of presentation of part of the theory, and to maintain a strong connection with the respiratory application.
2.2 The equations
Differentiation is with respect to time t, and ϵ is a small, positive parameter that we have introduced for notational convenience. In [6, 8], each v variable denotes the average voltage over a synchronized neuronal population, h is the inactivation of a persistent sodium current for members of the inspiratory pre-BötC population, and the ${m}_{i}$ represent the activation levels of an adaptation current for two other respiratory populations; however, each variable could just as easily represent analogous quantities for a single neuron.
where C is membrane capacitance and ${I}_{NaP}(v,h)={g}_{NaP}{m}_{{p}_{\mathrm{\infty}}}(v)h(v-{V}_{Na})$, ${I}_{Kdr}(v)={g}_{Kdr}{n}_{\mathrm{\infty}}^{4}(v)(v-{V}_{K})$, ${I}_{L}(v)={g}_{L}(v-{V}_{L})$, and ${I}_{ad}(v,m)={g}_{ad}m(v-{V}_{K})$ represent persistent sodium, potassium, leak, and adaptation currents, respectively. In each of these currents, the g parameter denotes conductance and the V parameter is the current’s reversal potential. We use the standard convention of representing ${I}_{NaP}$ and ${I}_{Kdr}$ activation as sigmoidal functions of voltage v, ${m}_{{p}_{\mathrm{\infty}}}(v)$ and ${n}_{\mathrm{\infty}}(v)$, respectively. The coupling function in system (1) is given by ${S}_{\mathrm{\infty}}(v)=1/\{1+\mathrm{exp}[(v-{\theta}_{I})/{\sigma}_{I}]\}$, which closely approximates a Heaviside step function due to the small size of ${\sigma}_{I}$ and which is multiplied by a strength factor b each time it appears. The final term, ${g}_{E}{d}_{i}({v}_{i}-{V}_{E})$, in each voltage equation represents a tonic synaptic drive from a feedback population; the strength factors ${d}_{i}$ could change with changing metabolic or environmental conditions, but we treat them as constants in this article. Additional details about the functions in (1) and (2), as well as parameter values used, are given in Appendix 1. Appendix 2 also presents a general list of assumptions, satisfied by (1), (2) with the parameter values used, under which our theoretical methods will work.
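As a concrete check on these definitions, the currents can be evaluated directly. The following sketch codes them in Python, using the sigmoidal convention ${x}_{\mathrm{\infty}}(v)=1/\{1+\mathrm{exp}[(v-{\theta}_{x})/{\sigma}_{x}]\}$ and sample parameter values from Table 1 (full-model column); the state values v, h, m below are illustrative, not taken from any simulation in the article.

```python
import math

# Ionic currents from system (1), with parameter values from Table 1.
g_NaP, V_Na = 0.25, 50.0
g_Kdr, V_K  = 0.25, -85.0
g_L,   V_L  = 0.14, -60.0
g_ad        = 0.5
theta_mp, sigma_mp = -50.0, -0.1   # I_NaP activation
theta_n,  sigma_n  = -30.0, -4.0   # I_Kdr activation
theta_I,  sigma_I  = -32.0, -0.1   # synaptic threshold and slope

def sigmoid(v, theta, sigma):
    """Standard sigmoidal (in)activation function x_inf(v)."""
    return 1.0 / (1.0 + math.exp((v - theta) / sigma))

def I_NaP(v, h): return g_NaP * sigmoid(v, theta_mp, sigma_mp) * h * (v - V_Na)
def I_Kdr(v):    return g_Kdr * sigmoid(v, theta_n, sigma_n) ** 4 * (v - V_K)
def I_L(v):      return g_L * (v - V_L)
def I_ad(v, m):  return g_ad * m * (v - V_K)

# Coupling function: near-Heaviside because |sigma_I| is small.
def S_inf(v):    return sigmoid(v, theta_I, sigma_I)

v = -30.0  # illustrative membrane potential, above the synaptic threshold theta_I
print(round(S_inf(v), 6))   # 1.0: the synapse is essentially fully on
```

Because ${\sigma}_{I}<0$ with small magnitude, $S_{\infty}$ is increasing in v and switches from nearly 0 to nearly 1 as v crosses ${\theta}_{I}$, which is what justifies replacing it by a step function in the singular limit analysis below.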
3 Fast-slow analysis
3.1 Introduction
We analyze solutions using fast-slow analysis. The basic idea is that the solution evolves on two different time scales: during the jumps up and down, the solution evolves on a fast time scale, while during the silent and active phases, the solution evolves on a slow time scale. The fast-slow analysis allows us to derive reduced equations that determine the evolution of the solution during each of these phases. In particular, we derive explicit formulas for the times when each cell jumps up and down and use these to determine the outcomes of the races to threshold, depending on parameters and initial conditions. To derive these formulas, we will make some simplifying assumptions on the equations; in situations in which such formulas cannot be obtained, a similar analysis can be done numerically.
3.2 Slow and fast equations
Table 1 Parameter values for full model and singular limit simulations and singular limit analysis corresponding to Figure 3
Conductances (nS) | Reversal potentials (mV) | Half activations (mV) | Slopes | Time constants (ms) | Coupling constants | Other |
---|---|---|---|---|---|---|
${g}_{NaP}=0.25$ | ${V}_{Na}=50$ | ${\theta}_{h}=-48$ | ${\sigma}_{h}=3$ | ${\tau}_{a,h}=9.5$ | ${b}_{12}=0.4$ | ϵ = 0.01 |
${g}_{Kdr}=0.25$ | ${V}_{K}=-85$ | ${\theta}_{n}=-30$ | ${\sigma}_{n}=-4$ | ${\tau}_{a,2}=30$ | ${b}_{13}=0.4$ | $C=1\text{pF}$ |
${g}_{ad}=0.5$ | | ${\theta}_{m}=-36$ | ${\sigma}_{m}=-{10}^{-1}$ | ${\tau}_{a,3}=45$ | ${b}_{21}=0.2$ | ${d}_{1}=0.21$ |
${g}_{L}=0.14$ | ${V}_{L}=-60$ | ${\theta}_{{m}_{p}}=-50$ | ${\sigma}_{{m}_{p}}=-{10}^{-1}$ | ${\tau}_{b,h}=-4.5$ | ${b}_{23}=0.24$ | ${d}_{2}=0.73$ |
${g}_{I}=3.0$ | ${V}_{I}=-75$ | ${\theta}_{h}^{\tau}=-48$ | ${\sigma}_{h}^{\tau}=-{10}^{-2}$ | ${\tau}_{b,2}=-10$ | ${b}_{31}=0.3$ | ${d}_{3}=1.4$ |
${g}_{E}=0.5$ | ${V}_{E}=0$ | ${\theta}_{2}^{\tau}=0$ | ${\sigma}_{2}^{\tau}={10}^{-1}$ | ${\tau}_{b,3}=-32.3$ | ${b}_{32}=0.25$ | ${\theta}_{I}=-32$ |
| | ${\theta}_{3}^{\tau}=0$ | ${\sigma}_{3}^{\tau}={10}^{-1}$ | | | ${\sigma}_{I}=-{10}^{-1}$ |
Singular limit parameter values | ${\sigma}_{L}=\frac{1}{950}$ | ${\sigma}_{R}=\frac{1}{500}$ | ${\lambda}_{L}=\frac{1}{2\text{,}000}$ | ${\lambda}_{R}=\frac{1}{2\text{,}000}$ | ${\mu}_{L}=\frac{1}{1\text{,}270}$ | ${\mu}_{R}=\frac{1}{1\text{,}270}$ |
Conductances (nS) | Reversal potentials (mV) | Half activations (mV) | Slopes | Time constants (ms) | Coupling constants | Other |
---|---|---|---|---|---|---|
${g}_{NaP}=0.25$ | ${V}_{Na}=50$ | ${\theta}_{h}=-48$ | ${\sigma}_{h}=3$ | ${\tau}_{a,h}=5$ | ${b}_{12}=0.4$ | ϵ = 0.01 |
${g}_{Kdr}=0.25$ | ${V}_{K}=-85$ | ${\theta}_{n}=-30$ | ${\sigma}_{n}=-4$ | ${\tau}_{a,2}=35$ | ${b}_{13}=0.5$ | $C=1\text{pF}$ |
${g}_{ad}=0.6$ | | ${\theta}_{m}=-40$ | ${\sigma}_{m}=-{10}^{-3}$ | ${\tau}_{a,3}=20$ | ${b}_{21}=0.3$ | ${d}_{1}=0.55$ |
${g}_{L}=0.2$ | ${V}_{L}=-60$ | ${\theta}_{{m}_{p}}=-40$ | ${\sigma}_{{m}_{p}}=-{10}^{-2}$ | ${\tau}_{b,h}=-1.5$ | ${b}_{23}=0.5$ | ${d}_{2}=1.4$ |
${g}_{I}=3.0$ | ${V}_{I}=-75$ | ${\theta}_{h}^{\tau}=-48$ | ${\sigma}_{h}^{\tau}=-{10}^{-2}$ | ${\tau}_{b,2}=0$ | ${b}_{31}=0.3$ | ${d}_{3}=1.5$ |
${g}_{E}=0.4$ | ${V}_{E}=0$ | ${\theta}_{2}^{\tau}=-40$ | ${\sigma}_{2}^{\tau}={10}^{-3}$ | ${\tau}_{b,3}=0$ | ${b}_{32}=2.0$ | ${\theta}_{I}=-40$ |
| | ${\theta}_{3}^{\tau}=-40$ | ${\sigma}_{3}^{\tau}={10}^{-3}$ | | | ${\sigma}_{I}=-{10}^{-3}$ |
Singular limit parameter values | ${\sigma}_{L}=\frac{1}{500}$ | ${\sigma}_{R}=\frac{1}{350}$ | ${\lambda}_{L}$: see text | ${\lambda}_{R}=\frac{1}{3\text{,}500}$ | ${\mu}_{L}$: see text | ${\mu}_{R}=\frac{1}{2\text{,}000}$ |
3.3 The race
As described above, when one of the cells jumps down, there is a race to see which of the other cells reaches threshold first and then inhibits the other cells. Here we derive formulas that determine which cell wins the race to threshold.
First suppose that cell 1 jumps down from the active phase and releases cells 2 and 3 from inhibition. We need to determine the times it takes for the membrane potentials of these two cells to reach the synaptic threshold. While jumping up, these membrane potentials satisfy (8), so once we determine the initial conditions ${v}_{k}(0)$, $k=2,3$, we can solve for the jump-up times.
Cell 1 wins or loses the race according to whether ${t}_{1k}(h)<{t}_{jk}({m}_{j})$ or ${t}_{1k}(h)>{t}_{jk}({m}_{j})$, respectively. Each equation ${t}_{1k}(h)={t}_{jk}({m}_{j})$ defines a curve in the $(h,{m}_{j})$ plane, which we denote as ${\mathcal{C}}_{1j}$. These curves are also shown in Figure 3, where we numerically solved for ${\mathcal{C}}_{12}$ and ${\mathcal{C}}_{13}$. Note that points above the curve ${\mathcal{C}}_{1j}$ correspond to cell 1 winning the race and points below this curve correspond to cell j winning the race.
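To make the role of the curves ${\mathcal{C}}_{1j}$ concrete, here is a toy sketch with hypothetical jump-up time functions, not the actual formulas derived from (8): t1 is taken decreasing in the recovery variable h (more recovered, faster to threshold) and tj increasing in the adaptation variable ${m}_{j}$ (more adapted, slower), with illustrative constants T1 and TJ.

```python
# Hypothetical jump-up times (illustrative stand-ins, NOT the paper's formulas):
T1, TJ = 2.0, 8.0   # illustrative time-scale constants for cell 1 and cell j

def t1(h):
    return T1 / h       # decreasing in the recovery variable h

def tj(m_j):
    return TJ * m_j     # increasing in the adaptation variable m_j

def race_winner(h, m_j):
    """Return 1 if cell 1 reaches the synaptic threshold first, else 'j'."""
    return 1 if t1(h) < tj(m_j) else 'j'

# The curve C_1j is defined implicitly by t1(h) = tj(m_j); for these toy forms
# it can be solved explicitly: m_j = T1 / (TJ * h).
def curve_C1j(h):
    return T1 / (TJ * h)

# Consistent with the text: points above C_1j -> cell 1 wins; below -> cell j.
h = 0.5
print(curve_C1j(h))          # 0.5
print(race_winner(h, 0.8))   # 1 (point lies above the curve)
print(race_winner(h, 0.2))   # 'j' (point lies below the curve)
```

For the model's actual time functions the curve generally cannot be solved in closed form, which is why ${\mathcal{C}}_{12}$ and ${\mathcal{C}}_{13}$ are computed numerically in Figure 3; the above-versus-below classification, however, works exactly the same way.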
3.4 Predicting jumping sequences
We now construct six 2D maps, ${\mathrm{\Pi}}_{ij}$, that allow us to predict the order in which the cells jump up and down, to and from the active phase. To explain what these maps are, suppose that i, j, and k are the cells’ distinct indices and, for convenience, temporarily let ${s}_{1}=h$, ${s}_{2}={m}_{2}$, ${s}_{3}={m}_{3}$ denote the slow variables for the three cells. If, at some time, cell i jumps down and cell k jumps up, then we will define a map ${\mathrm{\Pi}}_{ik}$ from the $({s}_{j},{s}_{k})$ phase plane to the $({s}_{i},{s}_{j})$ phase plane that gives the position of $({s}_{i},{s}_{j})$ when cell k jumps down. We can determine the next cell to jump up, once cell k jumps down, by comparing the position of ${\mathrm{\Pi}}_{ik}({s}_{j},{s}_{k})$ to that of ${\mathcal{C}}_{ij}$. For example, suppose that cell 1 jumps down. Then either cell 2 or cell 3 will jump up depending on whether $({m}_{2},{m}_{3})$ lies above or below the curve ${\mathcal{C}}_{23}$, respectively. If cell 2 jumps up, then the map ${\mathrm{\Pi}}_{12}({m}_{2},{m}_{3})$ gives the position of $(h,{m}_{3})$ when cell 2 jumps down. This position, in turn, determines whether cell 1 or cell 3 is the next cell to jump up; that is, cell 1 or cell 3 is the next cell to jump up if $(h,{m}_{3})={\mathrm{\Pi}}_{12}({m}_{2},{m}_{3})$ lies above or below ${\mathcal{C}}_{13}$, respectively. Continuing in this way - comparing the output of the maps to the location of curves ${\mathcal{C}}_{ij}$ - we can determine the cells’ jumping sequences.
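The bookkeeping behind this scheme can be mimicked with a toy discrete system. In the sketch below, the update `step` is a crude stand-in for the maps ${\mathrm{\Pi}}_{ij}$, and the rule that the least-adapted silent cell wins the race stands in for the comparison against the curves ${\mathcal{C}}_{ij}$; the reset value, decay rate, and initial slow-variable values are all illustrative assumptions, not quantities derived from system (1).

```python
# Toy slow-variable update: when the active cell jumps down, its own adaptation
# variable has been reset high, while the silent cells' variables have decayed.
s_reset, decay = 1.0, 0.6   # illustrative constants, not model values

def step(s, active):
    """Stand-in for applying one map Pi_ij over one active phase of `active`."""
    return {c: (s_reset if c == active else decay * s[c]) for c in s}

def next_active(s, just_down):
    """Race between the two silent cells: here the cell with the smaller
    adaptation variable (least adapted) jumps up first -- a stand-in for
    checking which side of the curve C_jk the slow-variable point lies on."""
    racers = [c for c in s if c != just_down]
    return min(racers, key=lambda c: s[c])

s = {1: 0.9, 2: 0.5, 3: 0.7}   # illustrative initial slow variables
active = 1
seq = []
for _ in range(9):
    s = step(s, active)               # slow variables when `active` jumps down
    active = next_active(s, active)   # winner of the race to threshold
    seq.append(active)
print(''.join(map(str, seq)))   # 231231231: an attracting cyclic jump sequence
```

Even this caricature shows the key structural point: iterating "apply a map, then compare against a curve" produces a symbolic jump sequence, and from arbitrary initial conditions the sequence can converge to an attracting pattern. In the article the same logic is carried out with the explicit maps ${\mathrm{\Pi}}_{ij}$ and numerically computed curves ${\mathcal{C}}_{ij}$.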
We derive explicit formulas for the six maps ${\mathrm{\Pi}}_{ij}$. The first step is to determine the value of the slow variable for cell i when cell i jumps down. We claim that there exist unique constants ${s}_{i}^{\ast}$ such that cell i jumps down when ${s}_{i}={s}_{i}^{\ast}$; see Figure 2, where ${s}_{1}^{\ast}={h}^{\ast}$, ${s}_{2}^{\ast}={m}_{2}^{\ast}$ and ${s}_{3}^{\ast}={m}_{3}^{\ast}$. These constants exist and are unique because: (i) cell i jumps down when it is in the active phase with ${v}_{i}={\theta}_{I}$; (ii) while cell i is in the active phase, $({v}_{i},{s}_{i})$ lies along the right branch of the ${v}_{i}$-nullcline, $\{({v}_{i},{s}_{i}):{F}_{i}({v}_{i},{s}_{i})-{g}_{E}{d}_{i}({v}_{i}-{V}_{E})=0\}$; and (iii) each of these right branches is monotone increasing or decreasing. This last statement can be verified for the concrete model (1) given in Section 2 by explicitly solving for each ${s}_{i}$ in terms of ${v}_{i}$. However, this monotonicity is also present in most reduced models for neuronal activity.
for $j\in \{2,3\}$.
This constraint is appropriate because if, for example, ${m}_{2}>{m}_{2}^{\ast}$, then once cell 2 is released from inhibition and jumps up, it can never reach the threshold ${v}_{2}={\theta}_{I}$.
If cell i jumps down at time 0 and the inputs to the map specify that cell j jumps next, then the location of the coordinate determined by the outputs of ${\mathrm{\Pi}}_{ij}$, relative to the curve ${\mathcal{C}}_{ik}$, determines whether cell i or cell k will follow cell j into the active phase.
Taken collectively, the curves and maps defined in this section give us a complete view of the possible jump sequences that system (1) can generate, at least if ϵ is small enough to justify the fast-slow decomposition that we have used. Consider the regions in the $({m}_{2},{m}_{3})$, $(h,{m}_{2})$, and $(h,{m}_{3})$ phase planes that satisfy (19) and (22). Within the $({m}_{2},{m}_{3})$ plane, assume that the curve ${\mathcal{C}}_{23}$ intersects the relevant region; otherwise, cell 1 will always be followed by the same other cell. The map ${\mathrm{\Pi}}_{12}$ takes the region above the curve to a set in the $(h,{m}_{3})$ plane and the map ${\mathrm{\Pi}}_{13}$ takes the region below the curve to a set in the $(h,{m}_{2})$ plane, with similar actions for ${\mathrm{\Pi}}_{21}$, ${\mathrm{\Pi}}_{23}$, ${\mathrm{\Pi}}_{31}$, ${\mathrm{\Pi}}_{32}$ on the other planes. Since the solutions to the ODEs we consider are continuous in initial conditions, the maps take connected regions into connected regions, and thus we only need to consider the actions of the maps on the regions’ boundaries in order to determine the possible next outcomes from a given starting point. For a particular parameter set, repeated iteration of the maps may show convergence to a single attracting jump sequence or may otherwise constrain the jump orders that are possible. Alternatively, inverses of the maps can be easily defined using the backwards flow of the ODEs, and repeated iterations of these inverses, applied to some selected region in one of the phase planes, show which sets contain initial conditions that could end up in the selected region.
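The boundary argument can be illustrated with a hypothetical affine stand-in for one of the maps: because the map is continuous, the image of a connected region is bounded by the image of its boundary, so tracking boundary points (here, just the corners of a rectangle, since the stand-in is affine) suffices to locate the image. The coefficients below are illustrative only.

```python
# Hypothetical affine stand-in for one of the maps Pi_ij (coefficients are
# illustrative, not derived from the model).
def Pi(p):
    s_j, s_k = p
    return (0.5 * s_k + 0.2, 0.8 * s_j + 0.1)

# Corners of a rectangular region in the (s_j, s_k) plane.
corners = [(0.1, 0.1), (0.1, 0.9), (0.9, 0.9), (0.9, 0.1)]
images = [Pi(p) for p in corners]

# For an affine map, the image of the rectangle is the parallelogram spanned by
# the corner images, so any interior point maps inside their bounding ranges.
interior = Pi((0.5, 0.5))
xs = [q[0] for q in images]
ys = [q[1] for q in images]
assert min(xs) <= interior[0] <= max(xs)
assert min(ys) <= interior[1] <= max(ys)
print(images)
```

For the nonlinear maps of the article the boundary must be sampled more finely than four corners, but the principle is identical: once the boundary image is known relative to a curve ${\mathcal{C}}_{ij}$, the fate of every interior initial condition is constrained.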
3.5 Numerical examples
We now use numerical computations, performed with MATLAB and XPPAUT (http://www.pitt.edu/~phase), to illustrate the theory from the previous subsections. Figure 3 shows curves and regions in each of the 2D phase planes associated with pairs of slow variables of model (1). These structures were generated by starting from the full model, with function and parameter values given in the Appendix (see Table 1), and making the simplifying assumptions described above for the ϵ = 0 limit (including adjusting ${\theta}_{m}$ to −54 mV from −50 mV to compensate for the switch from a smooth function to a Heaviside in the singular limit). In each panel, the relevant region can be defined using (19), (22), and the dashed straight line segments are boundaries of this region, each corresponding to ${h}^{\ast}$, ${m}_{2}^{\ast}$, or ${m}_{3}^{\ast}$. Within each region, there is a curve ${\mathcal{C}}_{ij}$ that separates initial conditions that lead to different jumping outcomes, as discussed above. These curves are drawn in the same color as the boundary lines. For example, in Figure 3A, the solid blue curve in the $({m}_{2},{m}_{3})$ plane is ${\mathcal{C}}_{23}$. If $({m}_{2},{m}_{3})$ lies in the region ${\mathcal{R}}_{12}$, bounded below by ${\mathcal{C}}_{23}$, above by the dashed blue line, and to the left by the ${m}_{3}$-axis, at the moment when cell 1 jumps down, then cell 2 jumps up next and ${\mathrm{\Pi}}_{12}({m}_{2},{m}_{3})$ is defined, while a value of $({m}_{2},{m}_{3})$ in the analogous region ${\mathcal{R}}_{13}$ below ${\mathcal{C}}_{23}$ yields a jump by cell 3, characterized by ${\mathrm{\Pi}}_{13}({m}_{2},{m}_{3})$. Similar regions are indicated in black in the $(h,{m}_{2})$ plane in Figure 3B and in red in the $(h,{m}_{3})$ plane in Figure 3C.
Consider again the $({m}_{2},{m}_{3})$ plane shown in Figure 3A. The region ${\mathcal{R}}_{12}$ is mapped by ${\mathrm{\Pi}}_{12}$ to a connected region in the $(h,{m}_{3})$ plane. In Figure 3C, we represent part of the boundary of ${\mathrm{\Pi}}_{12}({\mathcal{R}}_{12}):=\{{\mathrm{\Pi}}_{12}({m}_{2},{m}_{3}):({m}_{2},{m}_{3})\in {\mathcal{R}}_{12}\}$ with blue curves, carrying over the coloring of ${\mathcal{R}}_{12}$ from Figure 3A. Similarly, a region ${\mathcal{R}}_{32}$ below ${\mathcal{C}}_{12}$ in the $(h,{m}_{2})$ plane in Figure 3B also yields jumping by cell 2 and is mapped by ${\mathrm{\Pi}}_{32}$ to a connected region in the $(h,{m}_{3})$ plane. We indicate this region with black boundary curves in Figure 3C, carrying over the coloring from Figure 3B. The regions outlined in black and blue in the $(h,{m}_{3})$ plane share a common boundary, corresponding to the condition that $(h,{m}_{3})=({h}^{\ast},{m}_{3}^{\ast})$ when cell 2 jumps up. We use a dashed black line to denote this common boundary in Figure 3C (by arbitrary convention, we color the dashed line to match the upper set). Now, the entire regions outlined in blue and black in the $(h,{m}_{3})$ plane lie below the red curve ${\mathcal{C}}_{13}$ (Figure 3C). Thus, we immediately know that, no matter what happened before, cell 3 will win the race and jump up when cell 2 jumps down. Similarly, in the $({m}_{2},{m}_{3})$ plane shown in Figure 3A, the black-bounded region ${\mathrm{\Pi}}_{31}({\mathcal{R}}_{31})$ and the red-bounded region ${\mathrm{\Pi}}_{21}({\mathcal{R}}_{21})$ lie entirely below ${\mathcal{C}}_{23}$, and therefore cell 3 will always jump up after cell 1 as well.
possibly discarding a brief transient.
We selected various values of $({m}_{2},{m}_{3})$ constrained by (19) and we used each as an initial condition, assuming that cell 1 jumped down from the active phase at time 0. From each starting point, we repeatedly solved for the times involved in the race to jump up, using Equations (11), (14), and (15). We found that the trajectory emerging from each initial condition converged to the same attractor, with a jump sequence 13231323… . This attractor is illustrated with filled circles in Figure 3; the black circle in Figure 3A is mapped by ${\mathrm{\Pi}}_{13}$ to the blue circle in Figure 3B, which is mapped by ${\mathrm{\Pi}}_{32}$ to the black circle in Figure 3C, which is mapped by ${\mathrm{\Pi}}_{23}$ to the red circle in Figure 3B, which is mapped by ${\mathrm{\Pi}}_{31}$ back to the original black circle in Figure 3A. Note that the next jump predicted by the location of each circle matches that which actually occurs. Also, a subtle point arises because the h coordinate of the red circle is large. From this starting point, when cell 1 jumps up, it spends a long time in the active phase (large ${T}_{1}^{A}$), almost as long as if it started from $h=1$. During this time, trajectories in the $({m}_{2},{m}_{3})$ plane with ${m}_{3}(0)={m}_{3}^{\ast}$ and different initial values of ${m}_{2}$ get compressed; see Equation (5). Thus, the black circle in Figure 3A ends up very close to the corner of the black region, which corresponds to ${\mathrm{\Pi}}_{31}(1,{m}_{2}^{\ast})$.
We also performed direct numerical simulations of system (1), using steep but smooth sigmoidal functions instead of Heaviside functions for ${m}_{\mathrm{\infty}}(v)$, ${n}_{\mathrm{\infty}}(v)$, and ${S}_{\mathrm{\infty}}(v)$, as described in the Appendix. These simulations also gave a 13231323… firing pattern, as predicted by the analysis. We defined firing transitions in these simulations using voltage decreases through $-33\text{ mV}$ (the half-activation of the synaptic function ${S}_{\mathrm{\infty}}(v)$ was set to $-32\text{ mV}$ to agree with ${\theta}_{I}$). We allowed the system to converge to its stable firing pattern and then plotted the slow variable coordinates at these firing transitions as open circles in the corresponding panels of Figure 3. These coordinates agree well with the singular limit analysis.
In addition to the solid and open circles corresponding to the attractors in the singular limit and full simulations, respectively, certain points associated with transients are also plotted in Figure 3. An example of a transient 1,3,1,3 firing sequence found with the singular limit formulas, which led to a subsequent 2313231323… activation pattern, is marked with the blue asterisks in Figure 3A,B. In this example, initial conditions were chosen such that cell 1 jumped down with $({m}_{2},{m}_{3})=(0.29,0.6)$, indicated by the rightmost asterisk in Figure 3A (label 1). Since the asterisk is below the blue solid curve ${\mathcal{C}}_{23}$ in the plane shown, cell 3 jumps next. Obviously, the image of the initial point under ${\mathrm{\Pi}}_{13}$ must lie in the range of ${\mathrm{\Pi}}_{13}$ in the $(h,{m}_{2})$ plane, which is bounded to the left, below, and to the right by solid blue curves and above by a dashed red curve. We observe (Figure 3B, label 2) that this image lies at about $(h,{m}_{2})=(0.51,0.24)$, which is indeed in the relevant region but also is above the black solid curve ${\mathcal{C}}_{12}$, meaning that cell 1 jumps up next. The image of $(h,{m}_{2})$ under ${\mathrm{\Pi}}_{31}$ is marked by the other asterisk in Figure 3A (label 3), which lies below ${\mathcal{C}}_{23}$ such that cell 3 jumps again after cell 1. Finally, the image of that point under ${\mathrm{\Pi}}_{13}$ is labeled by the other asterisk in Figure 3B (label 4); since that point is below the black curve ${\mathcal{C}}_{12}$, cell 2 finally gets to fire after this second activation of cell 3.
We also obtained a similar 1,3,1,3 transient in full model simulations corresponding to the singular limit analysis. To match the singular limit, we used $({m}_{2},{m}_{3})=(0.29,0.6)$ as our initial condition, with ${v}_{1}=-33\text{ mV}$ and $h={h}^{\ast}$ such that time 0 represented the beginning of the jump down of cell 1. This point and the slow variable values at the next 3 jump down transitions are marked with red open squares in Figure 3. By construction, the red open square at label 1 lies in the same position as the blue asterisk there. The rest of these markers, near labels 2, 3, 4, lie quite close to the blue asterisks, showing that, in addition to correctly predicting the jumping sequence, the singular limit analysis gives good estimates of the slow variable values at jumping times in the original system, although the agreement is not perfect since ϵ is nonzero in the original system and our analysis replaces sigmoidal activation and coupling functions by step functions.
4 From six maps to one
4.1 Derivation of the map
We now present a somewhat different approach. Previously, we considered the six separate maps between the three different 2D slow phase planes, $(h,{m}_{2})$, $(h,{m}_{3})$, and $({m}_{2},{m}_{3})$. Here, we demonstrate that it is possible to use these six maps to reduce the dynamics to a single map, defined from some subset of the $({m}_{2},{m}_{3})$ phase plane into itself. Moreover, with some simplifying assumptions, we will derive an explicit formula for the map.
In other words, iterates of Π keep track of the positions of $({m}_{2},{m}_{3})$ every time that cell 1 jumps down from the active phase.
We can obtain explicit formulas for this map if we assume that the slow variables satisfy (4), (5), and (6). Different sets of formulas will be relevant on the regions ${\mathcal{R}}_{12}$ or ${\mathcal{R}}_{13}$, above or below ${\mathcal{C}}_{23}$ respectively, corresponding to whether cell 2 or cell 3 wins the race and jumps up first when cell 1 jumps down. We can subdivide each of these regions based on the number of times that cells 2 and 3 take turns firing after cell 1 jumps down, before cell 1 jumps up again. On each of these subregions of the $({m}_{2},{m}_{3})$ phase plane, a different formula applies. Here we derive the formulas for the case in which cell 2 jumps up at $\tau =0$ when cell 1 jumps down. Formulas for the case in which cell 3 jumps up at $\tau =0$ are derived in a similar manner. First we derive the formulas for the map Π and then determine for which region of the $({m}_{2},{m}_{3})$ phase plane each component of the formula is valid.
Recall that cells 2 and 3 may take turns firing for $0\le \tau <T$. Let ${N}_{2}$ and ${N}_{3}$ be the number of times that cells 2 and 3, respectively, jump up during this time interval. We note that either the two cells fire the same number of times, in which case ${N}_{3}={N}_{2}$, or cell 2 fires one more time than cell 3, in which case ${N}_{3}={N}_{2}-1$. Using the definitions and notation described in the preceding section, we find that:
We derive explicit formulas for these maps using the formulas for ${\mathrm{\Pi}}_{ij}$ derived in the preceding section. In what follows, we use the notation $\mathrm{\Pi}({m}_{2},{m}_{3})=({\stackrel{\u02c6}{m}}_{2},{\stackrel{\u02c6}{m}}_{3})$, and we employ the time constants ${\sigma}_{L}$, ${\sigma}_{R}$, ${\lambda}_{L}$, ${\lambda}_{R}$, ${\mu}_{L}$ and ${\mu}_{R}$ introduced in Section 3.2. The formulas are derived by direct calculations; we first consider two simple cases, before presenting the general formulas. For these formulas, recall that ${h}^{\ast}$ denotes the value of h attained when cell 1 is about to jump down (i.e., cell 1 is active, cell 1 is not inhibited, and ${v}_{1}={\theta}_{I}$, see Figure 2A); similarly, ${m}_{2}^{\ast}$, ${m}_{3}^{\ast}$ denote the values of ${m}_{2}$, ${m}_{3}$ when cell 2 or cell 3 is about to jump down (${v}_{2}={\theta}_{I}$, ${v}_{3}={\theta}_{I}$), respectively.
Case 1: ${N}_{2}=1$, ${N}_{3}=0$.
To achieve ${N}_{3}=0$, we need cell 1, not cell 3, to jump up when cell 2 jumps down. From the earlier discussion, this is true if $({h}^{1},{m}_{3}^{1})$ lies above the curve ${\mathcal{C}}_{13}$. Together with (24), this criterion leads to a condition on $({m}_{2},{m}_{3})$, which defines a region in the $({m}_{2},{m}_{3})$ plane where this case occurs. One could compute this region numerically using the definition of ${\mathcal{C}}_{13}$ given in the preceding section. Alternatively, we will now make a simplifying assumption that allows us to compute this region analytically. The validity of this assumption will be confirmed by comparing the firing sequence of the full model with that predicted by the analysis in the examples in the following section.
Our simplifying assumption can be described as follows: Suppose that at some time, say $t=0$, cell 1 lies in the silent phase and is released from inhibition (by either cell 2 or cell 3). We assume that the time it takes cell 1 to jump up and reach the threshold ${\theta}_{I}$ is independent of $h(0)$. It follows from this assumption that the curves ${\mathcal{C}}_{12}$ and ${\mathcal{C}}_{13}$ are horizontal; that is, they can be written as ${m}_{2}={M}_{2}$ and ${m}_{3}={M}_{3}$ for some constants ${M}_{2}$ and ${M}_{3}$.
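Under this assumption, membership in the Case 1 region can be checked pointwise. The sketch below assumes, in the spirit of (4)-(6), that each slow variable relaxes exponentially on a given branch; the function names, the fixed active-phase duration `T2`, and all numerical values are illustrative assumptions, not quantities from the article.

```python
import math

def relax(m0, m_inf, tau, t):
    """Exponential relaxation m(t) = m_inf + (m0 - m_inf) * exp(-t / tau),
    the closed form for a slow variable obeying a linear ODE on one branch."""
    return m_inf + (m0 - m_inf) * math.exp(-t / tau)

def in_case1_region(m3, M3, T2, m3_inf=0.1, tau3=1.0):
    """Sketch of the Case 1 (N2 = 1, N3 = 0) test: after cell 2's active
    phase of (assumed fixed) duration T2, cell 3's slow variable must
    still lie above the horizontal curve C13 (m3 > M3), so that cell 1,
    not cell 3, jumps up when cell 2 jumps down. All values illustrative."""
    return relax(m3, m3_inf, tau3, T2) > M3

print(in_case1_region(0.9, M3=0.5, T2=0.5))  # True: m3 stays above M3
print(in_case1_region(0.4, M3=0.5, T2=0.5))  # False: m3 lies below M3
```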
Here, the superscript ‘2’ reflects that cell 2 jumps up when cell 1 jumps down, while the subscript ‘1’ corresponds to the number of jumps that follow before cell 1 jumps up again (i.e., ${N}_{2}+{N}_{3}=1$). There is another curve, given by ${m}_{2}={\mathcal{K}}_{1}^{3}({m}_{3})$, corresponding to cell 3 jumping up when cell 1 jumps down. The formula for ${\mathcal{K}}_{1}^{3}$ is derived in a similar manner, and ${\mathcal{K}}_{1}^{3}\subset {\mathcal{R}}_{13}$, below ${\mathcal{C}}_{23}$.
Case 2: ${N}_{2}=1$, ${N}_{3}=1$.
Formulas (30) and (31) hold only if cells 2 and 3 take turns firing ${N}_{2}$ and ${N}_{3}$ times, respectively, before cell 1 finally jumps up. As before, we can use the explicit formulas for ${h}^{k}$, ${m}_{2}^{k}$, ${m}_{3}^{k}$ to derive explicit conditions on the initial point $({m}_{2},{m}_{3})$ for when this is true. We do not give the explicit general formula here. In the following section, we consider concrete examples and give the formulas needed for the analysis of those examples.
4.2 Numerical examples
Next consider Figure 6B. Now $({\lambda}_{L},{\mu}_{L})=(1/4200,1/1700)$. As before, when cell 1 jumps down at the red circle marked by the arrow, $({m}_{2},{m}_{3})$ lies below ${\mathcal{C}}_{23}$, so cell 3 jumps up when cell 1 jumps down. However, now ${m}_{2}<{k}_{2}^{3}({m}_{3})$. According to the theory, this relation implies that after cell 3 jumps down, cell 2 jumps up and down, and then cell 3 does the same again before cell 1 jumps up, as observed in the simulation. We note that for this example, ${\mathcal{K}}_{3}^{3}({m}_{3})<0$, so cell 3 can fire no more than two times between firings of cell 1. Note that the firing order of the attractor in Figures 5B and 6B, namely 1323, matches that shown in Figure 3.
For Figure 6C, $({\lambda}_{L},{\mu}_{L})=(1/4500,1/2000)$. Once again, we start at the red circle indicated by the arrow when cell 1 jumps down. At that time, $({m}_{2},{m}_{3})$ lies below ${\mathcal{C}}_{23}$ and above ${\mathcal{K}}_{1}^{3}$; that is, ${m}_{2}>{k}_{1}^{3}({m}_{3})$. Thus, we expect that cell 3 jumps up and then cell 1 jumps down again without any jumps by cell 2, and that is what is observed numerically along the trajectory from the initial red circle to the green circle to the next red circle (the 131 part of the solution). Now, this next red circle lies above ${\mathcal{C}}_{23}$. Thus, the next cell to jump should be cell 2, as is seen in the figure by following the trajectory forward again. It turns out that at that second red circle, ${k}_{2}^{2}({m}_{2})<{m}_{3}<{k}_{1}^{2}({m}_{2})$ (not shown in the figure), which implies that cell 3 follows cell 2 before cell 1 jumps down yet again (the 231 part of the solution following the initial 131 part). Finally, when cell 1 jumps down for the third time, the corresponding red circle again lies between ${\mathcal{K}}_{2}^{2}$ and ${\mathcal{K}}_{1}^{2}$, with ${k}_{2}^{2}({m}_{2})<{m}_{3}<{k}_{1}^{2}({m}_{2})$, as can be seen in Figure 6C. This relation implies that cell 2 and then cell 3 jump after cell 1, yielding the final 23 part of the solution before the trajectory returns to the initial red circle and the whole pattern repeats.
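The region tests used in these examples amount to a lookup in the $({m}_{2},{m}_{3})$ plane. A sketch of that logic, with the curves supplied as callables (a hypothetical interface; the real curves come from the formulas of the preceding sections, and the parametrizations $m_3 = \mathcal{C}_{23}(m_2)$, $m_2 = \mathcal{K}_n^3(m_3)$, $m_3 = \mathcal{K}_n^2(m_2)$ are assumptions):

```python
def predict_sequence(m2, m3, C23, K2, K3):
    """Predict the jump sequence of cells 2 and 3 between two successive
    activations of cell 1, from region membership in the (m2, m3) plane.
    The branch ordering follows the inequalities quoted in the examples;
    points exactly on a curve are assigned arbitrarily to one side."""
    if m3 < C23(m2):                  # below C23: cell 3 wins the race
        if m2 > K3[1](m3):
            return [3]                # one jump, then cell 1 returns
        elif m2 > K3[2](m3):
            return [3, 2]
        else:
            return [3, 2, 3]
    else:                             # above C23: cell 2 wins the race
        if m3 > K2[1](m2):
            return [2]
        elif m3 > K2[2](m2):
            return [2, 3]
        else:
            return [2, 3, 2]

# illustrative constant curves standing in for the real formulas:
C23 = lambda m2: 0.5
K3 = {1: lambda m3: 0.6, 2: lambda m3: 0.3}
K2 = {1: lambda m2: 0.6, 2: lambda m2: 0.3}
print(predict_sequence(0.7, 0.2, C23, K2, K3))   # [3]
print(predict_sequence(0.4, 0.2, C23, K2, K3))   # [3, 2]
print(predict_sequence(0.1, 0.55, C23, K2, K3))  # [2, 3]
```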
Finally, consider Figure 6D. Here, $({\lambda}_{L},{\mu}_{L})=(1/4500,1/1800)$. As with each of these examples, the curves ${\mathcal{C}}_{23}$, ${\mathcal{K}}_{1}^{2}$, and ${\mathcal{K}}_{2}^{2}$ (and similarly ${\mathcal{K}}_{1}^{3}$, ${\mathcal{K}}_{2}^{3}$ on the other side of ${\mathcal{C}}_{23}$) divide the phase plane into separate regions. These regions determine how many times cells 2 and 3 take turns firing between the firings of cell 1.
5 Discussion
We have presented a method for predicting the order in which model neurons, or populations of synchronized neurons, arranged in a mutually inhibitory ring will activate. We have derived and illustrated the method for a network of three cells, each with 2D intrinsic dynamics, motivated by models for rhythm-generating circuits in the mammalian respiratory brain stem [4–6]. Our approach involves the derivation of explicit formulas that can be used to partition reduced phase spaces into regions leading to different firing sequences. These ideas require a decomposition of dynamics into two distinct time scales. We have assumed an explicit fast-slow decomposition of the model equations for each neuron, into a fast voltage equation and a slow gating variable equation, with similar time scales present across all neurons, but we expect that the results would extend to other cases involving drift along slow manifolds alternating with fast jumps between manifolds yet lacking this explicit decomposition. A powerful aspect of the approach is that mapping from one activation to the next only requires evaluation of our formulas on a small number of curves in a particular reduced phase space. Moreover, if the images of these curves do not intersect the partition curves in the appropriate image space, then we can conclude that certain neurons will always become active in a fixed order, possibly after a short transient. Our formulas involve the time that it takes each neuron’s voltage to jump up to threshold upon release from inhibition.
With the additional assumption that, for a particular cell in the network, this time does not depend on the cell’s slow variable in the silent phase, we obtain an especially strong result. That is, from a starting configuration with the distinguished cell at the end of an active phase, we arrive at a collection of closed-form expressions that can be computed iteratively to determine, for all possible initial values of the other two cells’ slow variables, exactly how many times the other two cells will take turns activating before the distinguished cell activates again. We note that our additional assumption is reasonable for slow variables modulating currents that act predominantly to sustain or terminate activity. Finally, by observing the effects of parameters on the formulas that we obtain, we can determine how changes in parameters will alter model solutions, as we have demonstrated.
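The iterative computation can be expressed compactly: given the unified 2D map as a callable (here a toy contracting affine map stands in for the actual closed-form Π), iterating from any initial pair of slow variables traces out the states at successive activations of the distinguished cell:

```python
def iterate_map(Pi, m2, m3, n_steps=10):
    """Iterate a 2D map Pi: (m2, m3) -> (m2', m3') and return the orbit,
    including the initial point."""
    orbit = [(m2, m3)]
    for _ in range(n_steps):
        m2, m3 = Pi(m2, m3)
        orbit.append((m2, m3))
    return orbit

# toy contracting affine map standing in for the real Pi; its unique
# fixed point is (0.4, 0.2), which the orbit approaches:
Pi = lambda m2, m3: (0.5 * m2 + 0.2, 0.5 * m3 + 0.1)
orbit = iterate_map(Pi, 1.0, 1.0, n_steps=20)
```

A fixed point of the map corresponds to a periodic network solution; convergence of the orbit to it reflects the settling to a single attractor discussed below.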
Interestingly, in the examples that we show and others that we have explored, the trajectories of the model system that we have considered tend to settle to one particular attractor for each parameter set. This lack of bistability likely stems from the fact that when each neuron is active, the other two neurons in the system experience a strong, common inhibitory signal, albeit with different strengths, and the fact that the neurons’ intrinsic dynamics is low-dimensional. It is well known that common inhibition can be strongly synchronizing in neuronal models (e.g., [1, 2, 14–18]). The model that we consider has rapid onset of inhibition, which prevents synchronization, but the strong inhibition is nonetheless able to quickly compress trajectories associated with different initial conditions towards similar paths through phase space. Perhaps evolutionary pressures conspire to steer the dynamics of respiratory rhythm generators away from regimes supporting bistability, to maintain a stable respiratory rhythm that adjusts smoothly to changes in environmental or metabolic demands. Other recent work has also been directed towards reduced descriptions that yield complete information about possible attractors in networks that are similar to the one we consider but tend to support multistability [19–21]. For example, trajectories can be generated for Poincaré maps based on phase lags, also under the assumption that units activate via release from inhibition, with fixed points corresponding to periodic states [22]. While that approach can handle high-dimensional dynamics and gives a rather complete description of how phase relations between units evolve, it requires that all cells fire before any cell fires twice, and it is computationally intensive relative to our method, with additional computation needed for networks with strong coupling or significant asymmetries.
Previous work has presented analytical methods based on a fast-slow decomposition for solutions of model neuronal networks featuring two interacting populations, each synchronized, with different forms of intrinsic dynamics, or two or more synchronized clusters of neurons within one population (e.g., [1, 2, 23–25]). The methods in this article provide tools for dealing with multiple different forms of dynamics. They are particularly well suited for three-population networks with 2D intrinsic dynamics as presented in this article, and a set of general assumptions sufficient for the method to apply is given in the Appendix. In more complicated settings, the subspaces of slow variables that we consider would become higher-dimensional, such that while the same theory would apply, its application would be more cumbersome. Another direction for future consideration is the analysis of solutions in which suppressed neurons may escape from the silent phase, rather than being released from inhibition. Such solutions are qualitatively different from what we consider in this article, because the race to escape would take place within the slow dynamics. Similar issues have been considered previously in the context of the breakdown of synchronization and the development of clustered solutions within a single population [21, 25–27], and with simple slow dynamics, analysis of the race to escape among heterogeneous populations would be straightforward. Some networks may feature solutions involving some transitions by escape and some by release [6], however, and combining both effects, especially with adaptation that allows slow adjustment of inhibitory strength within phases [28, 29], would be more complicated and remains for future study. Additional study would also be required to weaken the other assumptions we have made in our analysis.
In particular, it might be possible to improve the quantitative agreement between our formulas and the actual slow variable values at jumps, and the actual jumping order for some parameter sets near transitions between solution types, by no longer treating sigmoidal activation and coupling functions as step functions; however, it is not clear how to derive explicit formulas without these approximations. Finally, it would be interesting to try to generalize our approach to noisy systems. Presumably, this generalization would involve replacing our boundary curves with distributions of jumping probabilities defined over regions of each slow variable space, leading to probabilistically defined jumping orders and mappings between spaces.
Appendix 1: Model details
Parameter values for Equations (1) and (2) and for these additional functions are listed in Tables 1 and 2. These values were chosen by starting from those in published studies [6, 8] and making changes to achieve interesting dynamics; also, we rescaled the capacitance C to 1 pF and correspondingly divided all conductances by its original value, 20. Note that the actual values are not important as long as they give a certain nullcline structure and fast-slow time scale separation, as these do (see the general assumptions in Appendix 2 below).
The parameter values listed in Table 1 for ${\tau}_{a,h}$, ${\tau}_{b,h}$ were used during times when cell 3 was in the active phase and in the subsequent races, while ${\tau}_{a,h}=5.75$, ${\tau}_{b,h}=-0.75$ were applied during times when cell 2 was active and in the subsequent races; similarly, ${\sigma}_{L}$ was changed to 1/575 when cell 2 was active. These values of ${\tau}_{a,h}$, ${\tau}_{b,h}$ were obtained from preliminary simulations using a slightly different form of ${\tau}_{h}(v)$ that had been used in earlier studies [6, 8, 30], which gave qualitatively identical behavior. This original ${\tau}_{h}(v)$ took different values depending on whether cell 3 or cell 2 was active because ${v}_{1}$ belonged to different intervals in the two cases. The form of ${\tau}_{h}(v)$ that we adopted, as given in Equation (32), was chosen to unify the form of the equations across all three neurons and to simplify numerical exploration of parameter space. We note that a change in ${\theta}_{{m}_{p}}$ from −50 to −52 changed the attractor from 13231323… to 132313213… as in Figure 6A, although this parameter set did not give the full range of patterns seen in the other panels of Figure 6.
For all panels in Figures 5 and 6, we used the parameter set in Table 2, except that we adjusted $({\tau}_{a,2},{\tau}_{b,2},{\tau}_{a,3},{\tau}_{b,3})$ for panels B, C, and D. Specifically, we set $({\tau}_{a,2},{\tau}_{b,2},{\tau}_{a,3},{\tau}_{b,3})$ to $(35,7,20,-3)$ in Figures 5B and 6B, $(35,10,20,0)$ in Figures 5C and 6C, and $(35,10,20,-2)$ in Figures 5D and 6D.
Appendix 2: General assumptions
System (1) has certain properties that make it suitable for the analysis that we perform. Given a network of three synaptically coupled elements, our analysis can proceed if the following assumptions on the network and its dynamics are satisfied.
(A1) Each unit in the network consists of a system of two ordinary differential equations (ODEs), one for the evolution of a fast variable with an $O(1)$ vector field, call it ${f}_{j}$, and one for a slow variable with an $O(\u03f5)$ vector field, ${s}_{j}$, for $j\in \{1,2,3\}$, where ϵ is a small, positive parameter.
(A2) Each unit is coupled to both of the other units in the network. The coupling from unit j to unit k appears as a Heaviside step function $H({f}_{j}-{\theta}_{I})$, or a sufficiently steep increasing sigmoidal function with half-activation ${\theta}_{I}$, in the ODE for ${f}_{k}$.
(A3) The fast vector field of each unit is a decreasing function of the strengths of the inputs that the unit receives. Thus, if ${f}_{j}$ decreases through ${\theta}_{I}$, such that the input from unit j to the other units turns off, then $d{f}_{k}/dt$ increases for $k\ne j$.
(A4) For each unit $j\in \{1,2,3\}$:
- (a) if one input to unit j is on (i.e., ${f}_{k}>{\theta}_{I}$ for some $k\ne j$), then:
- (i) there is a monotone branch ${N}_{j}^{sil}$ of the ${f}_{j}$-nullcline,
- (ii) ${N}_{j}^{sil}$ is defined on an interval ${I}_{j}^{sil}$ satisfying ${f}_{j}<{\theta}_{I}$ for all ${f}_{j}\in {I}_{j}^{sil}$,
- (iii) ${N}_{j}^{sil}$ intersects the ${s}_{j}$-nullcline in a unique point $({f}_{j}^{\ast},{s}_{j}^{\ast})$, and
- (iv) $(d{N}_{j}^{sil}({f}_{j})/df)(d{s}_{j}/dt)>0$ when $d{s}_{j}/dt$ is evaluated along ${N}_{j}^{sil}$ with ${f}_{j}<{f}_{j}^{\ast}$;
- (b) if no inputs to unit j are on, then:
- (i) there is a monotone branch ${N}_{j}^{act}$ of the ${f}_{j}$-nullcline,
- (ii) ${N}_{j}^{act}$ is defined on an interval ${I}_{j}^{act}$ such that ${\theta}_{I}\in {I}_{j}^{act}$,
- (iii) ${N}_{j}^{act}$ intersects the ${s}_{j}$-nullcline in a unique point $({f}_{j}^{\ast \ast},{s}_{j}^{\ast \ast})$ with ${f}_{j}^{\ast \ast}<{\theta}_{I}$, and
- (iv) $(d{N}_{j}^{act}({f}_{j})/df)(d{s}_{j}/dt)<0$ when $d{s}_{j}/dt$ is evaluated along ${N}_{j}^{act}$ with ${f}_{j}>{f}_{j}^{\ast \ast}$.
For system (1), each v plays the role of the fast variable f from (A1), while the other variable linked to v is the slow variable s. Since ${S}_{\mathrm{\infty}}(v)$ is a Heaviside step function, (A2) holds for system (1), and the fact that all coupling is inhibitory, with a reversal potential less than the range of values traversed by each v, means that (A3) is satisfied as well. Assumption (A4), although more complicated than the others, is in fact fairly standard for typical planar neuronal models. This assumption holds, for example, if a unit’s f-nullcline is the graph of a cubic function for all levels of input; if, in the presence of input, the nullcline’s left branch lies below ${\theta}_{I}$ and the unit has a critical point on this branch; and if, in the absence of input, the nullcline’s right branch crosses through ${\theta}_{I}$, with a critical point on this branch having an f-coordinate less than ${\theta}_{I}$. It is easy to choose parameters for the $({v}_{1},h)$ unit in system (1) that meet all of these criteria. The persistent sodium current renders the ${v}_{1}$-nullcline cubic, and we can choose ${\theta}_{I}$ and the parameters of ${h}_{\mathrm{\infty}}$ to achieve the other desired properties, as we do throughout this article. The other two units in the system have monotone v-nullclines because each can be expressed as a graph $(v,m(v))$, where $m(v)$ is the ratio of two linear functions of v. Certain choices of ${\theta}_{I}$ and parameters of ${m}_{\mathrm{\infty}}$, such as those made in this article, ensure that (A4) holds for these units as well. We note that the assumptions made about the relations of the f-nullclines to ${\theta}_{I}$ can be weakened as long as ${f}_{j}={\theta}_{I}$ is only achieved when the inputs to unit j are both off.
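For concreteness, a minimal caricature satisfying (A1)-(A3) can be simulated directly. The sketch below is not system (1): it substitutes FitzHugh-Nagumo-style cubic fast dynamics and a linear slow field, with all parameter values invented for illustration, but it has the required structure of fast O(1) and slow O(ϵ) equations with all-to-all Heaviside inhibitory coupling.

```python
import numpy as np

def rhs(state, eps=0.01, g=2.0, v_rev=-2.5, theta_I=0.0):
    """Vector field for a three-cell caricature of (A1)-(A3): fast
    variables v with cubic (FitzHugh-Nagumo-style) dynamics, slow
    variables s with O(eps) dynamics, and Heaviside inhibitory
    coupling. All functional forms and values are illustrative."""
    v, s = state[:3], state[3:]
    on = (v > theta_I).astype(float)      # which synaptic inputs are on
    inh = on.sum() - on                   # input to each cell from the others
    dv = v - v**3 / 3.0 - s - g * inh * (v - v_rev)  # (A3): input lowers dv/dt
    ds = eps * (1.2 * v + 0.9 - s)        # slow recovery variable
    return np.concatenate([dv, ds])

def simulate(state0, dt=0.01, steps=20000):
    """Forward-Euler integration; returns the final state and, at each
    step, the index of the cell with the largest fast variable."""
    state = np.asarray(state0, dtype=float)
    leader = []
    for _ in range(steps):
        state = state + dt * rhs(state)
        leader.append(int(np.argmax(state[:3])))
    return state, leader
```

Since `v_rev` lies below the range traversed by each fast variable, the coupling term is negative whenever an input is on, so each fast vector field decreases with input strength as (A3) requires; tracking `leader` records the activation order analogous to the firing sequences analyzed above.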
Declarations
Acknowledgements
This study was partially supported by NSF Awards DMS-1021701 (JR) and DMS-1022627 (DT).
References
- Rubin J, Terman D: Geometric singular perturbation analysis of neuronal dynamics. In Handbook of Dynamical Systems, Vol. 2: Towards Applications. Edited by: Fiedler B. Elsevier, Amsterdam; 2002.
- Ermentrout G, Terman D: Mathematical Foundations of Neuroscience. Springer, New York; 2010.
- Lindsey B, Rybak I, Smith J: Computational models and emergent properties of respiratory neural networks. Compr Physiol 2012, 2: 1619–1670.
- Rybak I, Abdala A, Markin S, Paton J, Smith J: Spatial organization and state-dependent mechanisms for respiratory rhythm and pattern generation. Prog Brain Res 2007, 165: 201–220.
- Smith J, Abdala A, Koizumi H, Rybak I, Paton J: Spatial and functional architecture of the mammalian brainstem respiratory network: a hierarchy of three oscillatory mechanisms. J Neurophysiol 2007, 98: 3370–3387. 10.1152/jn.00985.2007
- Rubin J, Shevtsova NA, Ermentrout GB, Smith JC, Rybak IA: Multiple rhythmic states in a model of the respiratory central pattern generator. J Neurophysiol 2009, 101: 2146–2165. 10.1152/jn.90958.2008
- Molkov Y, Abdala A, Bacak B, Smith J, Rybak I, Paton J: Late-expiratory activity: emergence and interactions with respiratory CPG. J Neurophysiol 2010, 104: 2713–2729. 10.1152/jn.00334.2010
- Rubin J, Bacak B, Molkov Y, Shevtsova N, Rybak I, Smith J: Interacting oscillations in neural control of breathing: modeling and quantitative analysis. J Comput Neurosci 2011, 30: 607–632. 10.1007/s10827-010-0281-0
- Ben-Tal A, Smith J: A model for control of breathing in mammals: coupling neural dynamics to peripheral gas exchange and transport. J Theor Biol 2008, 251: 480–497. 10.1016/j.jtbi.2007.12.018
- Butera R, Rinzel J, Smith J: Models of respiratory rhythm generation in the pre-Bötzinger complex. I. Bursting pacemaker neurons. J Neurophysiol 1999, 81: 382–397.
- DelNegro C, Johnson S, Butera R, Smith J: Models of respiratory rhythm generation in the pre-Bötzinger complex. III. Experimental tests of model predictions. J Neurophysiol 2001, 86: 59–74.
- Rybak I, Shevtsova N, Ptak K, McCrimmon D: Intrinsic bursting activity in the pre-Bötzinger complex: role of persistent sodium and potassium currents. Biol Cybern 2004, 90: 59–74. 10.1007/s00422-003-0447-1
- Feldman J, DelNegro C: Looking for inspiration: new perspectives on respiratory rhythm. Nat Rev Neurosci 2006, 7: 232–242. 10.1038/nrn1871
- Wang XJ, Rinzel J: Alternating and synchronous rhythms in reciprocally inhibitory model neurons. Neural Comput 1992, 4: 84–97. 10.1162/neco.1992.4.1.84
- Golomb D, Wang X, Rinzel J: Synchronization properties of spindle oscillations in a thalamic reticular nucleus model. J Neurophysiol 1994, 72: 1109–1126.
- White J, Chow C, Ritt J, Soto-Trevino C, Kopell N: Dynamics in heterogeneous, mutually inhibited neurons. J Comput Neurosci 1998, 5: 5–16. 10.1023/A:1008841325921
- Whittington M, Traub R, Kopell N, Ermentrout B, Buhl E: Inhibition-based rhythms: experimental and mathematical observations on network dynamics. Int J Psychophysiol 2000, 38: 315–336. 10.1016/S0167-8760(00)00173-2
- Kopell N, Börgers C, Pervouchine D, Malerba P, Tort A: Gamma and theta rhythms in biophysical models of hippocampal circuits. In Hippocampal Microcircuits. Springer Series in Computational Neuroscience 5. Edited by: Cutsuridis V, Graham B, Cobb S, Vida I. Springer, New York; 2010:423–457.
- Shilnikov A, Belykh I, Gordon R: Polyrhythmic synchronization in bursting networking motifs. Chaos 2008, 18(3): Article ID 037120.
- Matveev V, Bose A, Nadim F: Capturing the bursting dynamics of a two-cell inhibitory network using a one-dimensional map. J Comput Neurosci 2007, 23: 169–187. 10.1007/s10827-007-0026-x
- Chandrasekaran L, Matveev V, Bose A: Multistability of clustered states in a globally inhibitory network. Physica D 2009, 238: 253–263. 10.1016/j.physd.2008.10.008
- Wojcik J, Clewley R, Shilnikov A: Order parameter for bursting polyrhythms in multifunctional central pattern generators. Phys Rev E 2011, 83: Article ID 056209.
- Terman D, Lee E: Partial synchronization in a network of neural oscillators. SIAM J Appl Math 1997, 57: 252–293. 10.1137/S0036139994278925
- Terman D, Wang D: Global competition and local cooperation in a network of neural oscillators. Physica D 1995, 81: 148–176. 10.1016/0167-2789(94)00205-5
- Rubin J, Terman D: Analysis of clustered firing patterns in synaptically coupled networks of oscillators. J Math Biol 2000, 41: 513–545. 10.1007/s002850000065
- Terman D, Kopell N, Bose A: Dynamics of two mutually coupled inhibitory neurons. Physica D 1998, 117: 241–275. 10.1016/S0167-2789(97)00312-6
- Rubin J, Terman D: Synchronized bursts and loss of synchrony among heterogeneous conditional oscillators. SIAM J Appl Dyn Syst 2002, 1: 146–174. 10.1137/S111111110240323X
- Daun S, Rubin JE, Rybak IA: Control of oscillation periods and phase durations in half-center central pattern generators: a comparative mechanistic analysis. J Comput Neurosci 2009, 27: 3–36. 10.1007/s10827-008-0124-4
- Tabak J, O’Donovan M, Rinzel J: Differential control of active and silent phases in relaxation models of neuronal rhythms. J Comput Neurosci 2006, 21: 307–328. 10.1007/s10827-006-8862-7
- Butera R, Rinzel J, Smith J: Models of respiratory rhythm generation in the pre-Bötzinger complex. II. Populations of coupled pacemaker neurons. J Neurophysiol 1999, 81: 398–415.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.