 Review
Stochastic Hybrid Systems in Cellular Neuroscience
The Journal of Mathematical Neuroscience, volume 8, Article number: 12 (2018)
Abstract
We review recent work on the theory and applications of stochastic hybrid systems in cellular neuroscience. A stochastic hybrid system or piecewise deterministic Markov process involves the coupling between a piecewise deterministic differential equation and a time-homogeneous Markov chain on some discrete space. The latter typically represents some random switching process. We begin by summarizing the basic theory of stochastic hybrid systems, including various approximation schemes in the fast switching (weak noise) limit. In subsequent sections, we consider various applications of stochastic hybrid systems, including stochastic ion channels and membrane voltage fluctuations, stochastic gap junctions and diffusion in randomly switching environments, and intracellular transport in axons and dendrites. Finally, we describe recent work on phase reduction methods for stochastic hybrid limit cycle oscillators.
Introduction
There are a growing number of problems in cell biology that involve the coupling between a piecewise deterministic differential equation and a time-homogeneous Markov chain on some discrete space Γ, resulting in a stochastic hybrid system, also known as a piecewise deterministic Markov process (PDMP) [37]. Typically, the phase space of the dynamical system is taken to be $\mathbb {R}^{d}$ for finite d. One important example at the single-cell level is the occurrence of membrane voltage fluctuations in neurons due to the stochastic opening and closing of ion channels [2, 25, 30, 32, 54, 64, 80, 109, 114, 117]. Here the discrete states of the ion channels evolve according to a continuous-time Markov process with voltage-dependent transition rates and, in between discrete jumps in the ion channel states, the membrane voltage evolves according to a deterministic equation that depends on the current state of the ion channels. In the limit that the number of ion channels goes to infinity, we can apply the law of large numbers and recover classical Hodgkin–Huxley-type equations. However, finite-size effects can result in the noise-induced spontaneous firing of a neuron due to channel fluctuations. Another important example is a gene regulatory network, where the continuous variable is the concentration of a protein product, and the discrete variable represents the activation state of the gene [79, 83, 108, 110, 131]. Stochastic switching between active and inactive gene states can allow a gene regulatory network to switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level, then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics.
A third example occurs within the context of motor-driven intracellular transport [23]. One often finds that motor–cargo complexes randomly switch between different velocity states, such as anterograde versus retrograde motion, which can be modeled in terms of a special type of stochastic hybrid system known as a velocity jump process.
In many of the examples mentioned, we find that the transition rates between the discrete states $n\in\varGamma$ are much faster than the relaxation rates of the piecewise deterministic dynamics for $x\in \mathbb {R}^{d}$. Thus, there is a separation of timescales between the discrete and continuous processes, so that if t is the characteristic timescale of the relaxation dynamics, then εt is the characteristic timescale of the Markov chain for some small positive parameter ε. Assuming that the Markov chain is ergodic, in the limit $\varepsilon \rightarrow0$, we obtain a deterministic dynamical system in which one averages the piecewise dynamics with respect to the corresponding unique stationary measure. This then raises the important problem of characterizing how the law of the underlying stochastic process approaches this deterministic limit in the case of weak noise, $0<\varepsilon \ll1$.
The notion of a stochastic hybrid system can also be extended to piecewise deterministic partial differential equations (PDEs), that is, infinite-dimensional dynamical systems. One example concerns molecular diffusion in cellular and subcellular domains with randomly switching exterior or interior boundaries [12, 17–19, 92]. The latter are generated by the random opening and closing of gates (ion channels or gap junctions) within the plasma membrane. In this case, we have a diffusion equation with boundary conditions that depend on the current discrete states of the gates; the particle concentration thus evolves piecewise, in between the opening or closing of a gate. One way to analyze these stochastic hybrid PDEs is to discretize space using finite differences (method of lines) so that we have a standard PDMP on a finite-dimensional space. Diffusion in randomly switching environments also has applications to the branched network of tracheal tubes forming the passive respiration system in insects [18, 92] and to volume neurotransmission [90].
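As a sketch of the method-of-lines reduction just described, the code below builds the two finite-difference generators selected by the gate state for diffusion on an interval with a reflecting left end and a switching right-end gate, and checks that total particle mass is conserved only while the gate is closed. The grid size, diffusivity, and initial profile are illustrative assumptions, not values from the text.

```python
import numpy as np

# Method-of-lines sketch: discretizing the diffusion equation on [0,1] with a
# reflecting left end and a right-end gate that is either open (absorbing)
# or closed (reflecting) gives a PDMP on R^M, du/dt = L_n u, where the
# discrete gate state n selects the boundary condition.
M, D0 = 50, 1.0          # assumed grid size and diffusivity
h = 1.0 / (M - 1)

def laplacian(right_open):
    L = np.zeros((M, M))
    for i in range(1, M - 1):
        L[i, i - 1: i + 2] = [1.0, -2.0, 1.0]
    L[0, :2] = [-2.0, 2.0]                  # reflecting (Neumann) left end
    if right_open:
        L[M - 1, M - 2:] = [1.0, -2.0]      # absorbing: ghost value u = 0
    else:
        L[M - 1, M - 2:] = [2.0, -2.0]      # reflecting right end
    return D0 * L / h**2

L_open, L_closed = laplacian(True), laplacian(False)

# trapezoid weights: the total mass sum(w*u)*h is conserved only when the
# gate is closed, and decays while it is open
w = np.ones(M); w[0] = w[-1] = 0.5
xs = np.linspace(0.0, 1.0, M)
u = np.exp(-((xs - 0.7) / 0.15)**2)        # a bump of particles (assumed)
closed_rate = w @ (L_closed @ u) * h       # rate of change of mass, gate shut
open_rate = w @ (L_open @ u) * h           # rate of change of mass, gate open
```

Between gate-switching events the concentration vector evolves under the current matrix, exactly as in the finite-dimensional PDMP construction of Sect. 2.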
This tutorial review develops the theory and applications of stochastic hybrid systems within the context of cellular neuroscience. A complementary review that mainly considers gene regulatory networks can be found elsewhere [14]. In Sect. 2, we summarize the basic theory of stochastic hybrid systems. In subsequent sections, we consider various applications of stochastic hybrid systems, including stochastic ion channels and membrane voltage fluctuations (Sect. 3), stochastic gap junctions and diffusion in randomly switching environments (Sect. 4), and intracellular transport in axons and dendrites (Sect. 5). Finally, in Sect. 6, we present recent work on phase reduction methods for stochastic hybrid limit cycle oscillators.
Stochastic Hybrid Systems
In this section, we review the basic theory of stochastic hybrid systems. We start with the notion of a piecewise deterministic differential equation, which can be used to generate sample paths of the stochastic process. We then describe how the probability distribution of sample paths can be determined by solving a differential Chapman–Kolmogorov (CK) equation (Sect. 2.1). In many applications, including the stochastic ion channel models of Sect. 3, there is a separation of timescales between a fast $O(1/\varepsilon )$ switching process and slow $O(1)$ continuous dynamics. In the fast switching limit $\varepsilon \rightarrow0$, we obtain a deterministic dynamical system. In Sect. 2.2, we use an asymptotic expansion in ε to show how the CK equation can be approximated by a Fokker–Planck (FP) equation with an $O(\varepsilon )$ diffusion term. Finally, in Sect. 2.3, we consider methods for analyzing escape problems in stochastic hybrid systems. We assume that the deterministic system is bistable so that, in the absence of noise, the long-time stable state of the system depends on the initial conditions. On the other hand, for finite switching rates, the resulting fluctuations can induce transitions between the metastable states. In the case of weak noise (fast switching $0 <\varepsilon \ll1$), transitions are rare events involving large fluctuations that are in the tails of the underlying probability density function. This means that estimates of mean first passage times (MFPTs) and other statistical quantities can develop exponentially large errors under the diffusion approximation. We describe a more accurate method for calculating MFPTs based on a WKB analysis.
We begin with the definition of a stochastic hybrid system and, in particular, a piecewise deterministic Markov process (PDMP) [37, 53, 84]. For illustration, consider a system whose states are described by a pair $(x,n) \in\varSigma\times\{0,\ldots,N_{0}-1\}$, where x is a continuous variable in a connected bounded domain $\varSigma\subset \mathbb {R}^{d}$ with regular boundary ∂Σ, and n is a discrete stochastic variable taking values in the finite set $\varGamma\equiv\{ 0,\ldots,N_{0}-1\}$. (It is possible to have a set of discrete variables, although we can always relabel the internal states so that they are effectively indexed by a single integer. We can also consider generalizations of the continuous process, in which the ODE (2.1) is replaced by a stochastic differential equation (SDE) or even a partial differential equation (PDE). To allow for such possibilities, we will refer to all of these processes as examples of a stochastic hybrid system.) When the internal state is n, the system evolves according to the ordinary differential equation (ODE)
where the vector field $F_{n}: \mathbb {R}^{d}\to \mathbb {R}^{d}$ is a continuous, locally Lipschitz function. That is, given a compact subset $\mathscr {K}$ of Σ, there exists a positive constant $K_{n}$ such that
We assume that the dynamics of x is confined to the domain Σ so that existence and uniqueness of a trajectory holds for each n. For fixed x, the discrete stochastic variable evolves according to a homogeneous continuous-time Markov chain with transition matrix $\mathbf{W}(x)$ and corresponding generator $\mathbf{A}(x)$, which are related according to
The matrix $\mathbf{A}(x)$ is also taken to be Lipschitz. We make the further assumption that the chain is irreducible for all $x\in\varSigma$, that is, for fixed x, there is a nonzero probability of transitioning, possibly in more than one step, from any state to any other state of the Markov chain. This implies the existence of a unique invariant probability distribution on Γ for fixed $x\in\varSigma $, denoted by $\rho(x)$, such that
Let us decompose the transition matrix of the Markov chain as
with $\sum_{n\neq m}P_{nm}(x)=1$ for all x. Hence $\lambda_{m}(x)$ determines the jump times from the state m, whereas $P_{nm}(x)$ determines the probability distribution that when it jumps, the new state is n for $n\neq m$. The hybrid evolution of the system with respect to $x(t)$ and $n(t)$ can then be described as follows; see Fig. 1. Suppose the system starts at time zero in the state $(x_{0}, n_{0})$. Call $x_{0}(t)$ the solution of (2.1) with $n=n_{0}$ such that $x_{0}(0)=x_{0}$. Let $t_{1}$ be the random variable (stopping time) such that
Then in the random time interval $s\in[0, t_{1})$ the state of the system is $(x_{0}(s),n_{0})$. Now draw a value of $t_{1}$ from $\mathbb {P}(t_{1} < t)$, choose an internal state $n_{1} \in\varGamma$ with probability $P_{n_{1}n_{0}}(x_{0}(t_{1}))$, and call $x_{1}(t)$ the solution of the following Cauchy problem on $[t_{1},\infty)$:
Iterating this procedure, we can construct a sequence of increasing jumping times $(t_{k})_{k \geq0}$ (setting $t_{0}=0$) and a corresponding sequence of internal states $(n_{k})_{k \geq0}$. The evolution $(x(t), n(t))$ is then defined as
Note that the path $x(t)$ is continuous and piecewise $C^{1}$. To have a welldefined dynamics on $[0,T]$, it is necessary that almost surely the system makes a finite number of jumps in the time interval $[0,T]$. This is guaranteed in our case. This formulation is the basis of a simulation algorithm for PDMPs [2, 150].
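The iterative construction above translates directly into a simulation algorithm. Below is a minimal Python sketch for a hypothetical two-state model; the vector fields $F_n$, the jump rates $\lambda_n$, and the time step are illustrative assumptions, not quantities from the text. A jump fires when the integrated hazard exceeds an exponential threshold; with only two states, the jump distribution $P_{nm}$ is trivial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state example (all functional forms are assumptions):
# state n=0 drifts toward x=-0.5, state n=1 drifts toward x=1.
F = [lambda x: -x - 0.5,          # F_0(x)
     lambda x: 1.0 - x]           # F_1(x)
lam = [lambda x: 1.0 + x**2,      # jump rate out of state 0
       lambda x: 2.0]             # jump rate out of state 1

def simulate_pdmp(x0, n0, T, dt=1e-3):
    """Generate one sample path (x(t), n(t)): integrate dx/dt = F_n(x) and
    fire a jump at the stopping time when the integrated hazard exceeds an
    Exp(1) threshold, then redraw the discrete state."""
    x, n, t = x0, n0, 0.0
    xs, ns, ts = [x], [n], [t]
    threshold = rng.exponential()   # -log(U), U ~ Uniform(0,1)
    hazard = 0.0
    while t < T:
        x += F[n](x) * dt           # piecewise deterministic step (Euler)
        hazard += lam[n](x) * dt    # accumulate integrated jump hazard
        if hazard >= threshold:     # stopping time t_k reached: jump
            n = 1 - n               # two states, so the new state is forced
            hazard, threshold = 0.0, rng.exponential()
        t += dt
        xs.append(x); ns.append(n); ts.append(t)
    return np.array(ts), np.array(xs), np.array(ns)

ts, xs, ns = simulate_pdmp(x0=0.0, n0=0, T=20.0)
```

The path $x(t)$ is continuous and piecewise smooth, jumping only in its derivative at the switching times, in line with the construction above.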
Chapman–Kolmogorov Equation
Let $X(t)$ and $N(t)$ denote the stochastic continuous and discrete variables, respectively, at time t, $t>0$, given the initial conditions $X(0)=x_{0}$, $N(0)=n_{0}$. Introduce the probability density $p_{n}(x,t|x_{0},n_{0},0)$ with
It follows that p evolves according to the forward differential Chapman–Kolmogorov (CK) equation [10, 61]
For notational convenience, we have dropped the explicit dependence on the initial conditions. The first term on the right-hand side represents the probability flow associated with the piecewise deterministic dynamics for a given n, whereas the second term represents jumps in the discrete state n. Note that we have rescaled the matrix A by introducing the dimensionless parameter $\varepsilon>0$. This is motivated by the observation that one often finds a separation of timescales between the relaxation time for the dynamics of the continuous variable x and the rate of switching between the different discrete states n. The fast switching limit then corresponds to the case $\varepsilon \rightarrow0$. Let us now define the averaged vector field $\overline {F}: \mathbb {R}^{d} \to \mathbb {R}^{d}$ by
Intuitively speaking, we would expect the stochastic hybrid system (2.1) to reduce to the deterministic dynamical system
in the fast switching limit $\varepsilon\rightarrow0$. That is, for sufficiently small ε, the Markov chain undergoes many jumps over a small time interval Δt during which $\varDelta x\approx0$, and thus the relative frequency of each discrete state n is approximately $p_{n}^{*}(x)$. This can be made precise in terms of a law of large numbers for stochastic hybrid systems [51, 84].
It remains to specify boundary conditions for the CK equation. For illustration, suppose that $d=1$ (one-dimensional continuous dynamics) with $\varSigma=[0,L]$ and assume that there exists an integer m, $1\leq m \leq N_{0}-1$, such that $F_{n}(0)=0$ for $0\leq n \leq m-1$ and $F_{n}(L)=0$ for $m\leq n\leq N_{0}-1$. No-flux boundary conditions at the ends $x=0,L$ take the form $J(0,t)=J(L,t)=0$ with
It follows that $p_{n}(0,t)=0$ for $m\leq n\leq N_{0}-1$ and $p_{n}(L,t)=0$ for $0\leq n\leq m-1$. In the analysis of metastability (Sect. 2.3), it will be necessary to impose an absorbing boundary condition at some interior point $x_{*}$ of the domain Σ, that is,
In contrast to the no-flux conditions, there are nonzero fluxes through $x_{*}$.
In general, it is difficult to obtain an analytical steady-state solution of (2.6), assuming that it exists, unless $d=1$ and $N_{0}=2$ [46, 79]. The one-dimensional CK equation takes the form
In the two-state case ($N_{0}=2$),
for a pair of transition rates $\alpha(x)$, $\beta(x)$, so that the steady-state version of (2.10) reduces to the pair of equations
Adding the pair of equations yields
that is,
for some constant c. The reflecting boundary conditions imply that $c=0$. Since $F_{n}(x)$ is nonzero for all $x\in\varSigma$, we can express $p_{1}(x)$ in terms of $p_{0}(x)$:
Substituting into equation (2.11) gives
This yields the solutions
where $x_{*}\in\varSigma$ is arbitrary, assuming that the normalization factor Z exists.
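The two-state steady-state construction can be checked numerically. The sketch below uses an assumed rate convention (0 → 1 at rate β(x), 1 → 0 at rate α(x), with ε = 1) and assumed coefficient functions satisfying $F_0 < 0 < F_1$; the zero-flux relation $p_1 = -F_0 p_0/F_1$ then reduces the steady-state equations to a single first-order ODE for $g = F_0 p_0$, whose quadrature solution is substituted back into the (unnormalized) steady-state CK equation.

```python
import numpy as np

# Hypothetical two-state coefficients on Sigma = [0,1] (assumed forms, with
# F_0 < 0 < F_1 so that no-flux boundary conditions can hold, and with the
# rate convention 0 -> 1 at rate beta(x), 1 -> 0 at rate alpha(x)).
x = np.linspace(0.0, 1.0, 20001)
F0, F1 = -(x + 0.2), 1.2 - x
alpha, beta = 1.0 + x, 2.0 * np.ones_like(x)

# Zero total flux J = F0*p0 + F1*p1 = 0 gives p1 = -F0*p0/F1, and the
# remaining equation reduces to g' = -(beta/F0 + alpha/F1)*g for g = F0*p0.
phi = beta / F0 + alpha / F1
I = np.concatenate(([0.0],
        np.cumsum(0.5 * (phi[1:] + phi[:-1]) * np.diff(x))))
g = -np.exp(-I)                 # g <= 0 so that p0 = g/F0 >= 0
p0, p1 = g / F0, -g / F1        # unnormalized steady-state densities

# residual of the steady-state equation d/dx(F0*p0) = -beta*p0 + alpha*p1
lhs = np.gradient(F0 * p0, x)
rhs = -beta * p0 + alpha * p1
```

The residual is limited only by the accuracy of the numerical quadrature and differentiation, confirming the quadrature form of the solution under these assumptions.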
Quasi-Steady-State (QSS) Diffusion Approximation
For small but nonzero ε, we can use perturbation theory to derive lowest-order corrections to the deterministic mean-field equation, which leads to a Langevin equation with noise amplitude $O(\sqrt{\varepsilon})$. More specifically, perturbations of the mean-field equation (2.8) can be analyzed using a quasi-steady-state (QSS) diffusion or adiabatic approximation, in which the CK equation (2.6) is approximated by a Fokker–Planck (FP) equation for the total density $C(x,t)=\sum_{n} p_{n}(x,t)$. The QSS approximation was first developed from a probabilistic perspective by Papanicolaou [119]. It has subsequently been applied to a wide range of problems in biology, including models of intracellular transport in axons [57, 123] and dendrites [111–113] and bacterial chemotaxis [73, 74, 116]. There have also been more recent probabilistic treatments of the adiabatic limit, which have been applied to various stochastic neuron models [118]. Finally, note that it is also possible to obtain a diffusion limit by taking the number of discrete states $N_{0}$ to be large [30, 117].
The basic steps of the QSS reduction are as follows:
(a) Decompose the probability density as
where $\sum_{n} p_{n}(x,t) =C(x,t)$ is the marginal probability density for the continuous variables x, and $\sum_{n} w_{n}(x,t)=0$. Substituting into equation (2.6) yields
Summing both sides with respect to n then gives
where $\overline{F}(x)$ is the mean vector field of equation (2.7).
(b) Using the equation for C and the fact that $\mathbf{A}(x)\rho(x) = 0$, we have
(c) Introduce the asymptotic expansion
and collect $O(1)$ terms:
The Fredholm alternative theorem (see the end of Sect. 2.3) shows that this has a solution, which is unique on imposing the condition $\sum_{n} w^{(0)}_{n}(x,t)=0$:
where $\mathbf{A}^{\dagger}$ is the pseudoinverse of the generator A. We typically have to determine the pseudoinverse of A numerically.
(d) Combining equations (2.19) and (2.17) shows that C evolves according to the Itô Fokker–Planck (FP) equation
where the $O(\varepsilon )$ correction to the drift, ${\mathscr {V}}(x)$, and the diffusion matrix ${D}(x)$ are given by
and
Since $\sum_{m}A_{mn}^{\dagger}=0$, we can rewrite the diffusion matrix as
In the one-dimensional case, the CK equation (2.10) reduces to the one-dimensional Itô FP equation
with the diffusion coefficient $D(x)$ given by
where $Z_{n}(x)$ is the unique solution to
For $N_{0}>2$, we typically have to solve equation (2.24) numerically in order to find the pseudoinverse of A. However, in the special case of a two-state discrete process ($n=0,1$), we have the explicit solution
At a fixed point $x_{*}$ of the deterministic equation $\dot {x}=\overline{F}(x)$, we have $\overline{F}(x_{*})=0$ and $\beta (x_{*})F_{0}(x_{*})=\alpha(x_{*})F_{1}(x_{*})$. This gives the reduced expression
One subtle point is the nature of boundary conditions under the QSS reduction, since the FP equation is a second-order parabolic PDE, whereas the original CK equation is an $N_{0}$th-order hyperbolic PDE. It follows that, for $N_{0}>2$, there is a mismatch in the number of boundary conditions between the CK and FP equations. This implies that the QSS reduction may break down in a small neighborhood of the boundary, as reflected by the existence of boundary layers [152]. One way to eliminate the boundary layers is to ensure that the boundary conditions of the CK equation are compatible with the QSS reduction.
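The pseudoinverse computation in steps (c)–(d) can be carried out numerically for any $N_0$. The sketch below solves the equivalent Poisson equation $\mathbf{A}\mathbf{z} = -(F-\overline{F})\circ\rho$ subject to $\sum_n z_n = 0$ and evaluates $D = \sum_n (F_n-\overline{F}) z_n$; the generator, the rates, and the closed-form two-state expression used as a cross-check are assumptions (the latter is a standard two-state result, quoted here for comparison rather than taken from the text).

```python
import numpy as np

# Evaluate the QSS diffusion coefficient D at a fixed x by solving the
# Poisson equation A z = -(F - Fbar)*rho with sum_n z_n = 0, then
# D = sum_n (F_n - Fbar) z_n.  A acts on column probability vectors,
# so its columns sum to zero.
def qss_diffusion(A, F):
    N = len(F)
    # stationary distribution: null vector of A, normalized to sum to one
    w, V = np.linalg.eig(A)
    rho = np.real(V[:, np.argmin(np.abs(w))])
    rho /= rho.sum()
    Fbar = F @ rho
    dF = F - Fbar
    # enforce sum z = 0 by appending the constraint as an extra row
    M = np.vstack([A, np.ones(N)])
    b = np.concatenate([-dF * rho, [0.0]])
    z, *_ = np.linalg.lstsq(M, b, rcond=None)
    return dF @ z

# two-state check: 0 -> 1 at rate beta, 1 -> 0 at rate alpha (assumed
# convention and assumed example values)
alpha, beta = 1.5, 0.7
F = np.array([-1.0, 2.0])
A = np.array([[-beta, alpha],
              [ beta, -alpha]])
D = qss_diffusion(A, F)
# closed-form two-state expression (standard result, stated as an assumption)
D_exact = alpha * beta * (F[0] - F[1])**2 / (alpha + beta)**3
```

The same routine applies unchanged to $N_0 > 2$, where no closed form is generally available and the numerical solve replaces an explicit pseudoinverse.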
Metastability in Stochastic Hybrid Systems
Several examples of stochastic hybrid systems are known to exhibit multistability in the fast-switching limit $\varepsilon \rightarrow0$ [14]. That is, the deterministic equation (2.8) supports more than one stable equilibrium. In the absence of noise, the particular state of the system is determined by the initial conditions. On the other hand, when noise is included by taking into account the stochastic switching, fluctuations can induce transitions between the metastable states. If the noise is weak (fast switching $0 <\varepsilon \ll1$), then transitions are rare events involving large fluctuations that are in the tails of the underlying probability density function. This means that estimates of mean transition times and other statistical quantities can be sensitive to any approximations, including the Gaussian approximation based on the QSS reduction of Sect. 2.2, and can sometimes lead to exponentially large errors.
The analysis of metastability has a long history [70], particularly within the context of SDEs with weak noise. The underlying idea is that the mean rate of transition from a metastable state in the weak noise limit can be identified with the principal eigenvalue of the generator of the underlying stochastic process, which is a second-order differential operator in the case of a Fokker–Planck equation. Calculating the eigenvalue typically involves obtaining a Wentzel–Kramers–Brillouin (WKB) approximation of a quasistationary solution and then using singular perturbation theory to match the solution to an absorbing boundary condition [69, 97, 99, 103, 130]. The latter is defined on the boundary that marks the region beyond which the system rapidly relaxes to another metastable state, becomes extinct, or escapes to infinity. In one-dimensional systems ($d=1$), this boundary is simply an unstable fixed point, whereas in higher dimensions ($d>1$), it is generically a $(d-1)$-dimensional submanifold. In the weak noise limit, the most likely paths of escape through an absorbing boundary are rare events, occurring in the tails of the associated functional probability distribution. From a mathematical perspective, the rigorous analysis of the tails of a distribution is known as large deviation theory [39, 53, 55, 138], which provides a rigorous probabilistic framework for interpreting the WKB solution in terms of optimal fluctuational paths. The analysis of metastability in chemical master equations has been developed along analogous lines to SDEs, combining WKB methods and large deviation principles [43, 45, 49, 53, 69, 75, 85, 124] with path-integral or operator methods [40, 41, 121, 128, 143]. The study of metastability in stochastic hybrid systems is more recent, and much of the theory has been developed in a series of papers on stochastic ion channels [25, 109, 114, 115], gene networks [108, 110], and stochastic neural networks [24].
Again there is a strong connection between WKB methods, large deviation principles [15, 51, 84], and formal path-integral methods [11, 26], although the connection is now more subtle.
For illustration, we will focus on a one-dimensional stochastic hybrid system and develop the theory using WKB methods. First, suppose that the deterministic equation (2.8) is written as
with the potential $U(x)$ having two minima (stable equilibria) separated by a single maximum (unstable equilibrium), as illustrated in Fig. 2. To calculate the mean escape rate from the metastable state $x_{-}$, say, the CK equation (2.6) is supplemented by an absorbing boundary condition at $x=x_{0}$. The initial condition is taken to be $p_{n}(x,0|y,0)=\delta(x-y)\rho _{n}(y)$, where y is in a neighborhood of $x_{-}$, and $\rho_{n}(y)$ is the stationary distribution of the switching process. Let $T(y)$ denote the (stochastic) first passage time at which the system first reaches $x_{0}$, given that it started at y. The distribution of first passage times $f(t,y)$ is related to the survival probability that the system has not yet reached $x_{0}$:
That is, $\operatorname{Prob}\{T>t \mid X(0)=y\} =S(y,t)$, and the first passage time density is $f(y,t)=-\partial S/\partial t$. Substituting for $\partial p_{n}/\partial t$ using the CK equation (2.10) shows that
with $\varGamma=\{0,1\}$ for the two-state model. We have used $\sum_{n \in\varGamma}{A}_{nm}(x)=0$ and the asymptotic limit $F_{n}(x)p_{n}(x,t|y,0)\rightarrow0$ as $x\rightarrow\pm\infty$. The mean first passage time (MFPT) $\tau(y)$ is then given by
It turns out that for small ε, the MFPT has an Arrhenius-like form analogous to SDEs [69]:
where $\varPhi(x)$ is known as the quasipotential or stochastic potential, and Γ is a prefactor. One important observation is that the escape time is exponentially sensitive to the precise form of Φ. If we were first to carry out the QSS reduction of Sect. 2.2 and then use a standard analysis of the one-dimensional FP equation in order to estimate the MFPT [61], then we would find that $\varGamma=1$ and, to $O(1)$,
with $D(x)$ given by equation (2.25). In particular, if $D(x)$ is independent of x, then $\varPhi(x)=U(x)/D$ with $U(x)$ the deterministic potential. The escape time then depends on the barrier height ΔE shown in Fig. 2. As we have already commented, the Gaussian approximation may not accurately capture the statistics of rare events that dominate noise-induced escape. This is reflected by the observation that $\varPhi_{\mathrm{QSS}}(x)$ can differ significantly from the true quasipotential. A much better estimate can be obtained using the WKB method.
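The constant-D statement above can be checked by quadrature. The sketch below assumes the one-dimensional QSS form $\varPhi_{\mathrm{QSS}}'(x)=-\overline{F}(x)/D(x)$; the double-well potential and the value of D are illustrative choices, not values from the text.

```python
import numpy as np

# Numerical check: with Fbar = -U'(x) and constant D, the QSS quasipotential
# Phi_QSS(x) = -int^x Fbar(y)/D dy should reduce to U(x)/D.
x = np.linspace(-1.5, 1.5, 4001)
U = x**4 / 4 - x**2 / 2              # minima at x = -1, +1; maximum at x = 0
Fbar = -(x**3 - x)                   # Fbar(x) = -U'(x)
D = 0.3                              # constant effective diffusivity (assumed)

integrand = -Fbar / D
Phi = np.concatenate(([0.0],
        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))
i_ref = np.argmin(np.abs(x + 1.0))   # reference the quasipotential at x_-
Phi -= Phi[i_ref]
# barrier height entering the Arrhenius factor exp(Delta Phi / eps):
dPhi = Phi[np.argmin(np.abs(x))]     # Phi(0) - Phi(x_-)
```

Here the barrier $\Delta\varPhi = [U(0)-U(-1)]/D = 0.25/D$, so the MFPT estimate scales as $\mathrm{e}^{0.25/(D\varepsilon)}$, which makes the exponential sensitivity to errors in Φ explicit.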
To apply the WKB method, we can exploit the fact that, in the weak noise limit ($\varepsilon \ll1$), the flux through the absorbing boundary is exponentially small. This has major implications for the spectral decomposition of the solution to the CK equation with an absorbing boundary at $x=x_{0}$. More specifically, consider the eigenfunction expansion
where $(\lambda_{\varepsilon }^{(r)},\phi_{\varepsilon }^{(r)}(x))$ is an eigenpair of the matrix-valued linear operator
appearing on the right-hand side of (2.6), that is,
together with the absorbing boundary conditions ${\phi}^{(r)}_{\varepsilon }(x_{0},n)=0 $ for all n. We also assume that the eigenvalues $\lambda_{\varepsilon }^{(r)}$ all have positive real parts and that the smallest eigenvalue $\lambda_{\varepsilon}^{(0)}$ is real and simple, so that we can introduce the ordering $0<\lambda _{\varepsilon }^{(0)}<\operatorname{Re}[\lambda_{\varepsilon }^{(1)}]\leq\operatorname {Re}[\lambda_{\varepsilon }^{(2)}]\leq\cdots$. The exponentially slow rate of escape through $x_{0}$ in the weak-noise limit means that $\lambda_{\varepsilon }^{(0)}$ is exponentially small, $\lambda _{\varepsilon }^{(0)}\sim\mathrm{e}^{-C/\varepsilon }$, whereas $\operatorname {Re}[\lambda_{\varepsilon }^{(r)}]$ is only weakly dependent on ε for $r\geq1$. Under these assumptions, we have the quasistationary approximation for large t:
Substituting such an approximation into equation (2.29) and suppressing the initial conditions gives
and thus
Since $\lambda_{\varepsilon }^{(0)}$ is exponentially small, we can take the quasistationary solution ${ \phi}_{\varepsilon }^{(0)}$ to satisfy the time-independent CK equation. We then seek a WKB approximation of the quasistationary solution by making the ansatz
where $\varPhi(x)$ is the WKB quasipotential. Substituting into the time-independent version of equation (2.10) yields
where $\varPhi'=d\varPhi/dx$. Introducing the asymptotic expansions $\varPhi\sim\varPhi_{0}+\varepsilon \varPhi_{1}$ and $Z\sim Z^{(0)}+\varepsilon Z^{(1)}$, the leading order equation is
Positivity of the quasistationary density $\phi_{\varepsilon}^{(0)}$ requires positivity of the corresponding solution $\mathbf{Z}^{(0)}$. One positive solution is the trivial solution $\mathbf{Z}^{(0)}(x)=\rho(x)$ for all $x\in\varSigma$, where ρ is the unique right eigenvector of A, for which $\varPhi_{0}'=0$. Establishing the existence of a nontrivial positive solution requires more work and is related to the fact that the connection of the WKB solution to optimal fluctuational paths and large deviation principles is less direct in the case of stochastic hybrid systems.
It turns out that we have to consider the eigenvalue problem [11, 15, 25, 51, 84]
Assuming that $\mathbf{A}(x)$ is irreducible for all x, we can use the Perron–Frobenius theorem (see the end of Sect. 2.3) to show that, for fixed $(x,q)$, there exists a unique eigenvalue $\varLambda_{0}(x, q)$ with a positive eigenvector $R_{n}^{(0)}(x, q)$. The optimal fluctuational paths are obtained by identifying the Perron eigenvalue $\varLambda_{0}(x, q)$ as a Hamiltonian and finding zero energy solutions to Hamilton’s equations
This can be established using large deviation theory or path integrals. In the latter case, we can show that a path-integral representation of the density $p(x,\tau)$ is
for some appropriate measure ${\mathscr {D}}[q,x]$. Applying steepest descents to the path integral then yields a variational principle in which optimal paths minimize the action
Comparison of equation (2.38) with equation (2.39) then shows that there exists a nontrivial positive solution of equation (2.38) given by $Z_{n}^{(0)}(x)=R_{n}^{(0)}(x, q)$ with $q=\varPhi _{0}'(x)$, where $\varPhi_{0}$ satisfies the corresponding Hamilton–Jacobi equation
Note that since $\varPhi'_{0}(x)$ vanishes at $x=x_{0}$, it follows that $\mathbf{Z}^{(0)}(x_{0})=\rho(x_{0}) $, and similarly for the other fixed points. Deterministic mean field equations and optimal paths of escape from a metastable state both correspond to zero energy solutions. Along zeroenergy paths,
Calculation of Principal Eigenvalue
To calculate the principal eigenvalue, it is necessary to determine the first-order correction $\varPhi_{1}$ to the quasipotential of the WKB solution (2.36). Proceeding to the next order in the asymptotic expansion of equation (2.37), we have
For fixed x and WKB potential $\varPhi_{0}$, the matrix operator
on the left-hand side of this equation has a one-dimensional null space spanned by the positive WKB solution $\mathbf{Z}^{(0)}(x)$. The Fredholm alternative theorem (see the end of Sect. 2.3) then implies that the right-hand side of (2.42) is orthogonal to the left null vector S of Ā. That is, we have the solvability condition
with S satisfying
Given $\mathbf{Z}^{(0)},\mathbf{S}$, and $\varPhi_{0}$, the solvability condition yields the following equation for $\varPhi_{1}$:
Combining the various results and defining
gives, to leading order in ε,
where we choose $\sum_{n} Z_{n}^{(0)}(x)=1$ for all x, and ${\mathscr {N}}$ is the normalization factor,
The latter can be approximated using Laplace’s method to give
The final step is to use singular perturbation theory to match the outer quasistationary solution to the absorbing boundary condition at $x_{0}$. The analysis is quite involved [80, 108], so here we simply quote the result for the 1D model:
with $D(x)$ the effective diffusion coefficient (2.23) obtained using a QSS reduction.
Two-State Model
We now illustrate the above theory for the simple two-state model of equation (2.10). The specific version of the linear equation (2.39) can be written as the two-dimensional system
The corresponding characteristic equation is
It follows that the Perron eigenvalue is given by
where
and
A little algebra shows that
so that, as expected, $\varLambda_{0}$ is real. The quasipotential $\varPhi _{0}(x)$ satisfies the HJ equation $\varLambda_{0}(x,q)=0$ with $q=\varPhi _{0}'(x)$, which reduces to the conditions
This has two solutions: the classical deterministic solution $q=0$ with $\varPhi_{0}'(x)=0$ and a nontrivial solution whose quasipotential satisfies
(Note that $F_{n}(x)$ does not vanish anywhere and $F_{0}(x)F_{1}(x)<0$.) The quasipotential can be determined by numerically integrating with respect to x. The resulting quasipotential differs significantly from the one obtained by carrying out a QSS diffusion approximation of the stochastic hybrid system along the lines outlined in Sect. 2.2.
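For a concrete two-state example, the Hamilton–Jacobi construction can be verified numerically. The sketch below uses an assumed rate convention (0 → 1 at rate β(x), 1 → 0 at rate α(x), so the generator is $\mathbf{A}=\begin{pmatrix}-\beta & \alpha\\ \beta & -\alpha\end{pmatrix}$) and assumed coefficient functions; the closed-form diffusion coefficient used for the QSS comparison is a standard two-state result, stated here as an assumption rather than quoted from the text.

```python
import numpy as np

# Two-state check of the zero-energy condition: the Perron eigenvalue of
# q*diag(F) + A follows from the quadratic characteristic equation, and its
# nontrivial zero gives the WKB slope Phi_0'(x).
x = np.linspace(0.0, 1.0, 201)
F0, F1 = -(x + 0.2), 1.2 - x              # assumed, with F0 < 0 < F1
alpha, beta = 1.0 + x, 2.0 * np.ones_like(x)

def Lambda0(q):
    """Larger root of lambda^2 - T*lambda + Delta = 0 (Perron eigenvalue)."""
    T = q * (F0 + F1) - (alpha + beta)
    Delta = (q * F0 - beta) * (q * F1 - alpha) - alpha * beta
    return 0.5 * (T + np.sqrt(T**2 - 4 * Delta))

# nontrivial root of Lambda0(x,q) = 0, i.e. the WKB slope Phi_0'(x):
q_star = beta / F0 + alpha / F1
# QSS estimate of the slope, -Fbar/D, for comparison:
rho0, rho1 = alpha / (alpha + beta), beta / (alpha + beta)
Fbar = rho0 * F0 + rho1 * F1
D = alpha * beta * (F0 - F1)**2 / (alpha + beta)**3   # standard result
q_qss = -Fbar / D
```

Both $q=0$ and $q=q_\ast$ annihilate the Perron eigenvalue, while the QSS slope agrees with $q_\ast$ only near the zeros of $\overline{F}$, illustrating why the two quasipotentials can differ significantly away from fixed points.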
For this simple model, it is also straightforward to determine the various prefactors in equation (2.48). For example, the normalized positive eigenvector $\mathbf{Z}^{(0)}$ has the components
Since $F_{0}(x)<0$ and $F_{1}(x)>0$ for $x\in\varSigma$, it follows from equation (2.52) that $Z_{0}^{(0)}$ is positive. The components of the adjoint eigenvector S satisfy
It then follows from equation (2.44) that the first correction to the quasipotential satisfies
Hence
Finally, $D(x_{0})$ is given by equation (2.26).
Fredholm Alternative Theorem
Consider an M-dimensional linear inhomogeneous equation $\mathbf{A}\mathbf{z}=\mathbf{b}$ with $\mathbf{z},\mathbf{b}\in{\mathbb {R}}^{M}$. Suppose that the $M\times M$ matrix A has a nontrivial null space, and let u be a null vector of the adjoint matrix $\mathbf{A}^{\dagger}$, that is, $\mathbf{A}^{\dagger}\mathbf{u}=0$. The Fredholm alternative theorem for finite-dimensional vector spaces states that the inhomogeneous equation has a (nonunique) solution for z if and only if $\mathbf{u}\cdot \mathbf{b}=0$ for all null vectors u. Let us apply this theorem to equation (2.18) for fixed x, t. The one-dimensional null space is spanned by the vector with components $u_{n}=1$, since $\sum_{n}u_{n}A_{nm}=\sum_{n}A^{\dagger}_{mn}u_{n}=0$. Hence equation (2.18) has a solution, provided that
This immediately follows since $\sum_{n}p_{n}(x)=1$ and $\sum_{n}p_{n}^{*}(x)F_{n}(x)= \overline{F}(x)$ for all x.
Perron–Frobenius Theorem
If T is an irreducible positive finite matrix, then

1. there is a simple eigenvalue $\lambda_{0}$ of T that is real and positive, with positive left and right eigenvectors;

2. the remaining eigenvalues λ satisfy $|\lambda|<\lambda_{0}$.
If $T_{nm}=W_{nm}/\sum_{k}W_{km}$, where W is an irreducible transition matrix, then $\lambda_{0}=1$, the left positive eigenvector is $\psi=(1,\ldots,1)$, and the right positive eigenvector is the stationary distribution ρ. In the case of the matrix operator $\mathbf{L}(x)$ with components $L_{nm}(x):=A_{nm}(x)+qF_{n}(x)\delta _{n,m}$, which appears in the eigenvalue equation (2.39), it is clear that not all components of the matrix are positive for a given $x\in\varSigma$. However, taking $\zeta>\sup_{x\in\varSigma}\|\mathbf{L}(x)\|_{\infty}$, the matrix $\mathbf{L}(x)+\zeta\mathbf{I}$ satisfies the conditions of the Perron–Frobenius theorem for all $x\in\varSigma$.
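The shift trick can be illustrated numerically: for a randomly generated generator A and a diagonal perturbation q diag(F) (all values below are illustrative assumptions), the Perron eigenvalue and positive eigenvector of L are recovered from the positive matrix L + ζI.

```python
import numpy as np

# Sketch of the shift trick: L = A + q*diag(F) is not a positive matrix,
# but L + zeta*I is for large enough zeta, so the Perron-Frobenius theorem
# applies, and the principal eigenvalue of L is the shifted Perron
# eigenvalue minus zeta.
rng = np.random.default_rng(0)
N = 4
A = rng.random((N, N)) + 0.1             # strictly positive off-diagonal
np.fill_diagonal(A, 0.0)
np.fill_diagonal(A, -A.sum(axis=0))      # columns sum to zero: a generator
L = A + (-0.8) * np.diag(rng.random(N))  # q*diag(F), assumed values

zeta = np.max(np.abs(L)) * N + 1.0       # large enough that L + zeta*I > 0
w, V = np.linalg.eig(L + zeta * np.eye(N))
k = np.argmax(w.real)                    # Perron eigenvalue of shifted matrix
Lambda0 = w[k].real - zeta               # principal eigenvalue of L itself
R0 = np.real(V[:, k])
R0 = R0 / R0.sum()                       # positive right eigenvector
```

The recovered eigenvector is strictly positive, as the theorem guarantees for the shifted matrix, and the eigenpair satisfies the original (unshifted) eigenvalue equation.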
Stochastic Ion Channels and Membrane Voltage Fluctuations
The generation and propagation of a neuronal action potential arises from nonlinearities associated with active membrane conductances. Ions can diffuse in and out of the cell through ion-specific channels embedded in the cell membrane; see Fig. 3. Ion pumps within the cell membrane maintain concentration gradients such that there is a higher concentration of Na^{+} and Ca^{2+} outside the cell and a higher concentration of K^{+} inside the cell. The membrane current through a specific channel varies approximately linearly with changes in the voltage v relative to some equilibrium or reversal potential, which is the potential at which there is a balance between the opposing effects of diffusion and electrical forces. (We will focus on a space-clamped model of a neuron whose cell body is taken to be isopotential.) Summing over all channel types, the total membrane current (flow of positive ions) leaving the cell through the cell membrane is
where $g_{s}$ is the conductance due to channels of type s, and $V_{s}$ is the corresponding reversal potential.
Recordings of the current flowing through single channels indicate that channels fluctuate rapidly between open and closed states in a stochastic fashion. Nevertheless, most models of a neuron use deterministic descriptions of conductance changes, under the assumption that there are a large number of approximately independent channels of each type. It then follows from the law of large numbers that the fraction of channels open at any given time is approximately equal to the probability that any one channel is in an open state. The conductance $g_{s}$ for ion channels of type s is thus taken to be the product $g_{s}=\bar{g}_{s} P_{s}$, where $\bar{g}_{s}$ is equal to the density of channels in the membrane multiplied by the conductance of a single channel, and $P_{s}$ is the fraction of open channels. The voltage dependence of the probabilities $P_{s}$ in the case of a delayed-rectifier K^{+} current and a fast Na^{+} current was originally obtained by Hodgkin and Huxley [76] as part of their Nobel-prize-winning work on the generation of action potentials in the squid giant axon. The delayed-rectifier K^{+} current is responsible for terminating an action potential by repolarizing a neuron. Opening of the K^{+} channel requires structural changes in four identical and independent subunits, so that $P_{\mathrm{K}} = n^{4}$, where n is the probability that any one gate subunit has opened. In the case of the fast Na^{+} current, which is responsible for the rapid depolarization of a cell leading to action potential generation, the probability of an open channel takes the form $P_{\mathrm{Na}}=m^{3} h$, where $m^{3}$ is the probability that an activating gate is open, and h is the probability that an inactivating gate is open. Depolarization causes m to increase and h to decrease, whereas hyperpolarization has the opposite effect.
The dynamics of the gating variables m, n, h are usually formulated in terms of a simple kinetic scheme that describes voltagedependent transitions of each gating subunit between open and closed states. More specifically, for each $Y \in\{m,n,h \}$,
where $\alpha_{Y}(v)$ is the rate of the transition $\mathit{closed} \rightarrow \mathit{open}$, and $\beta_{Y}(v)$ is the rate of the reverse transition $\mathit{open} \rightarrow \mathit{closed}$. From basic thermodynamic arguments, the opening and closing rates are expected to be exponential functions of the voltage v:
Hodgkin and Huxley originally fitted exponential-like functions to the experimental data obtained from the squid axon. The corresponding conductance-based model (in the absence of synaptic inputs) can then be written in the form
with
Here $I_{\mathrm{L}}=g_{\mathrm{L}}(v - V_{\mathrm{L}})$ is called a leak current, which represents the passive flow of ions through non-gated channels.
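At clamped voltage the kinetic scheme for a single gating variable reduces to the scalar ODE $\dot{Y}=\alpha_{Y}(v)(1-Y)-\beta_{Y}(v)Y$, which relaxes to $Y_{\infty}=\alpha/(\alpha+\beta)$ with time constant $1/(\alpha+\beta)$. A minimal sketch with illustrative (unfitted) rates:

```python
import math

def gate_relaxation(alpha, beta, Y0, T, dt=1e-4):
    """Forward-Euler integration of dY/dt = alpha*(1 - Y) - beta*Y."""
    Y = Y0
    for _ in range(int(T / dt)):
        Y += dt * (alpha * (1.0 - Y) - beta * Y)
    return Y

# Illustrative (unfitted) opening/closing rates at some clamped voltage.
alpha, beta = 0.8, 0.2
Y_inf = alpha / (alpha + beta)   # steady-state open probability
tau = 1.0 / (alpha + beta)       # relaxation time constant

Y_num = gate_relaxation(alpha, beta, Y0=0.0, T=10 * tau)
assert abs(Y_num - Y_inf) < 1e-3   # relaxed to Y_inf after ~10 tau
```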
Morris–Lecar Model
It is often convenient to consider a simplified planar model of a neuron, which tracks the membrane voltage v and a recovery variable w that represents the fraction of open potassium channels. The advantage of a two-dimensional model is that we can use phase-plane analysis to develop a geometric picture of neuronal spiking. One well-known example is the Morris–Lecar (ML) model [100]. Although this model was originally developed to model Ca^{2+} spikes in molluscs, it has been widely used to study neural excitability for Na^{+} spikes [48], since it exhibits many of the same bifurcation scenarios as more complex models. The ML model has also been used to investigate subthreshold membrane potential oscillations (STOs) due to persistent Na^{+} currents [27, 145]. Another advantage of the ML model is that it is straightforward to incorporate intrinsic channel noise [80, 109, 114, 132]. To capture the fluctuations in membrane potential from stochastic switching in voltage-gated ion channels, we will consider a stochastic version of the ML model that includes both discrete jump processes (to represent the opening and closing of Ca^{2+} or Na^{+} ion channels) and a two-dimensional continuous-time piecewise deterministic process (to represent the membrane potential and recovery variable w). We thus have an explicit example of a two-dimensional PDMP. (We can also consider fluctuations in the opening and closing of the K^{+} ion channels, in which case w is replaced by an additional discrete stochastic variable representing the fraction of open K^{+} channels [114, 132]. This would yield a one-dimensional PDMP for the voltage alone.)
Deterministic Model
First, consider a deterministic version of the ML model [100] consisting of a fast inward calcium current (Ca^{2+}), a slow outward potassium current (K^{+}), a leak current (L), and an applied current ($I_{\mathrm{app}}$). (In [80, 114] the inward current is interpreted as a Na^{+} current, but the same parameter values as the original ML model are used.) For simplicity, each ion channel is treated as a two-state system that switches between an open and a closed state; the more detailed subunit structure of ion channels is neglected [64]. The membrane voltage v evolves as
where w is the K^{+} gating variable. It is assumed that Ca^{2+} channels are in the quasi-steady state $a_{\infty}(v)$, thus eliminating the fraction of open Ca^{2+} channels as a variable. For $i=\mathrm{K},\mathrm{Ca},{\mathrm{L}}$, let $f_{i}=g_{i}(V_{i}-v)$, where $g_{i}$ are ion conductances, and $V_{i}$ are reversal potentials. The opening and closing rates of the ion channels, which depend only on the membrane potential v, are represented by α and β, respectively, so that
For the ML model,
with $\beta_{\mathrm{Ca}}$, $v_{\mathrm{Ca},1}$, $v_{\mathrm{Ca},2}$ constant. The transition rates $\alpha_{\mathrm{K}}(v)$ and $\beta_{\mathrm{K}}(v)$ are chosen such that
The dynamics of this system can be explored using phase-plane analysis, as illustrated in Fig. 4 for an excitable regime. Exploiting the fact that the K^{+} dynamics is much slower than the voltage and Ca^{2+} dynamics, we can use a slow/fast analysis to investigate the initiation of an action potential following a perturbing stimulus [81]. The ML model can also support oscillatory solutions; see also Sect. 6.
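A minimal forward-Euler integration of the deterministic ML equations illustrates relaxation to rest in the excitable regime. The parameter values below are illustrative assumptions loosely based on standard ML parameter sets, not values taken from the text:

```python
import math

# Illustrative parameters (assumptions for this sketch, not the review's).
C, gCa, gK, gL = 20.0, 4.4, 8.0, 2.0
VCa, VK, VL = 120.0, -84.0, -60.0
v1, v2, v3, v4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

def a_inf(v): return 0.5 * (1 + math.tanh((v - v1) / v2))   # Ca2+ quasi-steady state
def w_inf(v): return 0.5 * (1 + math.tanh((v - v3) / v4))   # K+ activation
def tau_w(v): return 1.0 / math.cosh((v - v3) / (2 * v4))   # K+ time scale

def integrate(v, w, I_app=0.0, T=500.0, dt=0.01):
    """Forward-Euler integration of the planar ML equations."""
    for _ in range(int(T / dt)):
        dv = (gCa * a_inf(v) * (VCa - v) + gK * w * (VK - v)
              + gL * (VL - v) + I_app) / C
        dw = phi * (w_inf(v) - w) / tau_w(v)
        v, w = v + dt * dv, w + dt * dw
    return v, w

# In the excitable regime (I_app = 0) the trajectory relaxes to rest
# near the leak reversal potential.
v, w = integrate(-60.0, 0.0)
assert -65.0 < v < -55.0 and 0.0 <= w <= 1.0
```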
Stochastic Model
The deterministic ML model holds under the assumption that the number of ion channels is very large, so that the ionic currents can be approximated by their mean-field averages. However, it is known that channel noise does affect membrane potential fluctuations and thus neural function [146]. To account for ion channel fluctuations, we consider a stochastic version of the ML model [80, 114, 132], in which the number N of Ca^{2+} channels is taken to be relatively small. (For simplicity, we ignore fluctuations in the K^{+} channels by taking the number of the latter to be very large.) Let $n(t)$ be the number of open Ca^{2+} channels at time t, which means that there are $N-n(t)$ closed channels. The voltage and recovery variables then evolve according to the following PDMP:
for $n(t)=n$. Suppose that individual channels switch between open (O) and closed (C) states via a two-state Markov chain,
It follows that at the population level, the number of open ion channels evolves according to a birth–death process with
Note that we have introduced the small parameter ε to reflect the fact that Ca^{2+} channels open and close much faster than the relaxation dynamics of the system $(v,w)$. This is consistent with the parameter values of the ML model, where the slowness of the K^{+} channels is reflected by the fact that the parameter $\phi =0.04~\mbox{ms}^{-1}$ and the membrane rate constant is of order $0.05~\mbox{ms}^{-1}$, whereas the transition rates of Ca^{2+} or Na^{+} channels are of order $1~\mbox{ms}^{-1}$. The stationary density of the birth–death process is
The corresponding CK equation is
Comparison with the general CK equations (2.6) shows that $x=(v,w)$, $\nabla= (\partial_{v},\partial_{w})^{\top}$,
and A is the tridiagonal generator matrix of the birth–death process. Carrying out the QSS diffusion approximation of Sect. 2.2 then yields the following Ito FP equation for $C(v,w,t)=\sum_{n=0}^{N}p_{n}(v,w,t)$ (see also [27]):
with
and
The last line follows from a calculation in [80].
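At clamped voltage the birth–death process has rates $\omega_{+}(n)=\alpha(N-n)$ and $\omega_{-}(n)=\beta n$, whose stationary density is binomial with single-channel open probability $\alpha/(\alpha+\beta)$. A Gillespie-style sketch (with illustrative rates) confirms the stationary mean:

```python
import numpy as np

rng = np.random.default_rng(2)
N, alpha, beta = 50, 1.0, 0.5     # illustrative channel number and rates
p_open = alpha / (alpha + beta)   # single-channel open probability

# Gillespie simulation of n -> n+1 at rate alpha*(N-n), n -> n-1 at beta*n.
n, t, T, occupancy = 0, 0.0, 1000.0, 0.0
while t < T:
    up, down = alpha * (N - n), beta * n
    rate = up + down
    dt = rng.exponential(1.0 / rate)
    occupancy += n * min(dt, T - t)   # time-weighted occupancy
    t += dt
    if rng.uniform() * rate < up:
        n += 1
    else:
        n -= 1

mean_n = occupancy / T
# The stationary density is binomial, so the long-time mean is N * p_open.
assert abs(mean_n - N * p_open) < 1.0
```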
Almost all previous studies of ion channel fluctuations are based on some form of diffusion approximation, thus reducing the continuous dynamics to an effective Langevin equation [32, 54, 64, 146]. However, these various approximations can lead to exponentially large errors in estimates for quantities such as the rate at which noise-driven action potentials are generated in the excitable regime. This has motivated recent work that deals directly with the CK equation (3.13). For example, Keener and Newby [80, 115] consider the simplified problem of how ion channel fluctuations affect the initiation of an action potential due to the opening of a finite number of Ca^{2+} or Na^{+} channels. The slow K^{+} channels are assumed to be frozen, so that they effectively act as a leak current, and each sodium channel is treated as a single activating subunit. The recovery variable w is thus fixed, so that the potassium current can be absorbed into the function $g(v):=[wf_{\mathrm{K}}(v)+f_{\mathrm{L}}(v)+I_{\mathrm{app}}]$. We then have the one-dimensional PDMP
and the CK equation (3.13) reduces to
Since the right-hand side of equation (3.16) is negative (positive) for large (small) v, it follows that there exists an invariant interval for the voltage dynamics. In particular, let $v_{0}$ denote the voltage for which $\dot{v}=0$ when $n=0$, and let $v_{N}$ be the corresponding voltage when $n=N$, that is, $g(v_{0})=0$ and $f_{\mathrm{Ca}}(v_{N})+g(v_{N})=0$. Then $v(t)\in[v_{0},v_{N}]$ if $v(0)\in[v_{0},v_{N}]$. In the fast switching limit $\varepsilon \rightarrow0$, we obtain the first-order deterministic rate equation
We have introduced the effective potential $\varPsi(v)$ whose minima and maxima correspond to stable and unstable fixed points of the meanfield equation. By plotting the potential Ψ, it is straightforward to show that equation (3.18) exhibits bistability for a range of stimuli $I_{\mathrm{app}}$, that is, there exist two stable fixed points $v_{\pm}$ separated by an unstable fixed point $v_{0}$; see Fig. 5. The problem of the spontaneous initiation of an action potential for small but finite ε thus reduces to an escape problem for a stochastic hybrid system, as outlined in Sect. 2.3.
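The invariant interval $[v_{0},v_{N}]$ described above can be checked by direct simulation of the one-dimensional PDMP. All functional forms and parameter values below are illustrative assumptions for this sketch (in particular, a bounded sigmoidal opening rate is used in place of the exponential thermodynamic form):

```python
import math, random

random.seed(4)

# Hypothetical parameters; illustrative assumptions only.
N, eps = 20, 0.05
gCa, VCa, gL, VL, I_app = 4.0, 120.0, 2.0, -60.0, 30.0
def f(v): return gCa * (VCa - v)               # f_Ca(v)
def g(v): return gL * (VL - v) + I_app         # frozen K+/leak + input
beta = 1.0
def alpha(v): return 2.0 / (1.0 + math.exp(-(v + 20.0) / 10.0))

v0 = VL + I_app / gL                            # g(v0) = 0
vN = (gCa * VCa + gL * VL + I_app) / (gCa + gL) # f(vN) + g(vN) = 0

# Hybrid simulation: Euler step for v, thinned channel events for n.
v, n, dt = v0, 0, 5e-4
for _ in range(int(20.0 / dt)):
    v += dt * ((n / N) * f(v) + g(v))          # piecewise deterministic flow
    if random.random() < dt * alpha(v) * (N - n) / eps:
        n += 1                                  # a closed channel opens
    elif random.random() < dt * beta * n / eps:
        n -= 1                                  # an open channel closes
    # the voltage never leaves the invariant interval [v0, vN]
    assert v0 - 1e-6 <= v <= vN + 1e-6
```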
Metastability in the Stochastic Ion Channel Model
To calculate the mean escape rate from the resting state $v_{-}$ using the Arrhenius formula (2.48), we take $v\rightarrow x$ and calculate the functions $\varPhi_{0}(x)$, $k(x)$, and $D(x)$. In the case of the stochastic ion channel model, equation (2.39) takes the explicit form
Consider the trial solution
which yields the following equation relating Γ and $\varLambda_{0}$:
Collecting terms independent of n and terms linear in n yields the pair of equations
and
Eliminating Γ from these equations gives
This yields a quadratic equation for $\varLambda_{0}$ of the form
with
Along the zeroenergy surface $\varLambda_{0}(x,q)=0$, we have $h(x,q)=0$, which yields the pair of solutions
The normalized eigenfunction for the nontrivial case is
Note that $\varPhi_{0}'(x)$ vanishes at the fixed points $x_{-},x_{0}$ of the mean-field equation (3.18), with $\varPhi_{0}'(x)<0$ for $0< x< x_{-}$ and $\varPhi_{0}'(x)>0 $ for $x_{-}< x< x_{0}$. In Fig. 6, we show solutions to Hamilton’s equations in the $(x,q)$-plane, highlighting the zero-energy maximum likelihood curve linking $x_{-}$ and $x_{0}$. Note that $N\varPhi_{0}(x_{0})$, where $\varPhi_{0}(x_{0})$ is the area enclosed by the heteroclinic connection from $x_{-}$ to $x_{0}$, gives the leading-order contribution to $\log\tau$, where τ is the mean escape time.
The next step is to determine the null eigenfunction $S_{n}(x)$ of equation (2.43), which becomes
Trying a solution of the form $S_{m}(x)=\varGamma(x)^{m}$ yields
Γ is then determined by canceling terms independent of m:
Finally, a QSS analysis of the CK equation shows that [80]
where we have used the fixed point condition $g(x_{0})=-f(x_{0})a_{\infty}(x_{0})$.
Keener and Newby [80] calculated the MFPT ($\tau= 1/\lambda _{0}$) using equation (2.48) and showed that their results agreed very well with Monte Carlo simulations of the full system, whose probability density evolves according to the CK equation (3.17). A summary of their findings is shown schematically in Fig. 7, together with the corresponding MFPT obtained using a quasi-steady-state diffusion approximation. The main observation is that although the Gaussian-like diffusion approximation does well in the superthreshold regime ($I_{\mathrm{app}}>I_{*}$), it deviates significantly from the full model results in the subthreshold regime $(I_{\mathrm{app}}< I_{*})$, where it overestimates the mean time to spike. This is related to the fact that the effective potential of the steady-state density under the diffusion approximation generates exponentially large errors in the MFPT.
In the above analysis of membrane voltage fluctuations, it was assumed that the potassium channel dynamics could be ignored during initiation of a spontaneous action potential (SAP). This corresponds to keeping the recovery variable w fixed. The resulting stochastic bistable model supported the generation of SAPs due to fluctuations in the opening and closing of fast Ca^{2+} or Na^{+} channels. However, it is also possible to generate a SAP due to fluctuations causing several K^{+} channels to close simultaneously, effectively decreasing w and thereby causing v to rise. It follows that keeping w fixed in the stochastic model excludes the latter mechanism, and thus the resulting MFPT calculation underestimates the spontaneous rate of action potentials. To investigate this phenomenon, it is necessary to consider the full stochastic ML model given by equations (3.9) with a multiplicative noise term added to the dynamics of the recovery variable, which takes into account a finite number M of potassium ion channels. An additional complication is that the full model is an excitable rather than a bistable system, so it is not straightforward to relate the generation of SAPs with a noise-induced escape problem. Nevertheless, Newby et al. [110, 114] used WKB methods to identify the most probable paths of escape from the resting state and obtained the following results:

(i)
The most probable paths of escape dip significantly below the resting value for w, indicating a breakdown of the deterministic slow/fast decomposition.

(ii)
Escape trajectories all pass through a narrow region of state space (a bottleneck or stochastic saddle node) so that, although there is no well-defined separatrix for an excitable system, it is possible to formulate an escape problem by determining the MFPT to reach the bottleneck from the resting state.
Stochastic Gap Junctions and Randomly Switching Environments
Many neurons in the mammalian central nervous system communicate via gap junctions, also known as electrical synapses [35]. Gap junctions are arrays of transmembrane channels that connect the cytoplasm (aqueous interior) of two neighboring cells and thus provide a direct diffusion pathway for ionic current and small organic molecules to move between cells. In many cases the electrical coupling is strong enough to mediate the synchronization of subthreshold and spiking activity among clusters of neurons. Cells sharing a gap junction channel each provide a hemichannel (also known as a connexon), and the two hemichannels connect head-to-head [50, 66, 127]; see Fig. 8(a). Each hemichannel is composed of proteins called connexins that exist as various isoforms named Cx23 through Cx62, with Cx43 being the most common. Just as with the opening and closing of ion channels (see Sect. 3), gap junctions can be gated by both voltage and chemical agents. There appear to be at least two gating mechanisms associated with gap junctions [31], as illustrated in Fig. 8(b). Even when a gap junction is open, it tends to restrict the flow of molecules, and this is typically modeled by assuming that a gap junction has a certain channel permeability [81]. Given that gap junctions are gated, this suggests that thermal fluctuations could result in the stochastic opening and closing of gap junctions in an analogous fashion to ion channels. There has been relatively little work on the effects of thermal noise on gap junction diffusive coupling, beyond modeling the voltage characteristics of a single stochastically gated gap junction [120]. Recently, however, there have been several studies analyzing the effective permeability of stochastic gap junctions by formulating the problem as diffusion in a domain with randomly switching internal barriers, which is modeled as a piecewise deterministic PDE [12, 19].
To introduce the basic theory, we begin with the simpler problem of diffusion in a bounded interval with a randomly switching exterior boundary [11, 92]. The latter can represent the random opening and closing of a stochastic ion channel in the plasma membrane of a cell or a subcellular compartment [17].
Diffusion on an Interval with a Switching Exterior Boundary
Consider particles diffusing in the finite interval $[0,L]$ with a fixed absorbing boundary at $x=0$ and a randomly switching gate at $x=L$; see Fig. 9. Let $N(t)\in\{0,1\}$ denote the discrete state of the gate such that it is open when $N(t)=1$ and closed when $N(t)=0$. Assume that $N(t)$ evolves according to a two-state Markov process with switching rates α, β:
Consider a particular realization $\sigma(T)=\{N(t), 0\leq t \leq T\}$ of the gate, and let $u(x,t)$ denote the population density of particles in state x at time t given the realization $\sigma(T)$ up to time T. The population density evolves according to the diffusion equation
with u satisfying the boundary conditions
and $J(x,t)=-D\partial_{x}u(x,t)$. We are assuming that when the gate is open, the system is in contact with a particle bath of density η. Note that equations (4.2a)–(4.2b) only hold between jumps in the state of the gate, so that this is an example of a piecewise deterministic PDE. Since each realization of the gate will typically generate a different solution $u(x,t)$, it follows that $u(x,t)$ is a random field.
Derivation of Moment Equations
In [18] a method has been developed for deriving moment equations of the stochastic density $u(x,t)$ in the case of particles diffusing in a domain with randomly switching boundary conditions. The basic approach is to discretize the piecewise deterministic diffusion equation (4.2a)–(4.2b) with respect to space using a finite-difference scheme and then to construct the differential CK equation for the resulting finite-dimensional stochastic hybrid system. One of the nice features of finite differences is that we can incorporate the boundary conditions into the resulting discrete linear operators. Since the CK equation is linear in the dependent variables, we can derive a closed set of moment equations for the discretized density and then retake the continuum limit. (For an alternative, probabilistic approach to deriving moment equations, see [90].)
The first step is to introduce the lattice spacing a such that $(N+1)a=L$ for integer N and let $u_{j}=u(aj)$, $j=0,\ldots, N+1$. Then we obtain the PDMP
for $n=0,1$. Away from the boundaries ($i\neq1,N$), $\varDelta ^{n}_{ij}$ is given by the discrete Laplacian
On the left-hand absorbing boundary, we have $u_{0}=0$, whereas on the right-hand boundary, we have
These can be implemented by taking
and
Let $\mathbf {u}(t)=(u_{1}(t),\ldots,u_{N}(t))$ and introduce the probability density
where we have dropped the explicit dependence on initial conditions. The probability density evolves according to the following differential CK equation for the stochastic hybrid system (4.3) (see Sect. 2.1):
where A is the matrix
Since the drift terms in the CK equation (4.6) are linear in the $u_{j}$, it follows that we can obtain a closed set of equations for the moment hierarchy.
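The structure of the discrete operators can be illustrated as follows. The matrices below are one plausible implementation of the scheme (illustrative, with absorbing boundary at $x=0$ and the open-gate bath entering as an inhomogeneous source term), together with a check that the open-gate steady state reproduces the expected linear profile:

```python
import numpy as np

# Hypothetical discretization sketch: lattice spacing a, (N+1)a = L,
# unknowns u_1..u_N, absorbing boundary u_0 = 0.
D, L, eta, N = 1.0, 1.0, 2.0, 49
a = L / (N + 1)

def laplacian(gate_open):
    """Discrete Laplacian with the right-hand boundary condition built in."""
    M = np.zeros((N, N))
    for i in range(N):
        M[i, i] = -2.0
        if i > 0:
            M[i, i - 1] = 1.0
        if i < N - 1:
            M[i, i + 1] = 1.0
    if not gate_open:
        M[N - 1, N - 1] = -1.0   # reflecting: ghost node u_{N+1} = u_N
    return (D / a**2) * M

# Open gate: ghost node u_{N+1} = eta enters as a source term.
b = np.zeros(N)
b[N - 1] = (D / a**2) * eta

# Steady state with the gate held open solves Delta^1 u + b = 0,
# reproducing the linear profile u(x) = eta * x / L.
u = np.linalg.solve(laplacian(True), -b)
x = a * np.arange(1, N + 1)
assert np.allclose(u, eta * x / L)

# Closed-gate operator is dissipative (all eigenvalues negative),
# consistent with decay to zero through the absorbing boundary.
evals = np.linalg.eigvalsh(laplacian(False))
assert np.all(evals < 0)
```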
Let
Multiplying both sides of the CK equation (4.6) by $u_{k}(t)$ and integrating with respect to u give (after integrating by parts and using that $p_{n}(\mathbf {u},t)\rightarrow0$ as $\mathbf {u}\rightarrow\infty$ by the maximum principle)
We have assumed that the initial discrete state is distributed according to the stationary distribution $\rho_{n}$, so that
Equations for rthorder moments $r\geq2$ can be obtained in a similar fashion. Let
Multiplying both sides of the CK equation (4.6) by $u_{k_{1}}(t)\cdots u_{k_{r}}(t)$ and integrating with respect to u give (after integration by parts)
Finally, taking the continuum limit $a\rightarrow0$ in equation (4.9) and setting
we obtain the firstorder moment equations
with
and
A similar procedure can be used to derive higher-order moment equations [18]. For example, the second-order moments
satisfy the equations
and couple to the firstorder moments via the boundary conditions
and
One of the important points to highlight regarding the stochastic diffusion equation (4.2a)–(4.2b) is that it describes a population of particles diffusing in the same random environment. This means that although the particles are noninteracting, statistical correlations arise at the population level, that is, $\mathbb{E}[u(x,t)u(y,t)]\neq \mathbb{E}[u(x,t)]\,\mathbb{E}[u(y,t)]$. The inequality follows from the observation that the second-order moment equations are nonseparable, that is,
Analysis of First-Order Moments
The steadystate solution of equations (4.13a) and (4.13b) can be determined explicitly. First, note that
Since equations (4.13a) and (4.13b) have a globally attracting steady state, it follows that
where $V_{n}(x)\equiv\lim_{t\to\infty}V_{n}(x,t)$. Adding equations (4.13a) and (4.13b) and using the boundary conditions in equation (4.14) give
where $\kappa=V_{0}(L)$ has to be determined. Hence
Setting $V_{1}=VV_{0}$ in equation (4.13a) then shows that
with $V_{0}(0)=0,\partial_{x}V_{0}(L)=0$. It follows that
with $\xi=\sqrt{(\alpha+ \beta)/D}$. The boundary conditions imply that
which yields the solution
Finally, we obtain κ by setting $x=L$:
which can be rearranged to yield
In the limit $\xi\rightarrow\infty$ (fast switching),
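The steady-state first-order moment equations can also be solved numerically. The sketch below assumes the convention that the gate opens at rate α and closes at rate β (so $\rho_{1}=\alpha/(\alpha+\beta)$), with moment boundary conditions $V_{1}(L)=\rho_{1}\eta$ and $\partial_{x}V_{0}(L)=0$; these conventions are assumptions for this illustration. It checks that the summed moment $V=V_{0}+V_{1}$ is linear in x, as implied by adding the two moment equations:

```python
import numpy as np

# Discretized steady-state moment equations on [0, L]:
# D V0'' - alpha V0 + beta V1 = 0,  D V1'' - beta V1 + alpha V0 = 0,
# with V0(0) = V1(0) = 0, reflecting right end for V0 (gate closed),
# and V1(L) = rho1 * eta (gate open).  Conventions are assumptions.
D, L, eta = 1.0, 1.0, 1.0
alpha, beta = 3.0, 2.0
rho1 = alpha / (alpha + beta)
N = 100
a = L / (N + 1)

A = np.zeros((2 * N, 2 * N))
b = np.zeros(2 * N)
for i in range(N):
    for (off, sw_out, sw_in) in ((0, alpha, beta), (N, beta, alpha)):
        row = off + i
        A[row, row] = -2.0 * D / a**2 - sw_out
        A[row, (row + N) % (2 * N)] = sw_in    # coupling V0 <-> V1
        if i > 0:
            A[row, row - 1] = D / a**2
        if i < N - 1:
            A[row, row + 1] = D / a**2
A[N - 1, N - 1] += D / a**2                 # reflecting right end for V0
b[2 * N - 1] = -(D / a**2) * rho1 * eta     # Dirichlet right end for V1

V = np.linalg.solve(A, b)
V0, V1 = V[:N], V[N:]
Vsum = V0 + V1

# The summed moment V = V0 + V1 is harmonic, hence linear in x.
assert np.allclose(np.diff(Vsum, 2), 0.0, atol=1e-8)
assert Vsum[0] > 0 and Vsum[-1] < eta       # flux from bath to absorber
```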
Diffusive Flux Along a One-Dimensional Array of Electrically Coupled Neurons
Let us now consider a simple one-dimensional (1D) model of molecules diffusing along a line of M cells that are connected via gap junctions; see Fig. 10. For the moment, we ignore the effects of stochastic gating. Since gap junctions have relatively high resistance to flow compared to the cytoplasm, we assume that each intercellular membrane junction acts like an effective resistive pore with some permeability μ. Suppose that we label the cells by an integer k, $k=1,\ldots,M$, and take the length of each cell to be L. Let $u(x,t)$ for $x\in([k-1]L,kL)$ denote the particle concentration within the interior of the kth cell, and assume that it evolves according to the diffusion equation
However, at each of the intercellular boundaries $x=l_{j}\equiv jL$, $j=1,\ldots,M-1$, the concentration is discontinuous due to the permeability of the gap junctions. Conservation of diffusive flux across each boundary then implies that
where the superscripts + and − indicate that the function values are evaluated as limits from the right and left, respectively. Finally, it is necessary to specify the exterior boundary conditions at $x=0$ and $x=ML$. We impose Dirichlet boundary conditions with $u(0,t)=\eta$ and $u(ML,t)=0$.
In steady state, there is a constant flux $J_{0}=DK_{0}$ through the system, and the steady-state concentration takes the form
for the $M-1$ unknowns $K_{0}$, $U_{k}=u((k-1)L)$, $k=2,\ldots,M-1$. These are determined by imposing the $M-1$ boundary conditions (4.26) in steady state:
Rearranging equations (4.28a) gives
which can be iterated to give
Since we also have
it follows that [81]
Introducing the effective diffusion coefficient $D_{e}$ according to
we see that, for large M,
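The steady-state flux follows from conservation alone: the total concentration drop η is the sum of M diffusive drops $K_{0}L$ and $M-1$ junctional jumps $J_{0}/\mu$, giving $J_{0}=D\eta/[ML+(M-1)D/\mu]$. A short check of this bookkeeping and of the large-M behavior of $D_{e}$ (assuming, for this sketch, that $D_{e}$ is defined through $J_{0}=D_{e}\eta/(ML)$):

```python
# Steady-state flux through M cells of length L coupled by M-1 gap
# junctions of permeability mu, with u(0) = eta and u(ML) = 0.
# Within each cell the profile is linear with slope -K0, and each
# junction carries the same flux J0 = D*K0 = mu * (concentration jump).
D, L, eta, mu, M = 1.0, 1.0, 1.0, 0.5, 50

# Total drop: eta = K0*M*L + (M-1)*J0/mu, hence
J0 = D * eta / (M * L + (M - 1) * D / mu)

# Effective diffusion coefficient (assumed definition J0 = De*eta/(M*L)).
De = J0 * M * L / eta
assert De < D                          # junctions always reduce transport
# For large M, De approaches D / (1 + D/(mu*L)).
De_inf = D / (1 + D / (mu * L))
assert abs(De - De_inf) < 0.05 * De_inf
```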
Effective Permeability for Cells Coupled by Stochastically Gated Gap Junctions
This deterministic model has recently been extended to incorporate the effects of stochastically gated gap junctions [12]. The resulting model can be analyzed by extending the theory of diffusion in domains with randomly switching exterior boundaries [18] (see Sect. 4.1) to the case of switching interior boundaries. Solving the resulting first-order moment equations of the stochastic concentration allows us to calculate the mean steady-state concentration and flux and thus extract the effective single-gate permeability of the gap junctions.
We start by looking at a pair of stochastically coupled cells; see Fig. 11. For the sake of generality, we allow the two cells to have different lengths l and $2L-l$ with $0< l\leq L$. The basic problem can be formulated as follows: We wish to solve the diffusion equation in the open domain $\varOmega=\varOmega_{1}\cup\varOmega_{2}$ with $\varOmega_{1}=(0,l)$ and $\varOmega_{2}=(l,2L)$, with the interior boundary between the two subdomains at $x=l$ randomly switching between an open and a closed state. Let $N(t)$ denote the discrete state of the gate at time t with $N(t)=0$ if the gate is closed and $N(t)=1$ if it is open. Assume that transitions between the two states $n=0,1$ are described by the two-state Markov process (4.1). The random opening and closing of the gate means that particles diffuse in a random environment according to the piecewise deterministic equation
with u satisfying Dirichlet boundary conditions on the exterior boundaries of Ω,
and $N(t)$dependent boundary conditions on the interior boundary at $x=l$:
and
where $l^{\pm}=\lim_{\varepsilon \rightarrow0^{+}}l\pm \varepsilon $. That is, when the gate is open, there is continuity of the concentration and the flux across $x=l$, whereas when the gate is closed, the right-hand boundary of $\varOmega_{1}$ and the left-hand boundary of $\varOmega_{2}$ are reflecting. For simplicity, we assume that the diffusion coefficient is the same in both compartments, so that the piecewise nature of the solution is solely due to the switching gate. For illustration, we take the exterior boundary conditions to be Dirichlet, but the analysis is easily modified, for example, in the case of a Neumann boundary condition at one of the ends.
First-Order Moment Equations and Effective Permeability ($M=2$)
To determine the effective permeability of a stochastically gated gap junction, we need to calculate the mean of the concentration $u(x,t)$ defined by equation (4.19). The corresponding first-order moment equations for $V_{n}$ can be derived along similar lines to the case of 1D diffusion in a domain with an exterior gate. We thus obtain equations (4.13a) and (4.13b) for $x\in\varOmega_{1}\cup\varOmega_{2}$ with exterior boundary conditions [12]
and interior boundary conditions
As in Sect. 4.1, we will analyze the steadystate solution. From the interior boundary conditions (4.38) we set
with $K_{1}$ to be determined later by imposing $V_{1}(l^{})=V_{1}(l^{+})$. Adding equations (4.13a) and (4.13b) and imposing the boundary conditions then give
and
This yields the piecewise linear solution
Since $V_{1}=VV_{0}$, we can rewrite equation (4.13a) as
with $V_{0}(0)=\rho_{1} \eta$, $V_{0}(2L)=0$, and $\partial _{x}V_{0}(l^{})=0=\partial_{x}V_{0}(l^{+})$. Substituting for $V(x)$ using equation (4.41), we obtain a piecewise solution of the form
with $\xi=\sqrt{(\alpha+\beta)/D}$. We have imposed the exterior boundary conditions. The interior boundary conditions for $V_{0}$ then determine the coefficients B, C in terms of $K_{1}$ so that we find
Finally, we determine the unknown coefficient $K_{1}$ by requiring that $V_{1}(x)$ is continuous across $x=l$, that is,
which yields the result
This can be rearranged to yield the following result for the mean flux through the gate, $J_{0}=DK_{0}$:
Comparison with equation (4.30) for $M=2$ and $l=L$ implies that the stochastically gated gap junction has the effective permeability $\mu_{e}$ with
It is useful to note some asymptotic properties of the solution given by equations (4.41) and (4.45). First, in the fast switching limit $\xi\rightarrow\infty$, we have $J_{0}\rightarrow\eta D/2L$, $\mu_{e}\rightarrow\infty$, and equation (4.41) reduces to the continuous steadystate solution
The mean flux through the gate is the same as the steady-state flux without a gate. On the other hand, for finite switching rates, the mean flux $J_{0}$ is reduced. In the limit $\alpha\rightarrow0$ (gate always closed), $J_{0}\rightarrow0$, so that $V(x)=\eta$ for $x\in[0,l)$ and $V(x)=0$ for $x\in(l,2L]$. Finally, in the limit $l\rightarrow2L$, we recover the result for 1D diffusion in a single domain with a switching external boundary [11, 92] (see also equation (4.24)):
Multicell Model ($M>2$)
Let us return to the general case of a line of M identical cells of length L coupled by $M-1$ gap junctions at positions $x=l_{k}=kL$, $1\leq k \leq M-1$; see Fig. 10. (Interestingly, such a model is formally equivalent to a signaling model analyzed in [94].) The analysis is considerably more involved if the gap junctions physically switch because there are significant statistical correlations arising from the fact that all the particles move in the same random environment, which exists in $2^{M-1}$ different states if the gates switch independently [12]. Therefore we will restrict the analysis to the simpler problem in which individual particles independently switch conformational states: if a particle is in state $N(t)=0$, then it cannot pass through a gate, whereas if it is in state $N(t)=1$, then it can. Hence, from the particle perspective, either all gates are open, or all gates are closed. If $V_{n}(x,t)$ is the concentration of particles in state n, then we have the pair of PDEs given by equations (4.13a) and (4.13b) on the domain $x\in[0,ML]$, except now the exterior boundary conditions are
and the interior boundary conditions at the jth gate are
These equations can be solved along similar lines to the twocell case [12]. This ultimately yields the following expression for the flux $J_{0}$:
We deduce that the effective permeability $\mu_{e}(M)$ in the case of M cells with $M-1$ independent, stochastically gated gap junctions is
This reduces to equation (4.46) when $M=2$. We conclude that the effective singlegate permeability is Mdependent with
Volume Neurotransmission
Although many neurons communicate via synapsespecific connections or gap junctions, it is also possible for populations of neurons to make nonspecific connections via volume neurotransmission [33, 58]; see Fig. 12. For example, neurons may send projections to some distant nucleus or subnucleus, where they increase the concentration of neurotransmitter within the extracellular space surrounding the nucleus. The resulting increase in concentration modulates the electrophysiological neural activity in the distant region by binding of neurotransmitter to receptors on the target cells. One important class of volume transmission involves axonal projections transmitting neuromodulators such as dopamine and serotonin from brain stem nuclei to other brain regions such as the striatum and cortex.
Recently, volume transmission has been formulated as another example of diffusion in a randomly switching environment [91]. Here, the environment is the extracellular volume surrounding the target cells, whereas each axonal terminal acts as a source of neurotransmitter when the source neuron fires and as a sink for neurotransmitter otherwise; the latter is due to the reuptake of neurotransmitter into the terminals. Lawley et al. [91] consider diffusion on a finite interval $[0,L]$ as in Sect. 4.1 but with modified boundary conditions. One example assumes a reflecting boundary at $x=0$ and a switching boundary at $x=L$ due to the presence of a source neuron at the right-hand side. The boundary condition thus switches between absorbing when the neuron is not firing (quiescent state $N(t)=0$) and constant flux when the neuron is firing (firing state $N(t)=1$). This yields the system of equations
with u satisfying the boundary conditions
Analysis of the first-order moment equations for $V_{n}(x)=\mathbb {E}[u(x,t)1_{N(t)=n}]$ establishes that in steady state the total mean concentration $V=V_{0}(x)+V_{1}(x)$ is independent of spatial location x with [91]
where
Here α is the switching rate from the quiescent state to the firing state, and β is the switching rate of the reverse transition. Thus we observe the same mean concentration V throughout the extracellular domain, even though some parts are further away from the source than others. Consistent with intuition, V increases with μ, which reflects the fact that the neuron on the boundary fires more often. Now suppose that both α and β become large (fast switching) but their ratio μ is fixed. In this case, η becomes large, and $V\rightarrow0$. This is due to the fact that any neurotransmitter that is released is rapidly reabsorbed at the same terminal. (Note that if the left-hand boundary is taken to be absorbing rather than reflecting, $u(0,t)=0$, then the concentration is a linear function of x; this could represent a glial cell on the left-hand boundary, which absorbs neurotransmitter but does not fire.) The authors also consider the case where there is a source neuron at each end, so that each boundary switches according to an independent two-state Markov process. If we denote the two Markov processes by the discrete variables $M(t)\in\{0,1\}$ and $N(t)\in\{0,1\}$, respectively, then the boundary conditions become [91]
and
Now we find that the mean concentration is again uniform, provided that the two neurons are identical; otherwise, it is a linear function of x.
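Before moving on, it is worth noting that the switching boundary on its own is a two-state Markov chain. The following sketch (all rates hypothetical) estimates the fraction of time the boundary spends in the firing state, which converges to the stationary value $\alpha/(\alpha+\beta)$:

```python
import numpy as np

def firing_fraction(alpha, beta, T, rng):
    """Simulate the two-state boundary N(t): quiescent (0) -> firing (1)
    at rate alpha, firing -> quiescent at rate beta; return the fraction
    of the interval [0, T] spent in the firing state."""
    t, state, t_firing = 0.0, 0, 0.0
    while t < T:
        rate = alpha if state == 0 else beta
        dwell = min(rng.exponential(1.0 / rate), T - t)
        if state == 1:
            t_firing += dwell
        t += dwell
        state = 1 - state
    return t_firing / T

rng = np.random.default_rng(1)
alpha, beta = 2.0, 3.0   # hypothetical switching rates
frac = firing_fraction(alpha, beta, T=5000.0, rng=rng)
print(frac, alpha / (alpha + beta))   # empirical fraction vs stationary value
```

The same dwell-time construction underlies all of the piecewise deterministic simulations discussed below; only the dynamics between switching events changes.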
Stochastic Vesicular Transport in Axons and Dendrites
The efficient delivery of mRNA, proteins, and other molecular products to their correct location within a cell (intracellular transport) is of fundamental importance to normal cellular function and development [1, 23]. The challenges of intracellular transport are particularly acute for neurons, which are amongst the largest and most complex cells in biology, particularly with regard to the efficient trafficking of newly synthesized proteins from the cell body or soma to distant locations on the axon and dendrites. In healthy cells, the regulation of mRNA and protein trafficking within a neuron provides an important mechanism for modifying the strength of synaptic connections between neurons [9, 34, 72, 139], and synaptic plasticity is generally believed to be the cellular substrate of learning and memory. On the other hand, various types of dysfunction in protein trafficking appear to be a major contributory factor to a number of neurodegenerative diseases associated with memory loss, including Alzheimer's disease [38].
Broadly speaking, there are two basic mechanisms for intracellular transport: passive diffusion within the cytosol or the surrounding plasma membrane of the cell, and active motor-driven transport along polymerized filaments such as microtubules and F-actin that comprise the cytoskeleton. Newly synthesized products from the nucleus are mainly transported to other intracellular compartments or the cell membrane via a microtubular network that projects radially from organizing centres (centrosomes) and forms parallel fiber bundles within axons and dendrites. The same network is used to transport degraded cell products back to the nucleus. Moreover, various animal viruses, including HIV, take advantage of microtubule-based transport in order to reach the nucleus from the cell surface and release their genome through nuclear pores [36]. Microtubules are polarized filaments with biophysically distinct plus and minus ends. In general, a given molecular motor will move with a bias toward a specific end of the microtubule; for example, kinesin moves toward the (+) end and dynein moves toward the (−) end. Microtubules are arranged throughout an axon or dendrite with a distribution of polarities: in axons and distal dendrites, they are aligned with the (−) ends pointing to the soma (plus-end-out), whereas in proximal dendrites they have mixed polarity.
Axons of neurons can extend up to 1 m in large organisms, but the synthesis of many of their components occurs in the cell body. Axonal transport is typically divided into two main categories based upon the observed speed [29]: fast transport (1–9 μm/s) of organelles and vesicles, and slow transport (0.004–0.6 μm/s) of soluble proteins and cytoskeletal elements. Slow transport is further divided into two groups: cytoskeletal polymers such as microtubules and neurofilaments are transported in slow component A, whereas actin and actin-bound proteins are transported in slow component B. It had originally been assumed that the differences between fast and slow components were due to differences in transport mechanisms, but direct experimental observations now indicate that they all involve fast motors but differ in how the motors are regulated. Membranous organelles, which function primarily to deliver membrane and protein components to sites along the axon and at the axon tip, move rapidly in a unidirectional manner, pausing only briefly. In other words, they have a high duty ratio, that is, a high proportion of time during which the cargo complex is actually moving. On the other hand, cytoskeletal polymers and mitochondria move in an intermittent and bidirectional manner, pausing more often and for longer time intervals, and sometimes reversing direction. Such transport has a low duty ratio.
Another example of a transport process in neurons that exhibits bidirectionality is the trafficking of mRNA-containing granules within dendrites. There is increasing experimental evidence that local protein synthesis in the dendrites of neurons plays a crucial role in mediating persistent changes in synaptic structure and function, which are thought to be the cellular substrates of long-term memory [8, 82, 133]. This is consistent with the discovery that various mRNA species and important components of the translational machinery, such as ribosomes, are distributed in dendrites. Although many of the details concerning mRNA transport and localization are still unclear, a basic model is emerging. First, newly transcribed mRNA within the nucleus binds to proteins that inhibit translation, thus allowing the mRNA to be sequestered away from the protein-synthetic machinery within the cell body. The repressed mRNAs are then packaged into ribonucleoprotein granules that are subsequently transported into the dendrite via kinesin and dynein motors along microtubules. Finally, the mRNA is localized to an activated synapse by actin-based myosin motor proteins, and local translation is initiated following neutralization of the repressive mRNA-binding protein. Details regarding the motor-driven transport of mRNA granules in dendrites have been obtained by fluorescently labeling either the mRNA or mRNA-binding proteins and using live-cell imaging to track the movement of granules in cultured neurons [44, 86, 125]. It has been found that, under basal conditions, the majority of granules in dendrites are stationary or exhibit small oscillations around a few synaptic sites. However, other granules exhibit rapid retrograde (toward the cell body) or anterograde (away from the cell body) motion consistent with bidirectional transport along microtubules. These movements can be modified by neuronal activity, as illustrated in Fig. 13.
In particular, there is an enhancement of dendritically localized mRNA due to a combination of newly transcribed granules being transported into the dendrite and the conversion of stationary or oscillatory granules already present in the dendrite into anterograde-moving granules.
Intracellular Transport as a Velocity Jump Process
In terms of the general theme of this review, intracellular transport models are relevant because they consist of a special type of PDMP known as a velocity jump process [57, 112, 113, 122, 123]. In the case of one-dimensional transport along a filament, an individual particle moves according to the piecewise deterministic ODE
$$\frac{dx}{dt}=v_{n(t)}, $$
where the discrete random variable $n(t)\in\varGamma$ indexes the current velocity state $v_{n(t)}$. The simplest example is a particle switching between an anterograde state with velocity $v_{1}>0$ and a retrograde state with velocity $v_{0}<0$, so that we have
$$\frac{dx}{dt}=\xi(t),\qquad \xi(t)\equiv v_{n(t)}\in\{v_{0},v_{1}\}. $$
In the physics literature, $\xi(t)$ is called a dichotomous Markov noise process (DMNP); see the review [5]. The corresponding CK equation is
where α, β are the corresponding switching rates, which can depend on the current position x. In applications, we are typically interested in the marginal density $p(x,t)=p_{0}(x,t)+p_{1}(x,t)$, which can be used to calculate moments of p such as the mean and variance,
In the unbiased case, $v_{1}=v$, $v_{0}=-v$, $\alpha=\beta$, the marginal probability density $p(x,t)$ satisfies the telegrapher's equation
$$\frac{\partial^{2}p}{\partial t^{2}}+2\alpha\frac{\partial p}{\partial t}=v^{2}\frac{\partial^{2}p}{\partial x^{2}}. $$
(The individual densities $p_{0,1}$ satisfy the same equation.) The telegrapher's equation can be solved explicitly for a variety of initial conditions. More generally, the short-time behavior (for $t\ll 1/\alpha$) is characterized by wave-like propagation with $\langle x^{2}(t)\rangle\sim(vt)^{2}$, whereas the long-time behavior ($t\gg 1/\alpha$) is diffusive with $\langle x^{2}(t)\rangle\sim2Dt$, $D=v^{2}/2\alpha$. As an explicit example, the solution for the initial conditions $p(x,0)=\delta(x)$ and $\partial_{t}p(x,0)=0$ is given by
where $I_{n}$ is the modified Bessel function of nth order, and Θ is the Heaviside function. The first two terms clearly represent the ballistic propagation of the initial data along characteristics $x=\pm vt$, whereas the Bessel function terms asymptotically approach Gaussians in the large time limit. The steadystate equation for $p(x)$ is simply $p''(x)=0$, which from integrability means that $p(x)=0$ pointwise. This is consistent with the observation that the above explicit solution satisfies $p(x,t)\rightarrow0$ as $t\rightarrow \infty$.
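The crossover from ballistic to diffusive behavior is easy to check by direct simulation of the velocity jump process. The following sketch (illustrative parameter values) samples particle positions at a time $t\gg 1/\alpha$ and compares the empirical variance with $2Dt$, $D=v^{2}/2\alpha$:

```python
import numpy as np

def simulate_dmnp(n_particles, v, alpha, T, rng):
    """Sample x(T) for particles obeying dx/dt = ±v, with the sign of the
    velocity flipping at rate alpha (dichotomous Markov noise). Each
    particle starts at x = 0 with a random initial velocity sign."""
    x = np.zeros(n_particles)
    for i in range(n_particles):
        t, pos = 0.0, 0.0
        sign = rng.choice([-1.0, 1.0])
        while t < T:
            dwell = min(rng.exponential(1.0 / alpha), T - t)
            pos += sign * v * dwell
            t += dwell
            sign = -sign
        x[i] = pos
    return x

rng = np.random.default_rng(0)
v, alpha, T = 1.0, 1.0, 50.0
x = simulate_dmnp(5000, v, alpha, T, rng)
D = v**2 / (2 * alpha)
print(np.var(x), 2 * D * T)   # empirical variance vs the diffusive value ~2DT
```

With $T=50/\alpha$ the ballistic correction to the variance is already below one percent, so the empirical variance should sit close to $2DT$.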
One of the first examples of modeling intracellular transport as a velocity jump process was within the context of the slow axonal transport of neurofilaments [6, 57, 123]. Neurofilaments are space-filling cytoskeletal polymers that increase the cross-sectional area of axons, which in turn increases the propagation speed of action potentials. Radioisotopic pulse labeling experiments provide information about the transport of neurofilaments at the population level, which takes the form of a slowly moving Gaussian-like wave that spreads out as it propagates distally. Blum and Reed [6] considered the following system on the semi-infinite domain $0\leq x <\infty$:
where $p_{1}$ represents the concentration of moving neurofilament proteins, and $p_{i}$, $i>1$, represent the concentrations in $n-1$ distinct stationary states. In contrast to the two-state model of bidirectional transport, the system jumps between a single anterograde state and a set of stationary states. Conservation of mass implies that $A_{jj}=-\sum_{i\neq j}A_{ij}$. The initial condition is $p_{i}(x,0)=0$ for all $1\leq i \leq n$, $0< x<\infty$. Moreover, $p_{1}(0,t)=1$ for $t>0$. Reed et al. [123] carried out an asymptotic analysis of equations (5.4a)–(5.4b) that is related to the QSS reduction method of Sect. 2.2. Suppose that $p_{1}$ is written in the form
where u is the effective speed, $u=v{p_{1}^{\mathrm{ss}}}/{\sum_{j=1}^{n}p_{j}^{\mathrm{ss}}}$, and $\mathbf{p}^{\mathrm{ss}}$ is the steadystate solution for which $\mathbf{A}\mathbf{p}^{\mathrm{ss}}=0$. They then showed that $Q_{\varepsilon }(s,t)\rightarrow Q_{0}(s,t)$ as $\varepsilon \rightarrow0$, where $Q_{0}$ is a solution to the diffusion equation
with H the Heaviside function. The diffusivity D can be calculated in terms of v and the transition matrix A. Hence the propagating and spreading waves observed in experiments could be interpreted as solutions to an effective advection–diffusion equation. More recently, a more rigorous analysis of spreading waves has been developed [56, 57]. Note that the large-time behavior is consistent with the solution of the diffusion equation obtained in the fast switching limit.
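As an illustration of the QSS formula for the effective speed, the following sketch computes $u=v p_{1}^{\mathrm{ss}}/\sum_{j}p_{j}^{\mathrm{ss}}$ for a hypothetical $3\times3$ transition matrix A; the matrix entries are invented for illustration, and only the zero-column-sum (conservation of mass) structure is taken from the model:

```python
import numpy as np

# Hypothetical 3-state scheme: state 1 moving, states 2 and 3 stationary.
# Columns of the generator A sum to zero (conservation of mass).
A = np.array([[-0.5,  0.3,  0.1],
              [ 0.4, -0.3,  0.0],
              [ 0.1,  0.0, -0.1]])
v = 1.0   # speed in the moving state (hypothetical)

# Steady state p_ss: right null vector of A, normalized to a probability vector.
w, V = np.linalg.eig(A)
p_ss = np.real(V[:, np.argmin(np.abs(w))])
p_ss = p_ss / p_ss.sum()

# Effective speed u = v * p1_ss / sum_j pj_ss; for this A, p_ss = (0.3, 0.4, 0.3).
u = v * p_ss[0] / p_ss.sum()
print(p_ss, u)
```

Solving $A\mathbf{p}^{\mathrm{ss}}=0$ by hand for this matrix gives $\mathbf{p}^{\mathrm{ss}}\propto(1,4/3,1)$, so the effective speed is $u=0.3v$, i.e. the particle moves at the full speed v only 30% of the time.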
In contrast to these population models, direct observations of neurofilaments in axons of cultured neurons using fluorescence microscopy have demonstrated that individual neurofilaments are actually transported by fast motors but in an intermittent fashion [142]. Hence, it has been proposed that the slow rate of movement of a population is an average of rapid bidirectional movements interrupted by prolonged pauses, the so-called stop-and-go hypothesis [28, 77, 93]. Computational simulations of an associated system of PDEs show how fast intermittent transport can account for the slowly spreading wave seen at the population level. One version of the model assumes that the neurofilaments can be in one of six states [28, 93]: anterograde moving on track (state a), anterograde pausing on track (state $a_{0}$), anterograde pausing off track (state $a_{p}$), retrograde pausing on track (state $r_{0}$), retrograde pausing off track (state $r_{p}$), and retrograde moving on track (state r). The state transition diagram is shown in Fig. 14.
Tug-of-War Model of Bidirectional Motor Transport
The observation that many types of motor-driven cargo move bidirectionally along microtubules suggests that cargo is transported by multiple kinesin and dynein motors. In proximal dendrites, it is also possible that one or more identical motors move a cargo bidirectionally by switching between microtubules with different polarities. In either case, it is well established that multiple molecular motors often work together as a motor-complex to pull a single cargo [144]. An open question concerns how the set of molecular motors pulling a vesicular cargo is coordinated. One possibility is that the motors compete against each other in a tug-of-war, where an individual motor interacts with other motors through the force it exerts on the cargo. If the cargo places a force on a motor in the direction opposite to the one it prefers to move in, then the motor is more likely to unbind from the microtubule. A recent biophysical model has shown that a tug-of-war can explain the coordinated behavior observed in certain animal models [101, 102].
Suppose that a certain vesicular cargo is transported along a one-dimensional track via $N_{+}$ right-moving (anterograde) motors and $N_{-}$ left-moving (retrograde) motors. At a given time t, the internal state of the cargo–motor complex is fully characterized by the numbers $n_{+}$ and $n_{-}$ of anterograde and retrograde motors that are bound to a microtubule and thus actively pulling on the cargo. Assume that over the timescales of interest all motors are permanently bound to the cargo, so that $0 \leq n_{\pm}\leq N_{\pm}$. The tug-of-war model of Muller et al. [101, 102] assumes that the motors act independently, other than exerting a load on motors with the opposite directional preference. (However, some experimental work suggests that this is an oversimplification, that is, there is some direct coupling between motors [42].) Thus the properties of the motor complex can be determined from the corresponding properties of the individual motors together with a specification of the effective load on each motor. There are two distinct mechanisms whereby such bidirectional transport could be implemented [102]. First, the track could consist of a single polarized microtubule filament (or a chain of such filaments) on which up to $N_{+}$ kinesin motors and $N_{-}$ dynein motors can attach; see Fig. 15. Since individual kinesin and dynein motors have different biophysical properties, with the former tending to exert more force on a load, it follows that even when $N_{+}=N_{-}$, the motion will be biased in the anterograde direction. Hence, this version is referred to as an asymmetric tug-of-war model. Alternatively, the track could consist of two parallel microtubule filaments of opposite polarity such that $N_{+}$ kinesin motors can attach to one filament and $N_{-}$ dynein motors to the other. In the latter case, if $N_{+}=N_{-}$, then the resulting bidirectional transport is unbiased, resulting in a symmetric tug-of-war model.
When bound to a microtubule, the velocity of a single molecular motor decreases approximately linearly with force applied against the movement of the motor [141]. Thus, each kinesin is assumed to satisfy the linear force–velocity relation
where F is the applied force in the retrograde direction, $F_{s}$ is the stall force satisfying $v(F_{s})=0$, $v_{f}$ is the forward motor velocity in the absence of an applied force in the preferred direction of the particular motor, and $v_{b}$ is the backward motor velocity when the applied force exceeds the stall force. Dynein motors will also be taken to satisfy a linear force–velocity relation:
where now F is the force in the anterograde direction. Since the parameters associated with kinesin and dynein motors are different, we distinguish the latter by taking $F_{s}\rightarrow\widehat{F}_{s}$ etc. The original tug-of-war model assumes that the binding rate of kinesin is independent of the applied force, whereas the unbinding rate is taken to be an exponential function of the applied force:
where $F_{d}$ is the experimentally measured force scale on which unbinding occurs. The force dependence of the unbinding rate is based on measurements of the walking distance of a single kinesin motor as a function of load [129], in agreement with Kramers' rate theory [70]. Similarly, for dynein, we take
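A minimal sketch of the single-motor ingredients, with hypothetical parameter values (the magnitudes below are merely illustrative, not fits to data): the piecewise-linear force–velocity relation and the exponential load dependence of the unbinding rate:

```python
import numpy as np

# Hypothetical kinesin parameters (illustrative orders of magnitude only).
F_s = 6.0      # stall force (pN)
v_f = 1.0      # unloaded forward velocity (um/s)
v_b = 0.006    # backward velocity beyond stall (um/s)
F_d = 3.0      # detachment force scale (pN)
eps0 = 1.0     # unloaded unbinding rate (1/s)

def kinesin_velocity(F):
    """Piecewise-linear force-velocity relation: forward motion below the
    stall force, slow backward motion once the load exceeds it."""
    if F <= F_s:
        return v_f * (1.0 - F / F_s)
    return -v_b * (F / F_s - 1.0)

def unbinding_rate(F):
    """Kramers-type exponential load dependence of the unbinding rate."""
    return eps0 * np.exp(F / F_d)

print(kinesin_velocity(0.0), kinesin_velocity(F_s), unbinding_rate(F_d))
```

The dynein relations have the same form with hatted parameters and the sign convention reversed.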
Let $F_{c}$ denote the net load on the set of anterograde motors. Suppose that the molecular motors are not directly coupled to each other, so that they act independently and share the load; however, see [42]. It follows that a single anterograde motor feels the force $F_{c}/n_{+}$. Equation (5.7) implies that the binding and unbinding rates for $n_{+}$ kinesin motors take the form
Similarly, each dynein motor feels the opposing force $F_{c}/n_{-}$, so that the binding and unbinding rates for $n_{-}$ dynein motors take the form
The cargo force $F_{c}$ is determined by the condition that all the motors move with the same cargo velocity $v_{c}$. Suppose that the net velocity is in the anterograde direction, which implies $F_{c}/(n_{-}\widehat{F}_{s}) > 1 > F_{c}/(n_{+}F_{s})$. It follows from equations (5.5) and (5.6) that
This generates a unique solution for the load $F_{c}$ and cargo velocity $v_{c}$:
where
and
The corresponding expressions when the backward motors are stronger, $n_{+}F_{s} < n_{-}\widehat{F}_{s}$, are found by interchanging $(v_{f},\widehat{v}_{b})$ with $(\widehat{v}_{f},v_{b})$.
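Assuming that the matching condition takes the form $v_{f}(1-F_{c}/(n_{+}F_{s}))=\widehat{v}_{b}(F_{c}/(n_{-}\widehat{F}_{s})-1)$ in the anterograde-dominated regime (kinesins below stall, dyneins dragged beyond stall), the cargo force follows from a single linear equation. A sketch with hypothetical parameter values:

```python
# Matching condition (anterograde-dominated case, hypothetical parameters):
# kinesins move at v_f * (1 - F_c/(n_p*F_s)), dyneins are dragged beyond
# stall at vhat_b * (F_c/(n_m*Fhat_s) - 1); equating the two is linear in F_c.
n_p, n_m = 3, 2          # bound kinesin / dynein motors
F_s, Fhat_s = 6.0, 1.1   # stall forces (pN)
v_f = 1.0                # kinesin forward velocity (um/s)
vhat_b = 0.006           # dynein backward velocity beyond stall (um/s)

F_c = (v_f + vhat_b) / (v_f / (n_p * F_s) + vhat_b / (n_m * Fhat_s))
v_c = v_f * (1.0 - F_c / (n_p * F_s))

# Consistency with the assumed regime: n_m*Fhat_s < F_c < n_p*F_s.
print(F_c, v_c, n_m * Fhat_s < F_c < n_p * F_s)
```

Because the dynein backward velocity is much smaller than the kinesin forward velocity in this parameter set, the cargo force sits just below the total kinesin stall force and the cargo creeps forward slowly, which is the hallmark of a strongly loaded tug-of-war.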
The original study of [101, 102] considered the stochastic dynamics associated with transitions between different internal states $(n_{+},n_{-})$ of the motor complex, without specifying the spatial position of the complex along a 1D track. This defines a Markov process with a corresponding master equation for the time evolution of the probability distribution $P(n_{+},n_{-},t)$. They determined the steady-state probability distribution of internal states and found that the motor complex exhibits at least three different modes of behavior: (i) the motor complex spends most of its time in states with approximately zero velocity; (ii) the motor complex exhibits fast backward and forward movement interrupted by stationary pauses, which is consistent with experimental studies of bidirectional transport; and (iii) the motor complex alternates between fast backward and forward movements. The transitions between these modes of behavior depend on motor strength, which primarily depends upon the stall force. The tug-of-war model can also be formulated as a velocity jump process [112, 113]. This version of the model simultaneously keeps track of the internal state of the motor complex and its location along a 1D track. That is, the position along the track evolves according to the piecewise deterministic ODE
in between changes in the number of bound kinesin and dynein motors. The various state transitions are
As in previous examples, the corresponding CK equation can be reduced to an effective advection–diffusion equation in the limit that the rates of binding and unbinding of molecular motors are sufficiently fast [112, 113].
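The master-equation dynamics of the internal state $(n_{+},n_{-})$ can be sampled with a standard Gillespie algorithm. In the sketch below the load dependence of the unbinding rates is dropped for brevity (so the stationary distribution reduces to a product of binomials), and all parameter values are hypothetical:

```python
import numpy as np

def ssa_motor_counts(N_p, N_m, pi0, eps0, T, rng):
    """Gillespie simulation of the bound-motor numbers (n+, n-).
    For brevity the load dependence of unbinding is dropped, so each motor
    binds at rate pi0 and unbinds at rate eps0 independently. Returns the
    time-weighted occupancy distribution over states (n+, n-)."""
    n_p, n_m = 0, 0
    occupancy = np.zeros((N_p + 1, N_m + 1))  # time spent in each state
    t = 0.0
    while t < T:
        rates = np.array([pi0 * (N_p - n_p),  # kinesin binds
                          eps0 * n_p,         # kinesin unbinds
                          pi0 * (N_m - n_m),  # dynein binds
                          eps0 * n_m])        # dynein unbinds
        total = rates.sum()
        dwell = rng.exponential(1.0 / total)
        occupancy[n_p, n_m] += min(dwell, T - t)
        t += dwell
        event = rng.choice(4, p=rates / total)
        n_p += (event == 0) - (event == 1)
        n_m += (event == 2) - (event == 3)
    return occupancy / occupancy.sum()

rng = np.random.default_rng(2)
P = ssa_motor_counts(N_p=3, N_m=2, pi0=5.0, eps0=1.0, T=2000.0, rng=rng)
mean_np = sum(n * P[n, :].sum() for n in range(P.shape[0]))
print(mean_np, 3 * 5.0 / 6.0)   # mean bound kinesins vs binomial mean
```

Restoring the load-dependent unbinding rates $n_{\pm}\varepsilon_{0}\exp(F_{c}/(n_{\pm}F_{d}))$ couples the two species and produces the multimodal stationary distributions responsible for modes (i)–(iii) above.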
One of the useful features of the tug-of-war model is that it allows various biophysical processes to be incorporated into the model. For example, a convenient experimental method for changing the stall force (and hence the mode of motor behavior) is to vary the level of ATP available to the motor complex. At low $[\mathrm {ATP}]$ the motor has little fuel and is weaker, resulting in mode (i) behavior; then, as $[\mathrm {ATP}]$ increases and more fuel is available, mode (ii) behavior is seen until the stall force saturates at high values of $[\mathrm {ATP}]$, where mode (iii) behavior takes over. Thus, $[\mathrm {ATP}]$ provides a single control parameter that tunes the level of intermittent behavior exhibited by a motor complex [112]. Another potentially important signaling mechanism involves microtubule-associated proteins (MAPs). These molecules bind to microtubules and effectively modify the free-energy landscape of motor–microtubule interactions [134]. For example, tau is a MAP found in the axon of neurons and is known to be a key player in Alzheimer's disease [88]. Another important MAP, called MAP2, is similar in structure and function to tau but is present in dendrites; MAP2 has been shown to affect dendritic cargo transport [95]. Experiments have shown that the presence of tau or MAP2 on the microtubule can significantly alter the dynamics of kinesin, specifically, by reducing the rate at which kinesin binds to the microtubule [140]. This could be implemented by taking the binding rate $\gamma_{0}$ of kinesin to decrease within the domain of enhanced MAP concentration. This means that in the fast switching limit, we obtain the deterministic equation (2.8) with $\overline{F}(x)$ corresponding to an x-dependent mean velocity. Suppose, for example, that $\overline{F}(x)=\bar{v}>0$ for $x\notin [X-l,X+l]$ and that $\overline{F}(x)$ is a unimodal function for $x\in [X-l,X+l]$ with a negative minimum at $x=X$.
Here we are taking the region of enhanced tau to be an interval of length 2l centered about $x=X$. Writing $\overline{F}(x)=-\varPsi'(x-X)$, the corresponding deterministic potential has the form shown in Fig. 16. Since the mean velocity switches sign within the domain $[X-l,X+l]$, it follows that there exists one stable fixed point $x_{0}$ and an unstable fixed point $x_{*}$.
One interesting effect of a local increase in MAPs is that it can generate stochastic oscillations in the motion of the motor-complex [113]. As a kinesin-driven cargo encounters the MAP-coated trapping region, the motors unbind at their usual rate and cannot rebind. Once the dynein motors are strong enough to pull the remaining kinesin motors off the microtubule, the motor-complex quickly transitions to (−)-end-directed transport. After the dynein-driven cargo leaves the MAP-coated region, kinesin motors can then reestablish (+)-end-directed transport until the motor-complex returns to the MAP-coated region. This process repeats until the motor-complex is able to move forward past the MAP-coated region. Interestingly, particle tracking experiments have observed oscillatory behavior during mRNA transport in dendrites [44, 125]. In these experiments, motor-driven mRNA granules move rapidly until encountering a fixed location along the dendrite, where they slightly overshoot, then stop, move backward, and begin to randomly oscillate back and forth. After a period of time, lasting on the order of minutes, the motor-driven mRNA stops oscillating and resumes fast ballistic motion. Calculating the mean time to escape the trapping region can be formulated as a first passage time (FPT) problem, in which the particle starts at $x=x_{0}$ and has to make a rare transition to the unstable fixed point at $x=x_{*}$. As in the analogous problem of stochastic action potential generation (Sect. 3), the QSS diffusion approximation breaks down for small ε, and we have to use the asymptotic methods of Sect. 2.3. The details can be found elsewhere [115].
Interestingly, there is recent evidence that the selective transport of cargo into the axon depends on the localized restriction of MAP2 to the proximal axon [67]. It is known that in both mammalian and Drosophila axons, secretory vesicles are trafficked by the cooperative action of two types of kinesin motor, KIF5 and KIF1. Experimental studies of their motility indicate that MAP2 directly inhibits KIF5 motor activity and that axonal cargo entry and distribution depend on the balanced activities of KIF5 and KIF1 bound to the same cargo. That is, cargoes bound to the dominant motor KIF5 are unable to enter the axon, whereas those bound to motors that are not influenced by MAP2 are able to quickly enter the axon and move to the distal terminals. Moreover, cargoes bound to both KIF1 and KIF5 will enter the axon, but their axonal distribution will be affected by the reactivation of KIF5 past the proximal axon as the inhibition by MAP2 wears off, which slows down the transport; see Fig. 17.
Synaptic Democracy
A number of recent experimental studies of intracellular transport in axons of C. elegans and Drosophila have shown that (i) motor-driven vesicular cargo exhibits "stop and go" behavior, in which periods of ballistic anterograde or retrograde transport are interspersed by long pauses at presynaptic sites, and (ii) the capture of vesicles by synapses during the pauses is reversible, in the sense that the aggregation of vesicles can be inhibited by signaling molecules, resulting in dissociation from the target [96, 148]. It has thus been hypothesized that the combination of inefficient capture at presynaptic sites and the back-and-forth motion of motor–cargo complexes between proximal and distal ends of the axon facilitates a more uniform distribution of resources, that is, greater "synaptic democracy" [96].
The idea of synaptic democracy has previously arisen within the context of equalizing synaptic efficacies, that is, ensuring that synapses have the same potential for affecting the postsynaptic response regardless of their locations along the dendritic tree [71, 126]. An analogous issue arises within the context of intracellular transport, since vesicles are injected from the soma (anterograde transport), so that one might expect synapses proximal to the soma to be preferentially supplied with resources. In principle, this could be resolved by routing cargo to specific synaptic targets, but there is no known form of molecular address system that could support such a mechanism, particularly in light of the dynamically changing distribution of synapses. From a mathematical perspective, the issue of synaptic democracy reflects a fundamental property shared by the one-dimensional advection–diffusion equation used to model active transport and the cable equation used to model ionic current flow, namely, they generate an exponentially decaying steady-state solution in response to a localized source of active particles or current.
The hypothesized mechanism of synaptic democracy that combines bidirectional transport with reversible delivery of cargo to synaptic targets has recently been investigated in a series of modeling studies [13, 16, 20, 78]. Consider a simple three-state transport model of a single motor-complex moving on a semi-infinite 1D track as shown in Fig. 18. The motor-complex is taken to be in one of three motile states labeled by $n=0,\pm$: stationary or slowly diffusing with diffusivity $D_{0}$ ($n=0$), moving to the right (anterograde) with speed $v_{+}$ ($n=+$), or moving to the left (retrograde) with speed $v_{-}$ ($n=-$); transitions between the three states are governed by a discrete Markov process. In addition, the motor-complex can carry a single vesicle, which is reversibly exchanged with membrane-bound synaptic targets when in the state $n=0$. Let $p_{n}(x,t)$ denote the probability density that at time t the complex is at position x, $x\in(0,\infty)$, is in motile state n, and a vesicle is not bound to the complex. Similarly, let $\widehat{p}_{n}(x,t)$ be the corresponding probability density when a vesicle is bound. We allow for the possibility that the velocities and diffusivity are different for the bound state by taking $v_{\pm}\rightarrow \widehat{v}_{\pm}$ and $D_{0}\rightarrow\widehat{D}_{0}$. The evolution of the probability density is described by the following system of partial differential equations:
Here α, β are the transition rates between the slowly diffusing and ballistic states. We also assume that there is a uniform distribution c of presynaptic targets along the axon, which can exchange vesicles with the motor-complex at the rates $k_{\pm}$.
Now suppose that the transition rates α, β are fast compared to the exchange rates $k_{\pm}$ and the effective displacement rates of the complex on a fundamental microscopic length scale such as the size of a synaptic target ($l\sim1~\mu\mbox{m}$). Following Sect. 2.2, we can then use a QSS diffusion approximation to derive an advection–diffusion equation for the total probability densities
That is, we obtain the equations
and
where
and
Here
are the stationary probabilities of the three-state Markov process describing transitions between the motile states $n=0$ and $n=\pm$, respectively. We have also absorbed a factor $\rho_{0}$ into $k_{\pm}$.
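The stationary probabilities can be computed as the null vector of the generator of the three-state motile chain. The sketch below assumes one plausible convention, namely that state 0 switches to each ballistic state at rate β and each ballistic state relaxes back to 0 at rate α; both the convention and the rate values are assumptions for illustration:

```python
import numpy as np

# Assumed convention (for illustration): state 0 -> each of +, - at rate beta,
# and each ballistic state -> 0 at rate alpha. Columns of Q sum to zero.
alpha, beta = 4.0, 1.0   # hypothetical rates
Q = np.array([[-2 * beta,  alpha,  alpha],
              [     beta, -alpha,    0.0],
              [     beta,    0.0, -alpha]])   # states ordered (0, +, -)

# Stationary distribution: right null vector of Q, normalized to sum to one.
w, V = np.linalg.eig(Q)
rho = np.real(V[:, np.argmin(np.abs(w))])
rho = rho / rho.sum()
print(rho)   # analytic: rho_0 = alpha/(alpha+2*beta), rho_± = beta/(alpha+2*beta)
```

Under this convention the analytic answer follows from detailed balance between state 0 and each ballistic state: $\rho_{0}=\alpha/(\alpha+2\beta)$ and $\rho_{\pm}=\beta/(\alpha+2\beta)$.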
To investigate how the above form of intracellular transport can lead to synaptic democracy, we consider a population of identical, noninteracting motor-complexes. Let $u(x,t)$ and $\widehat{u}(x,t)$ denote the densities of motor-complexes without and with an attached vesicle, respectively. From the reduced equations (5.18a)–(5.18b) we have
and
for $x>0$. In the population model, we have included the degradation terms γu and γû, which account for the fact that motor-complexes may become dysfunctional and no longer exchange cargo with synaptic targets. Equations (5.20a)–(5.20b) are supplemented by the following boundary conditions at $x=0$:
where $J(u)=-D\partial_{x}u+vu$ etc. That is, motor-complexes without and with cargo are injected at the somatic end $x=0$ at constant rates $J_{0}$ and $\widehat{J}_{0}$, respectively. It is important to emphasize that the injected motor-complexes are not necessarily newly synthesized in the cell body, for it has been found experimentally that motor-complexes recycle between the distal and somatic ends of the axon [96, 148]. In the case of a finite axon, we could model recycling by imposing an absorbing boundary condition at the distal end and reinjecting the distal flux into the somatic end. Since most of these complexes would be without a vesicle, this would mainly contribute to $J_{0}$. Moreover, if the axon is much longer than the range of vesicular delivery necessary to supply en passant synapses, then the effects of the absorbing boundary can be ignored, and we can treat the axon as semi-infinite. Finally, at the population level, the concentration of vesicles within the presynaptic targets is no longer constant, that is, $c=c(x,t)$ with
We have also allowed for the possibility that synaptic vesicles degrade at a rate $\gamma_{c}$.
Let us begin by considering the case $k_{-}>0$ (reversible delivery) and $\gamma_{c}=0$ (no vesicular degradation); the distribution c of presynaptic vesicles will remain bounded, provided that $J_{0}>0$. Equation (5.21) implies that, at steady state,
Then substituting equation (5.22) into the steadystate versions of equations (5.20a)–(5.20b) gives
and
Combining with equation (5.22) then yields the following result for the steadystate density of synaptic vesicles:
where
In particular, if the transport properties of the motor complex are independent of whether or not a vesicle is bound ($v=\widehat{v}$, $D=\widehat{D}$), then $\xi=\widehat{\xi}$, and we have a uniform vesicle distribution
To further explore the ability of this model to produce a democratic cargo distribution, equations (5.20a)–(5.20b) can be solved numerically for a range of parameter values. Following [20], suppose that $\gamma_{c}$ is small (relative to $k_{\pm}$) but nonzero and consider how the normalized distribution $c(x)/c(0)$ varies with $\phi\equiv k_{-}/\gamma_{c}$, which determines the proportion of vesicles that are recycled into the system after leaving the targets. Figure 19 displays the normalized concentration profiles for a variety of $k_{-}/\gamma_{c}$ values with either $J_{0}=\widehat{J}_{0}$ or $J_{0}=0$. (The domain size is taken to be sufficiently large to avoid boundary effects.) It can be seen that when $J_{0}>0$, the length scale over which nonexponential decay occurs is an increasing function of $k_{-}/\gamma_{c}$, whereas when $J_{0}=0$, the model fails to distribute cargo across a substantial region of the axon. Hence a delivery mechanism that includes recapture requires an additional component: a source of motors that are able to receive vesicles. It should be emphasized that this does not require additional motors to be synthesized in the soma; instead, motors may return to the beginning of the axon after delivering their cargo. From the perspective of synaptic democracy, it seems desirable to maximize $k_{-}$; however, increasing the recapture rate decreases the efficiency of the delivery mechanism and can result in an overall loss of vesicles due to motor degradation.
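A much-reduced version of such a numerical study can be sketched as follows. The Python script below (a minimal sketch with illustrative parameter values, not those of [20]) solves the steady-state equation for a single motor species with degradation, $0 = D u'' - v u' - \gamma u$, subject to the injection flux condition $-D u'(0) + v u(0) = J_{0}$ from the boundary conditions above, and checks the finite-difference profile against the exact decaying exponential $u(x)=u_{0}\mathrm{e}^{\lambda x}$ with $\lambda=[v-\sqrt{v^{2}+4D\gamma}]/(2D)$:

```python
import numpy as np

# Steady state of a single motor species with flux injection at x = 0:
#   0 = D u'' - v u' - gamma u,   J(u) = -D u' + v u,   J(u)(0) = J0.
# Parameter values are illustrative only.
D, v, gamma, J0 = 0.1, 0.1, 0.01, 1.0   # um^2/s, um/s, 1/s, 1/(um s)
L, N = 100.0, 1000                       # domain length, number of grid intervals
h = L / N
x = np.linspace(0.0, L, N + 1)

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
# Flux boundary condition at x = 0 (one-sided first derivative):
A[0, 0] = v + D / h
A[0, 1] = -D / h
b[0] = J0
# Interior points: centered differences for D u'' - v u' - gamma u = 0.
for i in range(1, N):
    A[i, i - 1] = D / h**2 + v / (2 * h)
    A[i, i]     = -2 * D / h**2 - gamma
    A[i, i + 1] = D / h**2 - v / (2 * h)
# Far-field absorbing condition (L is many decay lengths):
A[N, N] = 1.0
u = np.linalg.solve(A, b)

# Exact decaying solution u(x) = u0 * exp(lam * x):
lam = (v - np.sqrt(v**2 + 4 * D * gamma)) / (2 * D)
u0 = J0 / (v - D * lam)
```

The full model couples $u$, $\widehat{u}$, and $c$ through the exchange rates $k_{\pm}$, but the same finite-difference scaffolding carries over to the coupled system.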
This mechanism for synaptic democracy appears to be quite robust. For example, it can be extended to the case where each motor carries a vesicular aggregate rather than a single vesicle, assuming that only one vesicle can be exchanged with a target at any one time [13]. The effects of reversible vesicular delivery also persist when exclusion effects between motor–cargo complexes are taken into account [16] and when higher-dimensional cell geometries are considered [78].
Phase Reduction of Stochastic Hybrid Oscillators
In Sects. 2.3 and 3 we assumed that, in the adiabatic limit $\varepsilon \rightarrow0$, the resulting deterministic dynamical system exhibited bistability, and we explored how random switching of the associated PDMP for small ε can lead to noise-induced transitions between metastable states. In this section, we assume that the deterministic system supports a stable limit cycle so that the corresponding PDMP acts as a stochastic limit cycle oscillator, at least in the weak noise regime. There is an enormous literature on the analysis of stochastic limit cycle oscillators for SDEs (for recent surveys, see the reviews [3, 47, 105]). On the other hand, as far as we are aware, there has been very little numerical or analytical work on limit cycle oscillations in PDMPs. A few notable exceptions are [21, 27, 52, 89, 137]. One possible approach would be to carry out a QSS diffusion approximation of the PDMP along the lines of Sect. 2.2 and then use stochastic phase reduction methods developed for SDEs. In this section, we review an alternative, variational method that deals directly with the PDMP [21], thus avoiding additional errors arising from the diffusion approximation. Another major advantage of the variational method is that it allows us to obtain rigorous exponential bounds on the expected time to escape from a neighborhood of the limit cycle [21, 22].
Let us first briefly consider SDEs. Suppose that a deterministic smooth dynamical system $\dot{x}=F(x)$, $x \in \mathbb {R}^{d}$, supports a limit cycle $x(t)=\varPhi(\theta(t))$ of period $\varDelta _{0}$, where $\theta(t)$ is a uniformly rotating phase, $\dot{\theta}=\omega_{0}$, and $\omega_{0}=2\pi/\varDelta _{0}$. The phase is neutrally stable with respect to perturbations along the limit cycle; this reflects invariance of an autonomous dynamical system with respect to time shifts. Now suppose that the dynamical system is perturbed by weak Gaussian noise such that $dX=F(X)\,dt+\sqrt{2\varepsilon } G(X) \, dW(t)$, where $W(t)$ is a d-dimensional vector of independent Wiener processes. If the noise amplitude ε is sufficiently small relative to the rate of attraction to the limit cycle, then deviations transverse to the limit cycle are also small (up to some exponentially large stopping time). This suggests that the definition of a phase variable persists in the stochastic setting, and we can derive a stochastic phase equation by decomposing the solution to the SDE according to
with $\beta(t)$ and $v(t)$ corresponding to the phase and amplitude components, respectively. However, there is not a unique way to define the phase β, which reflects the fact that there are different ways of projecting the exact solution onto the limit cycle [7, 21, 65, 87, 147]; see Fig. 20. One well-known approach is to use the method of isochrons [47, 62, 106, 135, 136, 149]. Recently, a variational method for carrying out the amplitude–phase decomposition for SDEs has been developed, which yields exact SDEs for the amplitude and phase [22]. Within the variational framework, different choices of phase correspond to different choices of the inner product on $\mathbb {R}^{d}$. By taking an appropriately weighted Euclidean norm, the minimization scheme determines the phase by projecting the full solution onto the limit cycle using Floquet vectors; hence, in a neighborhood of the limit cycle, the phase variable coincides with the isochronal phase [7]. This has the advantage that the amplitude and phase decouple to leading order. In addition, the exact amplitude and phase equations can be used to derive strong exponential bounds on the growth of transverse fluctuations. It turns out that an analogous variational method can be applied to PDMPs [21], which will be outlined in the remainder of this section.
Suppose that the deterministic dynamical system (2.8), obtained in the adiabatic limit $\varepsilon \rightarrow0$, supports a stable periodic solution $x=\varPhi(\omega_{0} t)$ with $\varPhi(\omega _{0}t)=\varPhi(\omega_{0}[t+\varDelta _{0}])$, where $\omega_{0}=2\pi/\varDelta _{0}$ is the natural frequency of the oscillator. In the state space of the continuous variable, the solution is an isolated attractive trajectory called a limit cycle. The dynamics on the limit cycle can be described by a uniformly rotating phase such that
and $x={\varPhi}(\theta(t))$ with a 2π-periodic function Φ. Note that the phase is neutrally stable with respect to perturbations along the limit cycle; this reflects invariance of an autonomous dynamical system with respect to time shifts. By definition, Φ must satisfy the equation
Differentiating both sides with respect to θ gives
where J̅ is the 2π-periodic Jacobian matrix
One concrete example of a PDMP that supports a limit cycle oscillation in the fast switching limit is a version of the stochastic Morris–Lecar model that has been applied to sodium-based subthreshold oscillations [27, 145]; the corresponding deterministic model is given by equations (3.5). Numerical solutions of the latter are shown in Fig. 21.
The isochronal phase map has been the most popular means of decomposing the phase of stochastic oscillators evolving according to an SDE (and also studying their synchronization) [3, 47, 105]. Let $\mathscr {U}$ be the neighborhood of the limit cycle consisting of all points that eventually converge to the limit cycle under the deterministic dynamics of (2.8). The isochronal phase map $\varTheta: \mathscr {U} \to\mathbb{S}^{1}$ is defined to be the phase that a point converges to. That is, $\varTheta (y)$ is the unique α such that if $x(0) = y$ and
then $\lim_{t\to\infty} \Vert x(t) - \varPhi(\alpha+t\omega_{0}) \Vert = 0$. Hence, in a neighborhood of the deterministic limit cycle, we have
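To make this definition concrete, the isochronal phase can be computed numerically by integrating forward until the trajectory has collapsed onto the cycle and then subtracting the elapsed phase $\omega_{0}t$. The Python sketch below does this for the planar "radial isochron clock" $\dot{x}=x(1-r^{2})-\omega y$, $\dot{y}=y(1-r^{2})+\omega x$ (a standard toy model, not one of the models in the text), whose limit cycle is the unit circle and whose isochrons are exactly radial, so that $\varTheta(x,y)=\arctan(y/x)$ provides an analytic check:

```python
import numpy as np

omega = 1.0  # natural frequency; the limit cycle is the unit circle

def F(z):
    # Radial isochron clock: theta' = omega exactly, r' = r(1 - r^2).
    x, y = z
    r2 = x * x + y * y
    return np.array([x * (1 - r2) - omega * y,
                     y * (1 - r2) + omega * x])

def rk4(z, dt, T):
    # Standard fourth-order Runge-Kutta integration up to time T.
    for _ in range(int(round(T / dt))):
        k1 = F(z); k2 = F(z + 0.5 * dt * k1)
        k3 = F(z + 0.5 * dt * k2); k4 = F(z + dt * k3)
        z = z + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

def isochronal_phase(z0, T=20.0, dt=0.01):
    # Integrate long enough to reach the cycle, then remove the elapsed phase.
    zT = rk4(np.array(z0, float), dt, T)
    return (np.arctan2(zT[1], zT[0]) - omega * T) % (2 * np.pi)
```

For this model the convergence to the cycle is fast (transverse Floquet exponent $-2$), so $T=20$ is ample; for stiffer oscillators the integration time would have to be chosen accordingly.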
Now let $\alpha= \varTheta(x)$ for $x(t)$ evolving according to the PDMP (2.1), assuming for the moment that $x(t)\in \mathscr {U}$. From the chain rule of calculus it follows that the isochronal phase evolves according to the piecewise deterministic dynamics
with switching events occurring at the same times $\lbrace t_{k} \rbrace$ as $x(t)$. The gradient of the isochronal phase,
is known as the infinitesimal phase resetting curve. It can be shown that $R(\theta)$ satisfies the adjoint equation [48]
under the normalization condition
As it stands, equation (6.7) is not a closed equation for the isochronal phase, since the right-hand side depends on the full set of variables $x(t)$.
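In practice, the adjoint equation for the PRC can be solved by integrating backwards in time until transients have decayed onto the periodic solution. The following Python sketch does this for the radial isochron clock toy model introduced above (not a model from the text), for which the exact PRC is $R(\theta)=(-\sin\theta,\cos\theta)$; we assume the normalization $R(\theta)\cdot F(\varPhi(\theta))=\omega_{0}$, equivalently $R(\theta)\cdot\varPhi'(\theta)=1$:

```python
import numpy as np

omega = 1.0  # natural frequency; limit cycle is the unit circle

def jac(th):
    # Jacobian of F(x,y) = (x(1-r^2) - omega*y, y(1-r^2) + omega*x) on r = 1.
    c, s = np.cos(th), np.sin(th)
    return np.array([[-2 * c * c, -2 * c * s - omega],
                     [-2 * c * s + omega, -2 * s * s]])

def adjoint_prc(n_steps=2000, n_periods=5):
    """Integrate dR/dt = -J(omega*t)^T R backwards over several periods;
    the transverse component decays backwards in time, leaving the
    periodic adjoint solution up to an overall scale."""
    dt = 2 * np.pi / (omega * n_steps)
    f = lambda t, R: -jac(omega * t).T @ R
    R = np.array([0.5, 0.5])            # arbitrary initial condition
    for k in range(n_periods * n_steps, 0, -1):
        t = k * dt
        k1 = f(t, R); k2 = f(t - dt / 2, R - dt / 2 * k1)
        k3 = f(t - dt / 2, R - dt / 2 * k2); k4 = f(t - dt, R - dt * k3)
        R = R - dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    # Normalize so that R(0) . F(Phi(0)) = omega, with F(Phi(0)) = (0, omega):
    return R / R[1]
```

The returned vector is $R(0)$, which should agree with the exact value $(0,1)$; sampling the backward trajectory over the final period yields the full curve $R(\theta)$.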
Floquet Decomposition
Suppose that we fix a particular realization $\sigma_{T}$ of the Markov chain up to some time T, $\sigma_{T}=\{N(t),0\leq t \leq T\}$. Assume that there is a finite sequence of jump times $\{t_{1},\ldots, t_{r}\}$ within the time interval $(0,T)$, and let $n_{j}$ be the corresponding discrete state in the interval $(t_{j},t_{j+1})$ with $t_{0}=0$. Introduce the set
We wish to decompose the piecewise deterministic solution $x_{t}$ to the PDMP (2.1) for $t\in{ \mathscr {T}}$ into two components as in equation (6.1):
with $\beta_{t}$ and $v_{t}$ corresponding to the phase and amplitude components, respectively. The phase $\beta_{t}$ and amplitude $v_{t}$ evolve according to a PDMP, involving the vector field $F_{n_{j}}$ in the time intervals $(t_{j},t_{j+1})$, analogous to $x_{t}$; see Fig. 1. (It is notationally convenient to switch from $x(t)$ to $x_{t}$ etc.) However, such a decomposition is not unique unless we impose an additional mathematical constraint. We will adapt a variational principle recently introduced to analyze the dynamics of limit cycles with Gaussian noise [21]. To construct the variational principle, we first introduce an appropriate weighted norm on $\mathbb {R}^{d}$, based on a Floquet decomposition.
For any $0 \leq t$, define $\varPi(t) \in \mathbb {R}^{d\times d}$ to be the fundamental matrix for the following ODE:
where $A(t)=\overline{J}(\omega_{0} t)$. That is, $\varPi(t):= ( z_{1}(t) \,|\, z_{2}(t) \,|\, \cdots \,|\, z_{d}(t) )$, where each $z_{i}(t)$ satisfies (6.12), and $\lbrace z_{i}(0) \rbrace_{i=1}^{d}$ is an orthogonal basis for $\mathbb {R}^{d}$. Floquet theory states that there exists a diagonal matrix $\mathscr {S}=\operatorname{diag}(\nu_{1},\ldots,\nu_{d})$ whose diagonal entries are the Floquet characteristic exponents such that
with $P(\theta)$ a 2π-periodic matrix whose first column is proportional to $\varPhi'(\omega_{0}t)$, and $\nu_{1} = 0$. That is, $P(\theta)^{-1}\varPhi'(\theta) =c_{0}\mathbf{e}$ with $\mathbf{e}_{j}=\delta _{1,j}$ and $c_{0}$ an arbitrary constant; we set $c_{0}=1$ for convenience. To simplify the following notation, we will assume throughout that the Floquet multipliers are real and hence $P(\theta)$ is a real matrix; these results readily generalize to the case that $\mathscr {S}$ is complex. The limit cycle is taken to be stable, meaning that there is a constant $b > 0$ such that $\nu_{i} \leq -b$ for all $2\leq i \leq d$. Furthermore, $P^{-1}(\theta)$ exists for all θ, since $\varPi ^{-1}(t)$ exists for all t.
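These objects are straightforward to compute numerically: integrating the matrix ODE over one period gives the monodromy matrix $\varPi(\varDelta_{0})$, whose eigenvalues determine the Floquet exponents via $\nu_{i}=\ln|\mu_{i}|/\varDelta_{0}$. A Python sketch, again using the radial isochron clock toy model (for which the exponents should come out as $\nu_{1}=0$, $\nu_{2}=-2$):

```python
import numpy as np

omega = 1.0  # natural frequency of the toy oscillator

def jac(th):
    # Linearization of the radial isochron clock about the unit circle.
    c, s = np.cos(th), np.sin(th)
    return np.array([[-2 * c * c, -2 * c * s - omega],
                     [-2 * c * s + omega, -2 * s * s]])

def monodromy(n_steps=4000):
    # RK4 integration of dPi/dt = J(omega*t) Pi over one period, Pi(0) = I.
    T0 = 2 * np.pi / omega
    dt = T0 / n_steps
    f = lambda t, M: jac(omega * t) @ M
    Pi = np.eye(2)
    t = 0.0
    for _ in range(n_steps):
        k1 = f(t, Pi); k2 = f(t + dt / 2, Pi + dt / 2 * k1)
        k3 = f(t + dt / 2, Pi + dt / 2 * k2); k4 = f(t + dt, Pi + dt * k3)
        Pi = Pi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return Pi

mu = np.linalg.eigvals(monodromy())
nu = np.sort(np.log(np.abs(mu)) / (2 * np.pi / omega))  # Floquet exponents
```

The zero exponent corresponds to the neutral phase direction; the strictly negative exponent quantifies the transverse rate of attraction b appearing in the stability condition above.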
The above Floquet decomposition motivates the following weighted inner product: For any $\theta\in \mathbb {R}$, we denote the standard Euclidean dot product on $\mathbb{R}^{d}$ by $\langle\cdot, \cdot\rangle$,
and $\Vert x \Vert _{\theta}= \sqrt{\langle x,x\rangle_{\theta }}$. In the case of SDEs, it has been shown that this choice of weighting yields a leading-order separation of the phase from the amplitude and facilitates strong bounds on the growth of $v_{t}$ [21]. The former is a consequence of the fact that the matrix $P^{-1}(\theta )$ generates a coordinate transformation in which the phase in a neighborhood of the limit cycle coincides with the asymptotic phase defined using isochrons (see also [7]). In particular, we can show that the PRC $R(\theta)$ is related to the tangent vector $\varPhi'(\theta)$ according to [21]
where
Defining the Piecewise Deterministic Phase Using a Variational Principle
We can now state the variational principle for the stochastic phase $\beta_{t}$ [21]. First, we consider a variational problem for an arbitrary prescribed function $\theta_{t}$ (not to be confused with the phase on the limit cycle), which specifies the weighted Euclidean norm. Given $\theta_{t}$, we determine $\beta_{t}$ for $t \in{ \mathscr {T}}$ by requiring $\beta_{t}=\varphi_{t}(\theta_{t})$, where $\varphi_{t}(\theta_{t})$ is a local minimum of the following variational problem:
with ${\mathscr {N}} (\varphi_{t}(\theta_{t}) )$ denoting a sufficiently small neighborhood of $\varphi_{t} (\theta_{t} )$. The minimization scheme is based on the orthogonal projection of the solution onto the limit cycle with respect to the weighted Euclidean norm. We will derive an exact (implicit) equation for $\beta_{t}$ (up to some stopping time) by considering the first derivative
At the minimum,
We then determine $\theta_{t}$ (and hence $\beta_{t}$) selfconsistently by imposing the condition $\theta_{t} = \varphi_{t}(\theta_{t})=\beta_{t}$. It follows that the stochastic phase $\beta_{t}$ satisfies the implicit equation
It will be seen that, up to a stopping time τ, there exists a unique continuous solution to this equation. Define $\mathfrak {M}(z,\varphi) \in \mathbb {R}$ as
Assume that initially $\mathfrak {M}(u_{0},\beta_{0})>0$. We then seek a PDMP for $\beta_{t}$ that holds for all times less than the stopping time
The implicit function theorem guarantees that a unique continuous $\beta_{t}$ exists until this time.
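In practice, the implicit equation for the variational phase can be solved at each time step by Newton's method. The Python sketch below does this for the radial isochron clock toy model, for which the Floquet matrix $P(\theta)$ is orthogonal, so the weighted norm reduces to the Euclidean one; the variational phase of a point near the unit circle should then agree with its isochronal phase $\arctan(y/x)$:

```python
import numpy as np

# Limit cycle of the toy model: the unit circle, Phi(beta) = (cos, sin).
Phi  = lambda b: np.array([np.cos(b), np.sin(b)])
dPhi = lambda b: np.array([-np.sin(b), np.cos(b)])

def variational_phase(x, b0, tol=1e-12, max_iter=50):
    """Solve G(x, beta) = <x - Phi(beta), Phi'(beta)> = 0 by Newton's method,
    starting from a guess b0 (e.g. the phase at the previous time step)."""
    b = b0
    for _ in range(max_iter):
        r = x - Phi(b)
        g = r @ dPhi(b)
        # dG/dbeta = <x - Phi, Phi''> - <Phi', Phi'>, with Phi'' = -Phi here:
        dg = -(r @ Phi(b)) - dPhi(b) @ dPhi(b)
        step = g / dg
        b -= step
        if abs(step) < tol:
            break
    return b
```

Tracking $\beta_{t}$ along a trajectory amounts to calling this solver after each update of $x_{t}$, warm-started from the previous phase; the Newton iteration is well defined as long as $\mathfrak {M}(v_{t},\beta_{t})>0$, which is precisely the stopping-time condition above.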
To derive the PDMP for $\beta_{t}$, we consider the equation
with $x_{t}$ evolving according to the PDMP (2.1). From the definition of $\mathscr {G}(x_{t},\beta_{t})$ it follows that
Rearranging, we find that the phase $\beta_{t}$ evolves according to the exact, but implicit, PDMP
with $n=n_{j}$ for $t \in(t_{j},t_{j+1})$. Finally, recalling that the amplitude term $v_{t}$ satisfies $\sqrt{ \varepsilon }v_{t}=x_{t}-\varPhi_{\beta_{t}}$, we have
Weak Noise Limit
Equation (6.24) is a rigorous, exact implicit equation for the phase $\beta_{t}$. We can derive an explicit equation for $\beta_{t}$ by carrying out a perturbation analysis in the weak noise limit. Let $0 <\varepsilon \ll1$ and set $x_{t}=\varPhi(\beta_{t})$ on the righthand side of (6.24), that is, $v_{t}=0$. Writing $\beta_{t}\approx\theta _{t}$, we have the piecewise deterministic phase equation
after using ${\mathfrak {M}}(\varPhi(\theta),\theta)=\mathfrak {M}_{0}$ and equation (6.14). The last line follows from the observation
Hence, a phase reduction of the PDMP (2.1) yields a PDMP for the phase $\theta_{t}$. Of course, analogously to the phase reduction of SDEs, there are errors due to the fact that we have ignored $O(\varepsilon )$ terms arising from amplitudephase coupling; see below. This leads to deviations of the phase $\theta_{t}$ from the exact variational phase $\beta_{t}$ over $O(1/\varepsilon )$ timescales.
In Fig. 22, we show results of numerical simulations of the stochastic ML model for $N=10$ and $\varepsilon =0.01$ with other parameters as in Fig. 21. We compare solutions of the explicit phase equation (6.26) with the exact phase defined using the variational principle; see Eq. (6.24). We also show sample trajectories for $(v,w)$. It can be seen that initially the phases are very close and then very slowly drift apart as noise accumulates. The diffusive nature of the drift in both phases can be clearly seen, with the typical deviation of the phase from $\omega_{0} t$ increasing in time.
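The qualitative picture can be reproduced with a toy PDMP (not the stochastic ML model itself): switch at rate $1/\varepsilon$ between two planar vector fields that differ only in their rotation rate $\omega\pm a$, so that the averaged system is the radial isochron clock with frequency ω. For fast switching, the trajectory hugs the unit circle while the winding phase slowly diffuses about $\omega t$. A hedged Python sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
omega, a, eps = 1.0, 0.5, 1e-3   # mean frequency, frequency jump, switching parameter
dt, T = 1e-4, 10.0               # Euler step and total simulation time

def F(z, n):
    # Two vector fields differing only in rotation rate; their average
    # (the fast-switching limit) is the radial clock with frequency omega.
    w = omega + a if n == 0 else omega - a
    x, y = z
    r2 = x * x + y * y
    return np.array([x * (1 - r2) - w * y, y * (1 - r2) + w * x])

z = np.array([1.0, 0.0])   # start on the limit cycle
n = 0                      # discrete state of the two-state Markov chain
theta = 0.0                # unwrapped winding phase
for _ in range(int(round(T / dt))):
    z_new = z + dt * F(z, n)
    # Accumulate the signed angle between successive positions:
    theta += np.arctan2(z[0] * z_new[1] - z[1] * z_new[0], z @ z_new)
    z = z_new
    if rng.random() < dt / eps:    # symmetric switching, rate 1/eps per state
        n = 1 - n
```

Since both fields share the same radial dynamics, the amplitude stays pinned to the cycle, while the accumulated phase deviates from $\omega T$ by an $O(\sqrt{\varepsilon T})$ random amount, consistent with the diffusive phase drift seen in Fig. 22.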
Decay of Amplitude Vector
If we are interested in higher-order contributions to the phase equation, then it is necessary to consider the coupling between the phase and amplitude in both the continuous dynamics and the discrete switching process. Hence, the phase equation (6.26) will only be a reasonable approximation for small ε if the dynamics remains within some attracting neighborhood of the limit cycle, that is, the amplitude remains small. Since the amplitude $v_{t}$ satisfies $\sqrt{ \varepsilon }v_{t}=x_{t}-\varPhi_{\beta_{t}}$, we have
Now define $w_{t} = \sqrt{ \varepsilon }P(\beta_{t})^{-1}v_{t}$. Using the fact that $\dot{P}\omega_{0} = J(t)P(t)-P(t)\mathscr {S}$, we find that
In the fast switching limit (as $\varepsilon \to0$), we can show that $\Vert w_{t} \Vert ^{2}$ decays to leading order [21]. That is, there exists a constant C such that the probability that the time to leave an $O(a)$ neighborhood of the limit cycle is less than T scales as $T\exp (-{Ca}/{\varepsilon } )$. An interesting difference between this bound and the corresponding one obtained for SDEs [22] is that in the latter case the bound is of the form $T\exp (-{Cba}/{\varepsilon } )$, where b is the rate of decay toward the limit cycle. In other words, in the SDE case, the bound remains useful even for larger ε, as long as $b\varepsilon ^{-1} \gg1$, that is, as long as the decay toward the limit cycle dominates the noise. However, this no longer holds in the PDMP case: if ε is large, then the most likely way for the system to escape the limit cycle is to stay in one particular discrete state for too long without jumping, and the time spent in a given state is not particularly affected by b (in most cases).
Synchronization of Hybrid Oscillators
As we have outlined previously, it is possible to apply phase reduction techniques to PDMPs that support a limit cycle in the fast switching limit [21]. One of the important consequences of this reduction is that it provides a framework for studying the synchronization of a population of PDMP oscillators, either through direct coupling or via a common noise source. In the case of SDEs, there has been considerable recent interest in noise-induced phase synchronization [47, 62, 106, 135, 136, 149]. This concerns the observation that a population of oscillators can be synchronized by a randomly fluctuating external input applied globally to all of the oscillators, even if there are no interactions between the oscillators. Evidence for such an effect has been found in experimental studies of neural oscillations in the olfactory bulb [59] and the synchronization of synthetic genetic oscillators [151]. A related phenomenon is the reproducibility of a dynamical system response when repetitively driven by the same fluctuating input, even though initial conditions vary across trials. One example is the spike-time reliability of single neurons [60, 98].
Most studies of noise-induced synchronization take the oscillators to be driven by common Gaussian noise. Typically, phase synchronization is established by constructing the Lyapunov exponent for the dynamics of the phase difference between a pair of oscillators and averaging with respect to the noise. If the averaged Lyapunov exponent is negative definite, then the phase difference is expected to decay to zero in the large time limit, establishing phase synchronization. However, it has also been shown that common Poisson-distributed random impulses, dichotomous or telegrapher noise, and other types of noise generally induce synchronization of limit-cycle oscillators [63, 104, 107]. Consider, in particular, the case of an additive dichotomous noise signal $I(t)$ driving a population of M identical noninteracting oscillators according to the system of equations $\dot{x}_{j}=F(x_{j})+I(t)$, where $x_{j}\in \mathbb {R}^{d}$ is the state of the jth oscillator, $j=1,\ldots,M$ [104]; see Fig. 23. Here $I(t)$ switches between two values $I_{0}$ and $I_{1}$ at random times generated by a two-state Markov chain [5]. (In the case of the classical ML model, $I(t)$ could represent a randomly switching external current.) That is, $I(t)=I_{0}(1-N(t))+I_{1}N(t)$ for $N(t)\in\{0,1\}$, with the time T between switching events taken to be exponentially distributed with mean switching time τ. Suppose that each oscillator supports a stable limit cycle for each of the two input values $I_{0}$ and $I_{1}$. It follows that the internal state of each oscillator randomly jumps between the two limit cycles. Nagai et al. [104] show that in the slow switching limit (large τ), the dynamics can be described by random phase maps. Moreover, if the phase maps are monotonic, then the associated Lyapunov exponent is generally negative, and phase synchronization is stable.
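The Lyapunov exponent calculation can be illustrated with a phase-oscillator caricature $\dot{\theta}=\omega+I(t)Z(\theta)$ driven by a telegraph signal, where $Z(\theta)=\sin\theta$ is an assumed sensitivity function (this is a toy reduction, not a model from the text). Linearizing the difference between two oscillators sharing the same input gives $\lambda=\langle I(t)Z'(\theta(t))\rangle$, so the exponent can be estimated as a time average along a single trajectory. The illustrative parameters below put the on-state in a contracting (phase-pinning) regime, making the negative exponent easy to resolve:

```python
import numpy as np

rng = np.random.default_rng(7)
omega, sigma = 1.0, 2.0      # natural frequency; amplitude of the 'on' input
tau = 5.0                    # mean residence time of the telegraph signal
dt, T = 1e-3, 200.0          # Euler step and averaging time

theta, n, lam_sum = 0.3, 0, 0.0
for _ in range(int(round(T / dt))):
    I = sigma * n                                  # telegraph input I(t) in {0, sigma}
    theta += dt * (omega + I * np.sin(theta))      # common-input phase dynamics
    lam_sum += dt * I * np.cos(theta)              # integrand I(t) Z'(theta(t))
    if rng.random() < dt / tau:                    # two-state switching, rate 1/tau
        n = 1 - n
lam = lam_sum / T   # Lyapunov exponent estimate; negative => synchronization
```

A negative value of `lam` indicates that the phase difference between two oscillators receiving the same telegraph input contracts on average, the phase-model analog of the monotone-random-phase-map criterion of Nagai et al. [104].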
More generally, let $N(t) \in\varGamma\equiv\{0,\ldots,N_{0}1\}$ denote the state of a randomly switching environment. When the environmental state is $N(t)=n$, each oscillator $x_{i}(t)$ evolves according to the piecewise deterministic differential equation $\dot{x}_{i}=F_{n}(x_{i})$ for $i=1,\ldots,M$. The additive dichotomous noise case is recovered by taking $N_{0}=2$ and $F_{n}(x)=F(x)+I_{n}$. In the slow switching limit, we can generalize the approach of Nagai et al. [104] by assuming that each of the vector fields $F_{n}(x_{i})$, $n\in\varGamma$, supports a stable limit cycle and constructing the associated random phase maps. Here we briefly discuss the fast switching regime, assuming that in the adiabatic limit $\varepsilon \rightarrow0$, the resulting deterministic system $\dot {x}_{i}=\overline{F}(x_{i})$ supports a stable limit cycle. Since there is no coupling or remaining external drive to the oscillators in this limit, their phases are uncorrelated. This then raises the issue as to whether or not phase synchronization occurs when $\varepsilon >0$.
Again, one approach would be to carry out a QSS analysis along the lines of Sect. 2.2, in which each oscillator is approximated by an SDE with a common Gaussian input. We could then adapt previous work on the phase reduction of stochastic limit cycle oscillators [62, 106, 135, 136] and thus establish that phase synchronization occurs under the diffusion approximation. However, the QSS approximation is only intended to be accurate over timescales that are longer than $O(\varepsilon )$. Hence, it is unclear whether or not the associated Lyapunov exponent is accurate, since it is obtained from averaging the fluctuations in the noise over infinitesimally small timescales. Therefore, it would be interesting to derive a more accurate expression for the Lyapunov exponent by working directly with an exact implicit equation for the phase dynamics such as the population analog of equation (6.24).
Conclusion
In recent years, it has become clear that stochastic switching processes are prevalent in a wide range of biological systems. Such processes are typically modeled in terms of stochastic hybrid systems, also known as PDMPs. In this review, we provided a basic introduction to stochastic hybrid systems and illustrated the theory by considering applications to cellular neuroscience. (In a companion review paper, we focus on applications to switching gene regulatory networks [14].) We showed that although the theory of stochastic hybrid systems is underdeveloped compared to SDEs and discrete Markov processes, analogous techniques can be applied, including large deviations and WKB methods, diffusion approximations, and phase reduction methods. We end by listing several outstanding issues that are worthy of further exploration.

1.
Solving the stationary version of the CK equation (2.6) for higher-dimensional stochastic hybrid systems with multiple discrete states; developing an ergodic theory of PDMPs. (See also the recent paper by Lawley et al. [4].)

2.
Calculating the Perron eigenvalue (Hamiltonian) of equation (2.39) for a wider range of models; currently, only a few exact solutions are known such as the ion channel model of Sect. 3; extending the theory of metastability to PDMPs with infinite Markov chains, where the Perron–Frobenius theorem does not necessarily hold.

3.
Developing more detailed biophysical models of the transfer of vesicles between motor complexes and synaptic targets; identifying local signaling mechanisms for synaptic targeting; incorporating the contribution of intracellular stores; coupling mRNA transport to long-term synaptic plasticity.

4.
Solving the diffusion equation with randomly switching boundary conditions when the switching of a gate depends, for example, on the local particle concentration; solving higher-dimensional boundary value problems; analyzing higher-order moments of the stochastic concentration.

5.
Analyzing the synchronization of stochastic hybrid oscillators driven by a common environmental switching process; extending the theory to take into account a partial dependence of the switching process on the continuous dynamics of each oscillator.

6.
Modeling synaptically coupled neural networks as a stochastic hybrid system, where the individual spikes of a neural population are treated as the discrete process, and the synaptic currents driving the neurons to fire correspond to the continuous process. So far, stochastic hybrid neural networks are phenomenologically based [11, 24]. Can such networks be derived from a more fundamental microscopic theory, and is there a way of distinguishing the output activity of hybrid networks from those driven, for example, by Gaussian noise?
References
 1.
Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P. Molecular biology of the cell. 5th ed. New York: Garland; 2008.
 2.
Anderson DF, Ermentrout GB, Thomas PJ. Stochastic representations of ion channel kinetics and exact stochastic simulation of neuronal dynamics. J Comput Neurosci. 2015;38:67–82.
 3.
Ashwin P, Coombes S, Nicks R. Mathematical frameworks for oscillatory network dynamics in neuroscience. J Math Neurosci. 2016;6:2.
 4.
Bakhtin Y, Hurth T, Lawley SD, Mattingly JC. Smooth invariant densities for random switching on the torus. Preprint. arXiv:1708.01390 (2017).
 5.
Bena I. Dichotomous Markov noise: exact results for out-of-equilibrium systems. Int J Mod Phys B. 2006;20:2825–88.
 6.
Blum J, Reed MC. A model for slow axonal transport and its application to neurofilamentous neuropathies. Cell Motil Cytoskelet. 1989;12:53–65.
 7.
Bonnin M. Amplitude and phase dynamics of noisy oscillators. Int J Circuit Theory Appl. 2017;45:636–59.
 8.
Bramham CR, Wells DG. Dendritic mRNA: transport, translation and function. Nat Rev Neurosci. 2007;8:776–89.
 9.
Bredt DS, Nicoll RA. AMPA receptor trafficking at excitatory synapses. Neuron. 2003;40:361–79.
 10.
Bressloff PC. Stochastic processes in cell biology. Berlin: Springer; 2014.
 11.
Bressloff PC. Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks. J Math Neurosci. 2015;5:4.
 12.
Bressloff PC. Diffusion in cells with stochastically gated gap junctions. SIAM J Appl Math. 2016;76:1658–82.
 13.
Bressloff PC. Aggregationfragmentation model of vesicular transport in neurons. J Phys A. 2016;49:145601.
 14.
Bressloff PC. Topical review: stochastic switching in biology: from genotype to phenotype. J Phys A. 2017;50:133001.
 15.
Bressloff PC, Faugeras O. On the Hamiltonian structure of large deviations in stochastic hybrid systems. J Stat Mech. 2017;033206.
 16.
Bressloff PC, Karamched B. Model of reversible vesicular transport with exclusion. J Phys A. 2016;49:345602.
 17.
Bressloff PC, Lawley SD. Escape from subcellular domains with randomly switching boundaries. Multiscale Model Simul. 2015;13:1420–45.
 18.
Bressloff PC, Lawley SD. Moment equations for a piecewise deterministic PDE. J Phys A. 2015;48:105001.
 19.
Bressloff PC, Lawley SD. Diffusion on a tree with stochastically gated nodes. J Phys A. 2016;49:245601.
 20.
Bressloff PC, Levien E. Synaptic democracy and active intracellular transport in axons. Phys Rev Lett. 2015;114:168101.
 21.
Bressloff PC, MaClaurin JN. A variational method for analyzing limit cycle oscillations in stochastic hybrid systems. Chaos. 2018;28:063105.
 22.
Bressloff PC, MaClaurin JN. A variational method for analyzing stochastic limit cycle oscillators. SIAM J Appl Math. In press 2018.
 23.
Bressloff PC, Newby JM. Stochastic models of intracellular transport. Rev Mod Phys. 2013;85:135–96.
 24.
Bressloff PC, Newby JM. Metastability in a stochastic neural network modeled as a jump velocity Markov process. SIAM J Appl Dyn Syst. 2013;12:1394–435.
 25.
Bressloff PC, Newby JM. Path integrals and large deviations in stochastic hybrid systems. Phys Rev E. 2014;89:042701.
 26.
Bressloff PC, Newby JM. Stochastic hybrid model of spontaneous dendritic NMDA spikes. Phys Biol. 2014;11:016006.
 27.
Brooks HA, Bressloff PC. Quasicycles in the stochastic hybrid Morris–Lecar neural model. Phys Rev E. 2015;92:012704.
 28.
Brown A. Slow axonal transport: stop and go traffic in the axon. Nat Rev Mol Cell Biol. 2000;1:153–6.
 29.
Brown A. Axonal transport of membranous and nonmembranous cargoes: a unified perspective. J Cell Biol. 2003;160:817–21.
 30.
Buckwar E, Riedler MG. An exact stochastic hybrid model of excitable membranes including spatiotemporal evolution. J Math Biol. 2011;63:1051–93.
 31.
Bukauskas FK, Verselis VK. Gap junction channel gating. Biochim Biophys Acta. 2004;1662:42–60.
 32.
Chow CC, White JA. Spontaneous action potentials due to channel fluctuations. Biophys J. 1996;71:3013–21.
 33.
Coggan JS, Bartol TM, Esquenazi E, Stiles JR, Lamont S, Martone ME, Berg DK, Ellisman MH, Sejnowski TJ. Evidence for ectopic neurotransmission at a neuronal synapse. Science. 2005;309:446–51.
 34.
Collinridge GL, Isaac JTR, Wang YT. Receptor trafficking and synaptic plasticity. Nat Rev Neurosci. 2004;5:952–62.
 35.
Connors BW, Long MA. Electrical synapses in the mammalian brain. Annu Rev Neurosci. 2004;27:393–418.
 36.
Damm EM, Pelkmans L. Systems biology of virus entry in mammalian cells. Cell Microbiol. 2006;8:1219–27.
 37.
Davis MHA. Piecewisedeterministic Markov processes: a general class of nondiffusion stochastic models. J R Stat Soc, Ser B, Methodol. 1984;46:353–88.
 38.
de Vos KJ, Grierson AJ, Ackerley S, Miller CCJ. Role of axonal transport in neurodegenerative diseases. Annu Rev Neurosci. 2008;31:151–73.
 39.
Dembo A, Zeitouni O. Large deviations: techniques and applications. 2nd ed. New York: Springer; 2004.
 40.
Doi M. Second quantization representation for classical manyparticle systems. J Phys A. 1976;9:1465–77.
 41.
Doi M. Stochastic theory of diffusion controlled reactions. J Phys A. 1976;9:1479–95.
 42.
Driver JW, Rodgers AR, Jamison DK, Das RK, Kolomeisky AB, Diehl MR. Coupling between motor proteins determines dynamic behavior of motor protein assemblies. Phys Chem Chem Phys. 2010;12:10398–405.
 43.
Dykman MI, Mori E, Ross J, Hunt PM. Large fluctuations and optimal paths in chemical kinetics. J Chem Phys A. 1994;100:5735–50.
 44.
Dynes JL, Steward O. Dynamics of bidirectional transport of arc mRNA in neuronal dendrites. J Comp Neurol. 2007;500:433–47.
 45.
Elgart V, Kamenev A. Rare event statistics in reaction–diffusion systems. Phys Rev E. 2004;70:041106.
 46.
Ermentrout GB. Simplifying and reducing complex models. In: Computational modeling of genetic and biochemical networks. Cambridge: MIT Press; 2001. p. 307–23.
 47.
Ermentrout GB. Noisy oscillators. In: Laing CR, Lord GJ, editors. Stochastic methods in neuroscience. Oxford: Oxford University Press; 2009.
 48.
Ermentrout GB, Terman D. Mathematical foundations of neuroscience. New York: Springer; 2010.
 49.
Escudero C, Kamenev A. Switching rates of multistep reactions. Phys Rev E. 2009;79:041149.
 50.
Evans WJ, Martin PE. Gap junctions: structure and function. Mol Membr Biol. 2002;19:121–36.
 51.
Faggionato A, Gabrielli D, Crivellari M. Averaging and large deviation principles for fully-coupled piecewise deterministic Markov processes and applications to molecular motors. Markov Process Relat Fields. 2010;16:497–548.
 52.
Feng H, Han B, Wang J. Landscape and global stability of nonadiabatic and adiabatic oscillations in a gene network. Biophys J. 2012;102:1001–10.
 53.
Feng J, Kurtz TG. Large deviations for stochastic processes. Providence: Am. Math. Soc.; 2006.
 54.
Fox RF, Lu YN. Emergent collective behavior in large numbers of globally coupled independent stochastic ion channels. Phys Rev E. 1994;49:3421–31.
 55.
Freidlin MI, Wentzell AD. Random perturbations of dynamical systems. New York: Springer; 1998.
 56.
Friedman A, Craciun G. A model of intracellular transport of particles in an axon. J Math Biol. 2005;51:217–46.
 57.
Friedman A, Craciun G. Approximate traveling waves in linear reaction-hyperbolic equations. SIAM J Math Anal. 2006;38:741–58.
 58.
Fuxe K, Dahlstrom AB, Jonsson G, Marcellino D, Guescini M, Dam M, Manger P, Agnati L. The discovery of central monoamine neurons gave volume transmission to the wired brain. Prog Neurobiol. 2010;90:82–100.
 59.
Galan RF, Ermentrout GB, Urban NN. Optimal time scale for spiketime reliability: theory, simulations and experiments. J Neurophysiol. 2008;99:277–83.
 60.
Galan RF, FourcaudTrocme N, Ermentrout GB, Urban NN. Correlationinduced synchronization of oscillations in olfactory bulb neurons. J Neurosci. 2006;26:3646–55.
 61.
Gardiner CW. Handbook of stochastic methods. 4th ed. Berlin: Springer; 2009.
 62.
Goldobin DS, Pikovsky A. Synchronization and desynchronization of selfsustained oscillators by common noise. Phys Rev E. 2005;71:045201.
 63.
Goldobin DS, Teramae JN, Nakao H, Ermentrout GB. Dynamics of limit-cycle oscillators subject to general noise. Phys Rev Lett. 2010;105:154101.
 64.
Goldwyn JH, Shea-Brown E. The what and where of adding channel noise to the Hodgkin–Huxley equations. PLoS Comput Biol. 2011;7:e1002247.
 65.
Gonze D, Halloy J, Gaspard P. Biochemical clocks and molecular noise: theoretical study of robustness factors. J Chem Phys. 2002;116:10997–11010.
 66.
Goodenough DA, Paul DL. Gap junctions. Cold Spring Harb Perspect Biol. 2009;1:a002576.
 67.
Gumy LF, Hoogenraad CC. Local mechanisms regulating selective cargo entry and long-range trafficking in axons. Curr Opin Neurobiol. 2018;51:23–8.
 68.
Gumy LF, Katrukha EA, Grigoriev I, Jaarsma D, Kapitein LC, Akhmanova A, Hoogenraad CC. MAP2 defines a pre-axonal filtering zone to regulate KIF1 versus KIF5-dependent cargo. Neuron. 2017;94:347–62.
 69.
Hanggi P, Grabert H, Talkner P, Thomas H. Bistable systems: master equation versus Fokker–Planck modeling. Phys Rev A. 1984;29:371–8.
 70.
Hanggi P, Talkner P, Borkovec M. Reaction rate theory: fifty years after Kramers. Rev Mod Phys. 1990;62:251–341.
 71.
Hausser M. Synaptic function: dendritic democracy. Curr Biol. 2001;11:R10–R12.
 72.
Henley JM, Barker EA, Glebov OO. Routes, destinations and delays: recent advances in AMPA receptor trafficking. Trends Neurosci. 2011;34:258–68.
 73.
Hillen T, Othmer H. The diffusion limit of transport equations derived from velocityjump processes. SIAM J Appl Math. 2000;61:751–75.
 74.
Hillen T, Swan A. The diffusion limit of transport equations in biology. In: Preziosi L, et al., editors. Mathematical models and methods for living systems. 2016. p. 3–129.
 75.
Hinch R, Chapman SJ. Exponentially slow transitions on a Markov chain: the frequency of calcium sparks. Eur J Appl Math. 2005;16:427–46.
 76.
Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol. 1952;117:500–44.
 77.
Jung P, Brown A. Modeling the slowing of neurofilament transport along the mouse sciatic nerve. Phys Biol. 2009;6:046002.
 78.
Karamched B, Bressloff PC. Effects of geometry on reversible vesicular transport. J Phys A. 2017;50:055601.
 79.
Karmakar R, Bose I. Graded and binary responses in stochastic gene expression. Phys Biol. 2004;1:197–204.
 80.
Keener JP, Newby JM. Perturbation analysis of spontaneous action potential initiation by stochastic ion channels. Phys Rev E. 2011;84:011918.
 81.
Keener JP, Sneyd J. Mathematical physiology I: cellular physiology. 2nd ed. New York: Springer; 2009.
 82.
Kelleher RL, Govindarajan A, Tonegawa S. Translational regulatory mechanisms in persistent forms of synaptic plasticity. Neuron. 2004;44:59–73.
 83.
Kepler TB, Elston TC. Stochasticity in transcriptional regulation: origins, consequences, and mathematical representations. Biophys J. 2001;81:3116–36.
 84.
Kifer Y. Large deviations and adiabatic transitions for dynamical systems and Markov processes in fully coupled averaging. Mem Am Math Soc. 2009;201:944.
 85.
Knessl C, Matkowsky BJ, Schuss Z, Tier C. An asymptotic theory of large deviations for Markov jump processes. SIAM J Appl Math. 1985;46:1006–28.
 86.
Knowles RB, Sabry JH, Martone ME, Deerinck TJ, Ellisman MH, Bassell GJ, Kosik KS. Translocation of RNA granules in living neurons. J Neurosci. 1996;16:7812–20.
 87.
Koeppl H, Hafner M, Ganguly A, Mehrotra A. Deterministic characterization of phase noise in biomolecular oscillators. Phys Biol. 2011;8:055008.
 88.
Kosik KS, Joachim CL, Selkoe DJ. Microtubule-associated protein tau (tau) is a major antigenic component of paired helical filaments in Alzheimer disease. Proc Natl Acad Sci USA. 1986;83:4044–8.
 89.
Labavic D, Nagel H, Janke W, MeyerOrtmanns H. Caveats in modeling a common motif in genetic circuits. Phys Rev E. 2013;87:062706.
 90.
Lawley SD. Boundary value problems for statistics of diffusion in a randomly switching environment: PDE and SDE perspectives. SIAM J Appl Dyn Syst. 2016;15:1410–33.
 91.
Lawley SD, Best J, Reed MC. Neurotransmitter concentrations in the presence of neural switching in one dimension. Discrete Contin Dyn Syst, Ser B. 2016;21:2255–73.
 92.
Lawley SD, Mattingly JC, Reed MC. Stochastic switching in infinite dimensions with applications to random parabolic PDEs. SIAM J Math Anal. 2015;47:3035–63.
 93.
Li Y, Jung P, Brown A. Axonal transport of neurofilaments: a single population of intermittently moving polymers. J Neurosci. 2012;32:746–58.
 94.
Lu T, Shen T, Zong C, Hasty J, Wolynes PG. Statistics of cellular signal transduction as a race to the nucleus by multiple random walkers in compartment/phosphorylation space. Proc Natl Acad Sci USA. 2006;103:16752–7.
 95.
Maas C, Belgardt D, Lee HK, Heisler FF, LappeSiefke C, Magiera MM, van Dijk J, Hausrat TJ, Janke C, Kneussel M. Synaptic activation modifies microtubules underlying transport of postsynaptic cargo. Proc Natl Acad Sci USA. 2009;106:8731–6.
 96.
Maeder CI, SanMiguel A, Wu EY, Lu H, Shen K. In vivo neuronwide analysis of synaptic vesicle precursor trafficking. Traffic. 2014;15:273–91.
 97.
Maier RS, Stein DL. Limiting exit location distribution in the stochastic exit problem. SIAM J Appl Math. 1997;57:752–90.
 98.
Mainen ZF, Sejnowski TJ. Reliability of spike timing in neocortical neurons. Science. 1995;268:1503–6.
 99.
Matkowsky BJ, Schuss Z. The exit problem for randomly perturbed dynamical systems. SIAM J Appl Math. 1977;33:365–82.
 100.
Morris C, Lecar H. Voltage oscillations in the barnacle giant muscle fiber. Biophys J. 1981;35:193–213.
 101.
Muller MJI, Klumpp S, Lipowsky R. Tug-of-war as a cooperative mechanism for bidirectional cargo transport by molecular motors. Proc Natl Acad Sci USA. 2008;105:4609–14.
 102.
Muller MJI, Klumpp S, Lipowsky R. Motility states of molecular motors engaged in a stochastic tug-of-war. J Stat Phys. 2008;133:1059–81.
 103.
Naeh T, Klosek MM, Matkowsky BJ, Schuss Z. A direct approach to the exit problem. SIAM J Appl Math. 1990;50:595–627.
 104.
Nagai K, Nakao H, Tsubo Y. Synchrony of neural oscillators induced by random telegraphic currents. Phys Rev E. 2005;71:036217.
 105.
Nakao H. Phase reduction approach to synchronization of nonlinear oscillators. Contemp Phys. 2016;57:188–214.
 106.
Nakao H, Arai K, Kawamura Y. Noiseinduced synchronization and clustering in ensembles of uncoupled limit cycle oscillators. Phys Rev Lett. 2007;98:184101.
 107.
Nakao H, Arai K, Nagai K, Tsubo Y, Kuramoto Y. Synchrony of limit-cycle oscillators induced by random external impulses. Phys Rev E. 2005;72:026220.
 108.
Newby JM. Isolating intrinsic noise sources in a stochastic genetic switch. Phys Biol. 2012;9:026002.
 109.
Newby JM. Spontaneous excitability in the Morris–Lecar model with ion channel noise. SIAM J Appl Dyn Syst. 2014;13:1756–91.
 110.
Newby JM. Bistable switching asymptotics for the self-regulating gene. J Phys A. 2015;48:185001.
 111.
Newby JM, Bressloff PC. Directed intermittent search for a hidden target on a dendritic tree. Phys Rev E. 2009;80:021913.
 112.
Newby JM, Bressloff PC. Quasi-steady-state reduction of molecular motor-based models of directed intermittent search. Bull Math Biol. 2010;72:1840–66.
 113.
Newby JM, Bressloff PC. Local synaptic signaling enhances the stochastic transport of motordriven cargo in neurons. Phys Biol. 2010;7:036004.
 114.
Newby JM, Bressloff PC, Keener JP. Breakdown of fast–slow analysis in an excitable system with channel noise. Phys Rev Lett. 2013;111:128101.
 115.
Newby JM, Keener JP. An asymptotic analysis of the spatially inhomogeneous velocityjump process. SIAM J Multiscale Model Simul. 2011;9:735–65.
 116.
Othmer HG, Hillen T. The diffusion limit of transport equations II: chemotaxis equations. SIAM J Appl Math. 2002;62:1222–50.
 117.
Pakdaman K, Thieullen M, Wainrib G. Fluid limit theorems for stochastic hybrid systems with application to neuron models. Adv Appl Probab. 2010;42:761–94.
 118.
Pakdaman K, Thieullen M, Wainrib G. Asymptotic expansion and central limit theorem for multiscale piecewise-deterministic Markov processes. Stoch Process Appl. 2012;122:2292–318.
 119.
Papanicolaou GC. Asymptotic analysis of transport processes. Bull Am Math Soc. 1975;81:330–92.
 120.
Paulauskas N, Pranevicius M, Pranevicius H, Bukauskas FF. A stochastic four-state model of contingent gating of gap junction channels containing two “fast” gates sensitive to transjunctional voltage. Biophys J. 2009;96:3936–48.
 121.
Peliti L. Path integral approach to birth–death processes on a lattice. J Phys. 1985;46:1469–83.
 122.
Pinsky MA. Lectures on random evolution. Singapore: World Scientific; 1991.
 123.
Reed MC, Venakides S, Blum JJ. Approximate traveling waves in linear reaction-hyperbolic equations. SIAM J Appl Math. 1990;50:167–80.
 124.
Roma DM, O’Flanagan RA, Ruckenstein AE, Sengupta AM. Optimal path to epigenetic switching. Phys Rev E. 2005;71:011902.
 125.
Rook MS, Lu M, Kosik KS. CaMKIIalpha 3′ untranslated region-directed mRNA translocation in living neurons: visualization by GFP linkage. J Neurosci. 2000;20:6385–93.
 126.
Rumsey CC, Abbott LF. Synaptic democracy in active dendrites. J Neurophysiol. 2006;96:2307–18.
 127.
Saez JC, Berthoud VM, Branes MC, Martinez AD, Beyer EC. Plasma membrane channels formed by connexins: their regulation and functions. Physiol Rev. 2003;83:1359–400.
 128.
Sasai M, Wolynes PG. Stochastic gene expression as a manybody problem. Proc Natl Acad Sci. 2003;100:2374–9.
 129.
Schnitzer M, Visscher K, Block S. Force production by single kinesin motors. Nat Cell Biol. 2000;2:718–23.
 130.
Schuss Z. Theory and applications of stochastic processes: an analytical approach. Applied mathematical sciences, vol. 170. New York: Springer; 2010.
 131.
Smiley MW, Proulx SR. Gene expression dynamics in randomly varying environments. J Math Biol. 2010;61:231–51.
 132.
Smith GD. Modeling the stochastic gating of ion channels. In: Fall C, Marland ES, Wagner JM, Tyson JJ, editors. Computational cell biology. Chap. 11. New York: Springer; 2002.
 133.
Steward O, Schuman EM. Protein synthesis at synaptic sites on dendrites. Annu Rev Neurosci. 2001;24:299–325.
 134.
Telley IA, Bieling P, Surrey T. Obstacles on the microtubule reduce the processivity of kinesin-1 in a minimal in vitro system and in cell extract. Biophys J. 2009;96:3341–53.
 135.
Teramae JN, Nakao H, Ermentrout GB. Stochastic phase reduction for a general class of noisy limit cycle oscillators. Phys Rev Lett. 2009;102:194102.
 136.
Teramae JN, Tanaka D. Robustness of the noiseinduced phase synchronization in a general class of limit cycle oscillators. Phys Rev Lett. 2004;93:204103.
 137.
Thomas PJ, Lindner B. Asymptotic phase for stochastic oscillators. Phys Rev Lett. 2014;113:254101.
 138.
Touchette H. The large deviation approach to statistical mechanics. Phys Rep. 2009;478:1–69.
 139.
Triller A, Choquet D. Surface trafficking of receptors between synaptic and extrasynaptic membranes: and yet they do move! Trends Neurosci. 2005;28:133–9.
 140.
Vershinin M, Carter BC, Razafsky DS, King SJ, Gross SP. Multiple-motor based transport and its regulation by tau. Proc Natl Acad Sci USA. 2007;104:87–92.
 141.
Visscher K, Schnitzer M, Block S. Single kinesin molecules studied with a molecular force clamp. Nature. 1999;400:184–9.
 142.
Wang L, Ho CL, Sun D, Liem RK, Brown A. Rapid movement of axonal neurofilaments interrupted by prolonged pauses. Nat Cell Biol. 2000;2:137–41.
 143.
Weber MF, Frey E. Master equations and the theory of stochastic path integrals. Rep Prog Phys. 2017;80:046601.
 144.
Welte MA. Bidirectional transport along microtubules. Curr Biol. 2004;14:R525–37.
 145.
White JA, Budde T, Kay AR. A bifurcation analysis of neuronal subthreshold oscillations. Biophys J. 1995;69:1203–17.
 146.
White JA, Rubinstein JT, Kay AR. Channel noise in neurons. Trends Neurosci. 2000;23:131–7.
 147.
Wilson D, Moehlis J. Isostable reduction of periodic orbits. Phys Rev E. 2016;94:052213.
 148.
Wong MY, Zhou C, Shakiryanova D, Lloyd TE, Deitcher DL, Levitan ES. Neuropeptide delivery to synapses by longrange vesicle circulation and sporadic capture. Cell. 2012;148:1029–38.
 149.
Yoshimura K, Arai K. Phase reduction of stochastic limit cycle oscillators. Phys Rev Lett. 2008;101:154101.
 150.
Zeiser S, Franz U, Wittich O, Liebscher V. Simulation of genetic networks modelled by piecewise deterministic Markov processes. IET Syst Biol. 2008;2:113–35.
 151.
Zhou T, Zhang J, Yuan Z, Chen L. Synchronization of genetic oscillators. Chaos. 2008;18:037126.
 152.
Zmurchok C, Small T, Ward M, Edelstein-Keshet L. Application of quasi-steady-state methods to nonlinear models of intracellular transport by molecular motors. Bull Math Biol. 2017;79:1923–78.
Acknowledgements
Not applicable.
Availability of data and materials
Data sharing not applicable to this paper as no datasets were generated or analyzed during the current study.
Funding
PCB and JMN were supported by the National Science Foundation (DMS1613048).
Author information
Contributions
Sections 1–5 are based on tutorial lectures PCB gave at ICNMS (2017). Section 6 is a review of recent research by PCB and JMN. Both authors read and approved the final manuscript.
Corresponding author
Correspondence to Paul C. Bressloff.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Not applicable.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.